
ADVANCING ASSESSMENT FOR STUDENT SUCCESS
Supporting Learning by Creating Connections Across Assessment, Teaching, Curriculum, and Cocurriculum in Collaboration With Our Colleagues and Our Students

AMY DRISCOLL, NELSON GRAFF, DAN SHAPIRO, AND SWARUP WOOD

HIGHER EDUCATION | ASSESSMENT | TEACHING | STUDENT AFFAIRS

“MOST IMPORTANT, THIS BOOK HUMANIZES ASSESSMENT.”—PEGGY L. MAKI

“With a rich storytelling and an evidence-based approach, the authors integrate assessment, teaching, and curriculum through illustrative institutional case studies. Meaningful key aspects of the book include equity-based assessment, inclusion of student voice, and advancing reflection. The friendly, down-to-earth presentation allows the reader to apply tailored effective strategies in meeting cultural and institutional needs in advancing assessment.”
—LAURIE G. DODGE, Vice Chancellor of Institutional Assessment and Planning at Brandman University, and Founding Chair of the Board of Directors for C-BEN

“The field of assessment continues to form bridges between student affairs and academic affairs as we collaboratively focus on student success. Assessment and learning outcomes ultimately speak to the student story, their journey toward personal and professional growth. Our coauthors speak from their tremendous professional experience and truly genuine heart for all students, and I, for one, think that is a beautiful thing.”
—KEVIN THOMAS GRANT, Director of Student Affairs Assessment and Research, California Polytechnic University

“Whether a faculty member, academic administrator, staff, or student; whether new to assessment or someone who has been involved in assessment for years—whatever role you might play within an institution of higher education, this book is a breath of fresh air that provides a revitalized pathway to ensure that assessment processes and practices are learner-centered and collaboratively driven conversations on educational design. What a true delight to read this book!”
—NATASHA A. JANKOWSKI, Former Executive Director of the National Institute for Learning Outcomes Assessment

THE AUTHORS
AMY DRISCOLL is a national and international consultant on assessment, and formerly the founding director of teaching, learning, and assessment (retired) at California State University, Monterey Bay (CSUMB). NELSON GRAFF is professor and director of Communication Across the Disciplines at CSUMB, supporting faculty in teaching reading and writing in their classes. DAN SHAPIRO is currently interim associate vice president for academic programs and dean of University College and Graduate Studies at CSUMB. SWARUP WOOD is professor of chemistry and currently serves as interim director of general education and coordinator of the First Year Seminar at CSUMB.

Stylus Publishing, LLC.
22883 Quicksilver Drive, Sterling, VA 20166-2019
www.Styluspub.com


Advance Praise for Advancing Assessment for Student Success

“A very helpful book filled with practical tools to immediately apply, accompanied with wise advice on how to apply them. In addition to all of the useful tips and resources is some much-needed inspiration from assessment scholars and practitioners fully invested in leveraging assessment to optimize student success.”—Marilee Bresciani Ludvik, Professor and Chair, Educational Leadership and Policy Studies, University of Texas Arlington

“The authors are luminaries in assessment circles, and once again bring to bear their vast experience in developing assessment principles and techniques, as well as their expertise in training others in these areas. Advancing Assessment for Student Success does, in fact, live up to its name, and higher education is the better for it.”—Rosa Belerique, Vice President, Institutional Research and Effectiveness, 2021 President of the California Association for Institutional Research (CAIR); and Sonny Calderon, Vice President, Academic Affairs, New York Film Academy

“With a rich storytelling and evidence-based approach, the authors integrate assessment, teaching, and curriculum through illustrative institutional case studies. Meaningful key aspects of the book include equity-based assessment, inclusion of student voice, and advancing reflection. The friendly, down-to-earth presentation allows the reader to apply tailored effective strategies in meeting cultural and institutional needs in advancing assessment. This is a must-read for universities who wish to thoughtfully and purposefully improve assessment through collaboration, emphasizing the ‘why’ and the ‘what’ of assessment.”—Laurie G. Dodge, Vice Chancellor of Institutional Assessment and Planning at Brandman University; Founding Chair of the Board of Directors for C-BEN

“The field of assessment continues to form bridges between student affairs and academic affairs as we collaboratively focus on student success. Assessment highlights the stories, needs, and successes of our most vulnerable student populations while providing evidence and guidance for all of our diversity, equity, and inclusion efforts within the higher educational landscape. Driscoll, Wood, Shapiro, and Graff masterfully weave the oft-missing humanity of assessment into their text. Assessment and learning outcomes ultimately speak to the student story, their journey toward personal and professional growth. Our coauthors speak from their tremendous professional experience and truly genuine heart for all students, and I, for one, think that is a beautiful thing.”—Kevin Thomas Grant, Director of Student Affairs Assessment and Research; Part-Time Faculty, Higher Education Counseling/Student Affairs, California Polytechnic University

“Whether a faculty member, academic administrator, staff, or student; whether new to assessment or someone who has been involved in assessment for years—whatever role you might play within an institution of higher education, this book is a breath of fresh air that provides a revitalized pathway to ensure that assessment processes and practices are learner-centered and collaboratively driven conversations on educational design. What a true delight to read this book! There is something in this book for everyone thanks to the authors providing examples, strategies, processes, practices, and reflections on how to take the work of fostering student success through learning to the next level. Through rich conversations with the reader, the book mirrors and models the collaborative potential of bringing faculty, assessment, student affairs, and staff together to truly deliver on the promise of education by laying out the types of conversations that should be unfolding within our institutions. This book is a must-read, showcasing the power collaboration and conversations can have on everyday lived experiences in teaching and learning, which in turn can transform institutions into learning systems.”—Natasha A. Jankowski, Former Executive Director of the National Institute for Learning Outcomes Assessment

“A stark contrast to assessment processes that are aimed primarily at demonstrating fulfillment of externally established compliance standards, the authors’ learner-centered assessment process is aimed primarily at promoting individual students’ progress toward achieving course-, program-, and institution-level learning outcomes along their educational pathways, as well as engaging students in assuming agency for their learning. Written by seasoned educators who share a commitment to integrating assessment into the processes of teaching and learning that stretch across students’ studies, authors Amy Driscoll, Swarup Wood, Dan Shapiro, and Nelson Graff provide readers principles, practices, processes, strategies, and campus scenarios and case studies that deepen and broaden a shared commitment to all learners’ success across the broad institutional system that contributes to their learning. Most important, this book achieves what, I believe, has always been our challenge: to humanize assessment—to uncover the challenges our individual students face and then develop or identify interventions, strategies, or practices to assist each student to persist and achieve along the trajectory of that individual’s educational pathways. Students represented in numbers or percentages on assessment reports do not humanize them, nor do quantified data or band-aid solutions reflect effective educators’ efforts to promote student success. I thus also call upon accreditors and other reporting entities focused on assuring our institutions advance all students to attain equitable outcomes to read this book to inform future assessment reporting guidelines or standards that provide institutions opportunities to document the realities that underlie their ongoing efforts to prepare current and future students who reflect our national demographics and who will shape our future. This book should speak to and offer inspiration to so many in higher education, from faculty to faculty developers, student affairs professionals, and administrators.”—From the Foreword by Peggy L. Maki, Education Consultant Specializing in Assessing Student Learning

“This book is a must-read for not only assessment professionals but also anyone who cares about learning, teaching, and student success. In addition to thoughtful conversations about the current assessment landscape, the book is rich with detailed, authentic examples that can be easily applied to improving assessment at multiple levels. The authors present a strong argument that the integration of assessment into learning and teaching should be done through the lens of equity, engagement, and continuous improvement. Their genuine care for student learning, extensive experiences, diverse perspectives, and approachable narrative style make this book an enlightening and enjoyable read.”—Su Swarat, Associate Vice President for Institutional Effectiveness & Accreditation Liaison Officer, California State University, Fullerton

“This insightful book recognizes our students as invaluable partners in the effort to ensure their success—what an intuitive yet significant offering to the assessment community and everyone who strives to improve teaching and learning! The authors share knowledge and experience generously, as mentors, with a reassuring and encouraging voice. As an important addition to the literature on assessment, this volume arrives at a moment when it’s critical that our students be heard and understood.”—Kelly Wahl, Director of Student Achievement, UCLA Division of Undergraduate Education

“Driscoll and her coauthors clearly get the vital need to prepare a cadre of assessment professionals for every sector of higher education, including for community colleges. Faculty know the old saying, ‘What gets assessed gets learned.’ These authors know that how it gets assessed determines how it is learned. Accreditors will particularly appreciate these practical insights since sound, integrated assessment practices are foundational when an institution sets out to demonstrate mission accomplishment and program improvement.”—Richard Winn, President (Retired), Accrediting Commission for Community and Junior Colleges, Western Association of Schools and Colleges

ADVANCING ASSESSMENT FOR STUDENT SUCCESS

ADVANCING ASSESSMENT FOR STUDENT SUCCESS
Supporting Learning by Creating Connections Across Assessment, Teaching, Curriculum, and Cocurriculum in Collaboration With Our Colleagues and Our Students

Amy Driscoll, Nelson Graff, Dan Shapiro, and Swarup Wood
Foreword by Peggy L. Maki

STERLING, VIRGINIA


COPYRIGHT © 2021 BY STYLUS PUBLISHING, LLC.

Published by Stylus Publishing, LLC.
22883 Quicksilver Drive
Sterling, Virginia 20166-2019

All rights reserved. No part of this book may be reprinted or reproduced in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, recording, and information storage and retrieval, without permission in writing from the publisher.

Library of Congress Cataloging-in-Publication Data
Names: Driscoll, Amy, author. | Graff, Nelson, author. | Shapiro, Dan, 1965- author. | Wood, Swarup, 1968- author.
Title: Advancing assessment for student success : supporting learning by creating connections across assessment, teaching, curriculum, and cocurriculum in collaboration with our colleagues and our students / Amy Driscoll, Nelson Graff, Dan Shapiro, and Swarup Wood ; foreword by Peggy L. Maki.
Description: Sterling, Virginia : Stylus Publishing, LLC., 2021. | Includes bibliographical references and index. | Summary: “This book is about student success and how to support and improve it. The authors provide rich guidance for activities ranging from everyday classroom teaching and assessment to using assessment to improve programs and entire institutions”-- Provided by publisher.
Identifiers: LCCN 2021017917 (print) | LCCN 2021017918 (ebook) | ISBN 9781620368701 (cloth) | ISBN 9781620368718 (paperback) | ISBN 9781620368725 (adobe pdf) | ISBN 9781620368732 (epub)
Subjects: LCSH: Educational evaluation. | Educational tests and measurements. | Teachers--Professional relationships. | Teacher-student relationships. | Curriculum planning. | Student participation in curriculum planning.
Classification: LCC LB2822.75 .D75 2021 (print) | LCC LB2822.75 (ebook) | DDC 379.1/58--dc23
LC record available at https://lccn.loc.gov/2021017917
LC ebook record available at https://lccn.loc.gov/2021017918

13-digit ISBN: 978-1-62036-870-1 (cloth)
13-digit ISBN: 978-1-62036-871-8 (paperback)
13-digit ISBN: 978-1-62036-872-5 (library networkable e-edition)
13-digit ISBN: 978-1-62036-873-2 (consumer e-edition)

Printed in the United States of America

All first editions printed on acid-free paper that meets the American National Standards Institute Z39-48 Standard.

Bulk Purchases
Quantity discounts are available for use in workshops and for staff development. Call 1-800-232-0223

First Edition, 2021


We dedicate this book to our colleagues—faculty, staff, and students—whose struggles and stories have inspired and informed our thinking, and to our families, for their love, tolerance, and support.

Middle-school science teacher and celebrated author Shea Serrano says, “People can sense when you’re excited about something. That’s why my writings are always a celebration of what I enjoy. And it’s contagious.” All four of us really enjoy our work in assessment and have been excited writing together about it. We hope that excitement is contagious for our readers.


CONTENTS

FOREWORD
Peggy L. Maki

PREFACE

1 ADVANCING ASSESSMENT: Currents of Movement, Eddies, and New Paths
Amy Driscoll

2 EQUITY IN ASSESSMENT: Support for All Students
Amy Driscoll

3 LEARNING OUTCOMES: Engaging Students, Staff, and Faculty
Swarup Wood

4 ALIGNED AND COHERENT ASSESSMENT, PEDAGOGY, AND CURRICULUM: Connections for Student Success
Amy Driscoll

5 UNDERSTANDING AND SUPPORTING ACHIEVEMENT: Improving Assignment Prompts and Rubrics
Nelson Graff

6 USING EVIDENCE OF STUDENT ACHIEVEMENT: Advancing Student Success
Amy Driscoll

7 ADVANCING REFLECTION: Fostering Conversations That Improve Student Success
Dan Shapiro

8 ADVANCING COMMUNICATION: Sharing Stories That Improve Student Success
Dan Shapiro

ABOUT THE AUTHORS

INDEX

FOREWORD

It is my privilege to introduce readers to Advancing Assessment for Student Success: Supporting Learning by Creating Connections Across Assessment, Teaching, Curriculum, and Cocurriculum in Collaboration With Our Colleagues and Our Students, a collection of interdependent chapters that serves as both a framework for and a guide to assist higher education leaders, administrators, faculty, and other professional educators contributing to student learning to refine or build a sustainable and collaboratively driven learner-centered assessment process.

A stark contrast to assessment processes that are aimed primarily at demonstrating fulfillment of externally established compliance standards, the authors’ learner-centered assessment process is aimed primarily at promoting individual students’ progress toward achieving course-, program-, and institution-level learning outcomes along their educational pathways, as well as engaging students in assuming agency for their learning. Written by seasoned educators who share a commitment to integrating assessment into the processes of teaching and learning that stretch across students’ studies, authors Amy Driscoll, Dan Shapiro, Nelson Graff, and Swarup Wood provide readers with principles, practices, processes, strategies, campus scenarios, and case studies that deepen and broaden a shared commitment to all learners’ success across the broad institutional system that contributes to their learning.

Identifying initially, as well as throughout the chapters, specific individuals whose work in the fields of assessment and teaching and learning informed the authors’ learner-centered assessment approach, the authors focus on the following commitments and means to achieve them:

• Getting “to know” students across an institution’s demographics to gain insight into individual students’ potential and learning challenges to promote their equitable progress toward attaining stated learning outcomes
• Developing effective learning outcomes “as organizing principles to actualize broad educational goals” through engaging conversations with faculty and staff and engaging in collaboration with students to establish their shared understanding of course-, program-, and institution-level outcomes
• Developing connections for student success across an institution through aligned and coherent pedagogies, curricula, cocurricula, and assessment methods that visibly and longitudinally support students’ attainment of desired outcomes
• Assuring alignment between assignment prompts and rubrics to improve student performance
• Using evidence of student achievement to advance students’ success, drawing on a taxonomy of practices that focus on timely analysis and use of assessment results
• Advancing institutional stakeholders’ reflection on assessment results to foster conversations leading to actions to improve student learning
• Sharing stories with various internal and external audiences for various purposes and occasions that document how students were able to progress toward attaining agreed-upon learning outcomes

Altogether, the authors’ chapters describe and illustrate how assessment can be woven into institutional culture as a continuous and shared process that enables students to achieve equitable outcomes, as well as provide evidence of the efficacy of specific educational interventions or practices that promote improvement in student learning. In contrast with externally motivated commitments to assessment, most typically characterized by sampling techniques accompanied by broad statements of “corrective actions,” chapters in this book recognize that there are no “one size fits all” solutions to address the diverse needs of our students.

Faculty and other educators who suddenly pivoted to teaching online in 2020 because of COVID-19 have likely learned that as well if they have taken advantage of technology-generated assessment capabilities built into learning management systems (LMSs) that document performance patterns for individual students in online courses or in online components of courses such as a discussion group—even in large-enrollment courses. Generation of individualized assessment data now enables educators to develop timely and tailored strategies or interventions or altogether new practices to address individual student needs. Indeed, current research on learning released in 2018 documents the necessity of identifying and responding to individual learning needs (National Academies of Sciences, Engineering, and Medicine, 2018). Why? Because as contributing researchers document, learning processes are far more complex than even heretofore reported; in addition, what and how an individual learns is shaped by numerous external factors, including one’s culture. Thus, it is only when we are willing to identify the specific barriers an individual student faces and then assist that student to address those barriers that we will be able to narrow or even close existing achievement gaps to promote our students’ success.


Most important, this book achieves what I believe has always been our challenge: to humanize assessment—to uncover the challenges our individual students face and then develop or identify interventions, strategies, or practices to assist each student to persist and achieve along the trajectory of that individual’s educational pathways. Representing students in numbers or percentages on assessment reports does not humanize them, nor do quantified data or band-aid solutions reflect effective educators’ efforts to promote student success.

I also call upon accreditors and other reporting entities focused on assuring our institutions advance all students to attain equitable outcomes to read this book to inform future assessment reporting guidelines or standards that provide institutions opportunities to document the realities that underlie their ongoing efforts to prepare current and future students who reflect our national demographics and who will shape our future. This book should speak to and offer inspiration to so many in higher education, from faculty to faculty developers, student affairs professionals, and administrators.

Reference

National Academies of Sciences, Engineering, and Medicine. (2018). How people learn II: Learners, contexts, and cultures. The National Academies Press. https://doi.org/10.17226/24783

Peggy L. Maki
Education Consultant Specializing in Assessing Student Learning


PREFACE

Invitation and Information to Guide Our Readers

This book is about student success and how to support and improve it. It takes as its point of departure that we—as faculty, assessment directors, student affairs professionals, and staff—reflect together in a purposeful and informed way about how our teaching, curricula, the cocurriculum, and assessment work in concert to support and improve student learning and success. It also requires that we do so in collaboration with our colleagues and our students for the rich insights that we gain from them.

We have written this book in a conversational vein, introducing you to key considerations as you embark on this work. We offer you a wealth of examples, testimonies, and stories that will help you understand how your peers are putting assessment into practice in ways that are meaningful to them, their institutions, and—most importantly—their students’ learning and success.

This preface invites our readers into our thinking about assessment and the reasons for the content choices of this book. Your authors are confident and enthused about the ongoing advances made in assessment to truly promote student success. We describe those advances using our own assessment experiences and those of our colleagues and their students, along with the wisdom of recognized experts. We see current practices advancing student success with influential and meaningful evidence in so many institutions. We intend to honor and encourage our colleagues to continue along the path of their rich faculty conversations about student learning.

This book provides a new conversation about assessment that advances two prominent themes:

1. It creates connections across assessment, teaching, curriculum, and cocurricular learning experiences.
2. It cultivates collaborations among our colleagues (faculty, staff, administrators) and with our students for development and use of assessment.


You will find these themes throughout the book, sometimes explicitly highlighted and, because they are core to advancing assessment for student success, at all other times implicit. Our chapters also address criticisms of assessment with descriptions of how faculty time, expertise, and intentions successfully promote and support student learning. We consider the expansion of the role of faculty and staff to one of collaboration with students significant for the creation and use of assessment results. We highlight the value of faculty and staff conversations about assessment with each other and with students. We promote and honor their reflective decisions, notably with commitments to students central to their thinking.

Our book begins with a chapter that promotes equity in the assessment practices of our faculty readers. Attention to this aspect of assessment is critical for higher education and society to truly benefit from the diversity of our student body. From there, we guide faculty through an intense understanding of learning outcomes, their purpose, and their contribution to teaching and learning. We also work to better prepare faculty for developing and using them. Our chapter describing the careful design of assignments focuses attention to their prompts and rubrics. We have learned that meaningful assignments are predictive of student success. We attend to alignment and coherence in our pedagogy, curriculum, and assessment so that students can make meaning of and connections between what they are learning. Our chapter on using student evidence for improvement offers support for doing so with resources, recommendations for a supportive culture, and exciting uses of student wisdom.

Our book finishes with two essential and timely chapters. In one, we urge faculty to prioritize reflection within their assessment practices with powerful examples and strategies for professional development. The final chapter describes appropriate, creative, and effective approaches for communicating assessment information with attention to purpose and audience. We describe student contributions to our communications as essential to our partnerships with them for assessment.

The book approaches assessment, pedagogy, and curriculum in the context of a learning systems paradigm. A learning system binds assessment together with teaching, learning, and curriculum and expands the work of supporting learning to a wide range of stakeholders, including student affairs professionals and staff. To be effective, systems should be consensus-based, aligned, learner-centered, and well communicated. At this time, we use assessment information and data to improve teaching, curriculum, and assessment itself and also to make decisions about institutional logistics, policies, and related practices. Ultimately, we are committed to improving our use of student achievement evidence to enhance and extend student learning and maximize student success.


This book is intended to serve as a conversation—one that is informative and full of guidance for everyday classroom teaching. Individual faculty are our main audience, individual faculty who engage in assessment that connects student learning in their own courses to the broader curriculum and cocurriculum. In addition, many of those faculty take on leadership roles as coordinators and members of assessment committees to guide program planning, and we offer recommendations for their work. Our ideas provide significant preparation with which to plan professional development workshops, prepare for accreditation visits, and promote institutional improvements. We will note such recommendations and provide more direction for their use. This provision is quite intentional—we regularly meet faculty who have just been assigned to a leadership role in assessment and are on a “fast track” to develop assessment expertise and professional development approaches. But even for faculty who plan to remain focused on their own courses and students, those sections of this book will help them better understand why and how assessment leaders do what they do, which in turn will make their participation in assessment more engaging and increase their expertise in facilitating student learning.

Although the chapters were individually written, they were collaboratively generated. Our collective work on assessment has been intertwined for so long that each chapter represents our collective thinking. We genuinely urge you, our reader, to try out our ideas and to stay in touch when you have examples. We are also available should you have a question or need clarity, and our emails are included in the “About the Authors” section at the back of the book. In our experiences leading workshops and campus consultations, we have been affirmed by conversations with faculty and administrators across the country and by the opportunity to engage in planning with our readers—ultimately for assessment practices that support student success. We intend to continue those relationships and will be honored to engage with you.

Please note that all royalties from this book will be donated to the Provost Student Emergency Fund at California State University, Monterey Bay.

Amy Driscoll
Nelson Graff
Dan Shapiro
Swarup Wood


1 ADVANCING ASSESSMENT
Currents of Movement, Eddies, and New Paths

Amy Driscoll

I began this first chapter by reflecting on the experience of writing about assessment more than 10 years ago. I felt both exhilaration and anxiety. My coauthor at that time, Swarup Wood, and I felt certain that there had been significant advances in assessment, that the practices we described in 2007 are much more sophisticated today, and that many more faculty and institutions are engaged in the “best practices” we had previously described. We saw assessment truly advancing! Now with new coauthors, we are setting out to push our thinking and that of our colleagues to a new level, but only if our work strengthens the movement to improve student learning. We agreed from the first that improving student success was essential to this book project.

Parker Palmer’s Wisdom

Just before opening my computer I turn to a faithful source of courage, direction, and support. Parker Palmer’s The Courage to Teach (1998) was ready for me. In talking about evaluation, Palmer said it is one of “those functions of education that upholds standards” (p. 138) but urges us to “use grades and other forms of distinction to emphasize learning instead of judging and to support collaboration instead of competition” (p. 138). He is bothered by the way our assessment represents power and the way we have been trained and accustomed to work in “utter autonomy” (p. 139). He envisions communities within classrooms and connecting faculty and students much as we will in this book. We support his subtle reminder for us to make sure that we assess what is worth assessing. Worth measuring or assessing is an essential criterion for checking our outcomes.

With apologies to Palmer, we insist on attention to our purpose for assessment; that is, we assess in order to improve our work—our teaching, our curriculum and cocurriculum, our assessment—to ultimately improve student learning, to promote student success. Through each of the chapters that follow, we will remind ourselves and you, our readers, of that commitment. It’s an intention that can get lost in faculty resistance, in budgetary decisions, in compliance with accreditation, and even within the assessment practices that we promote. It must be the raison d’être for best practices in assessment.

Who Are We?

Your coauthors have committed to a specific approach to assessment—outcomes-based assessment (or learning outcomes assessment) within the bigger picture of outcomes-based education (OBE)—for almost 30 years. I (Amy) have spent those years teaching/presenting and mentoring others on campuses all over the country and sometimes internationally. For the last 11 years, I have coordinated and been lead faculty of the Assessment Leadership Academy (ALA), a yearlong professional development program, so I have rich examples and ideas from participants. I will identify those participants as they describe their successes, their experiments, their wisdom. Each ALA graduate has engaged in almost a year of intense study of assessment, collaborated within cohorts of 34+ colleagues, and continues in a network of support with more than 400 graduates. They can be good resources for you as they consistently guide and mentor others. We will identify them with (ALA) after their names to honor their contributions to this book.

Our coauthor Swarup Wood has been working with faculty and students as he guides general education and First Year Seminar at California State University, Monterey Bay (CSUMB). He has always mentored new faculty and shares his wisdom on other campuses. Our coauthor Dan Shapiro, in his prior role as director of teaching, learning, and assessment, facilitated professional development for faculty at CSU Monterey Bay. He currently serves as interim associate vice president for academic programs and dean, University College and Graduate Studies. Dan also coordinates mentoring for the ALA and presents regionally and nationally. Our coauthor Nelson Graff is a professor and director of communication across the disciplines. He supports faculty across the institution to improve their teaching of reading and writing. Wood, Shapiro, and Graff have collaborated extensively in supporting CSUMB’s efforts to advance assessment and student achievement of critical thinking, information literacy, quantitative reasoning, written communication, and oral communication skills.

As we said in the preface—and we hope you read it—we write our chapters independently, but our collective work on assessment has been intertwined for so long that each chapter represents our collective thinking. Swarup says that assessment is like “the gift that keeps on giving.” We have learned at least as much as we have taught—from faculty colleagues, from students, from administrators, and from being engaged in assessment. We intend to share insights from their stories, their successes and advice, and examples and resources from their work. It is also clear from those sources that assessment is truly advancing.

Our commitment to current assessment approaches that ultimately improve student learning responds to the why of this book. In addition to that commitment, we thread two themes through our chapters. The first theme is one of connections—connections integrated across assessment, teaching, curriculum, and cocurriculum. The second theme is one of collaboration—among faculty themselves, between faculty and staff, between administrators and faculty, and between faculty and students. The collaboration with students is especially important as we urge a more active role for students in assessment so that we are doing assessment with students, not to them. Assessment books (our previous publication included) typically focus primarily on assessment, but we made a clear decision to spread our focus to pedagogy and curriculum and cocurriculum.

Intentions of This Introductory Chapter

I think that it is important to give you an overview of what is happening in assessment at the time of this writing, especially as efforts advance to better support student success. This first chapter will paint a big picture of the currents that characterize assessment today as a context for the chapters that follow. Those currents flow through assessment efforts, occasionally slowing or sidetracking, often encouraging creativity or swirls, and less frequently stirring resistance or getting stuck in eddies. They also effectively engage faculty and students, inform policy and practice, and improve our pedagogy and curriculum. While I describe those currents in this chapter, I will guide you, our readers, into the chapters that follow. You will get a preview of the content and thinking of those chapters.

The currents begin with accreditation, followed by the variation in assessment approaches. From there we submerge ourselves in the expanding current of equity in assessment, followed by the eddy in “closing the loop” and the need for inquiry and reflection in our assessment practices. A new swirl in the currents is the idea of integrating assessment efforts with professional development in general. A strong movement characterizes the current of student assignments and related rubrics, and attention to alignment and coherence. Finally, swirling underneath the currents are the efforts directed to communication and transparency in assessment.

In our thinking throughout the chapters, we address assessment as a major component of a learning system with teaching and curriculum, with academic and cocurricular faculty and other stakeholders. We insist that collaboration is essential among all the participants in education—faculty, staff, students, administrators—and with extensions to many other stakeholders. We encourage high-quality communication—with transparency and purpose—for both internal and external audiences. We observe assessment practices improving, becoming creative and effective, and providing a valuable source of information about students and how to improve their learning. Unlike some thinking about change and ideas going away, assessment is here to stay, and it is clearly advancing. Those advances lead us closer and closer to student success.

I find the examples, conversations, practices, and approaches stimulating, encouraging, and definitely worth writing about. I have encountered both obstacles and successes (or eddies and swirls) to include in this overview. Both are full of insights to support your use of learning outcomes assessment, perhaps more carefully but certainly with a watchful eye to what is happening. If you are a faculty or staff member, this book is written primarily for your use in teaching courses or organizing programs. If you are an assessment director or coordinator or a faculty member new to assessment, this chapter will also provide guidance for your efforts. We intend that the chapters provide a context in which to review and evaluate your own assessment progress and to plan for continued and improved practices. I begin with the “elephant in the room,” as some would say, or the role of accreditation as the first current.

Prominence of Accreditation

The powerful prominence of accreditation is reflected in the why of assessment for some institutions, in the way assessment work proceeds, and in the kind of evidence suggested by regional or disciplinary requirements. Some resistance to assessment may be due to the discouraging and ever-lingering perception that accreditation provides the primary rationale for doing assessment. Peter Ewell (2009), in his ongoing reflections on “the tensions around assessment” experienced by institutions of higher education, agreed and observed powerful impact from the resulting compliance approach. He described the impact of how faculty view assessment as added work only being done to satisfy accountability requirements and external pressures with no real benefits for faculty or students. That kind of campus mindset impeded efficacy, efficiency, and sustainability, according to Ewell, and even more worrisome, it jeopardized the foundational purpose of assessment to improve student learning.

Another view of accreditation, however, is that many of those requirements for accountability encourage us to use assessment well, with quality practices that have connection to student achievement. A colleague once said, “I like to be compliant if it means I am using assessment practices that promote student success.”

So I ask you, what kind of rationale for assessment dominates your campus thinking? Do you hear those compliance complaints from your faculty colleagues? And if so, do you feel like you are constantly rowing against that compliance thinking current? Although compliance is not what we hope for, it’s kind of like the wind that drives currents and keeps water circulating. As a result, faculty continue in their assessment efforts, often discovering the value of their work as they learn more about their students and their learning.

In our interactions with faculty on different campuses, we have heard of approaches that provide a contrast to compliance thinking. It helps to connect with those individuals who “get it” and are enthusiastically assessing student learning and excitedly describing insights about their teaching—or even better, about student success. When we find those faculty, we need ways to make their efforts public—feature them on a website, or at a professional development session, or with a poster gathering. Find ways to make the success of your peers contagious.

Listening to Faculty

I personally have witnessed a difference in how faculty talk about the why of assessment. Ten years ago, if I asked a faculty group why they were implementing outcomes-based assessment, there was immediate discomfort in the room. I either heard a soft-spoken faculty say, “We have to do it for our accreditors,” or some version of that reason stated with annoyance. I won’t pretend that I don’t hear that answer anymore, but I hear it less, and I hear other reasons, compelling reasons, from faculty who have experience with learning outcomes assessment:

“When I use outcomes for my courses, I am more intentional about how I teach and what I teach.”

“Outcomes really help me with planning, with decisions about what to include in my course.”

“I like being able to provide students with specific intentions for their learning. And then I take responsibility for supporting those intentions.”

“We have learned so much about our program, about its strengths and weaknesses since we specified and assessed the outcomes we have for students.”

It helps us when we hear our colleagues express those rationales even amid accreditation compliance sentiments. I urge you, if you are able, to ask your students about how outcomes-based assessment affects them, and make their comments public.

Listen to Responses to Faculty

If you are an assessment director, I urge you to acknowledge the culture of resistance or compliance if it exists and provide space for faculty and administrators to express those sentiments. I’ve learned through experience that they need to be expressed or they are carried around in people’s heads and will interfere with the actual work. Some faculty resistance flows along with the accreditation current as additional rationale. The accreditation concerns are there along with other rationales for resistance. Be sure to listen well and respectfully. Do acknowledge them as legitimate and understandable before you propose other possibilities:

“Yes, you are right, Ellen, you have been assessing ever since you started teaching. You will probably be able to use some of your well-designed assignments.”

“I agree, this does add more work to your life. And I know how much time you give to committees and other responsibilities. We will work to be as efficient as possible.”

“You are definitely entitled to that feeling about compliance—it doesn’t feel good to have requirements coming from external sources.”

Once faculty concerns and resistance are affirmed as legitimate, they are more likely to try out some of the assessment approaches. I encourage those of you with assessment leadership positions to listen to those conversations with faculty about assessment in chapter 6, when I discuss using student evidence to improve our practices. Inquiry and discussions begin to build a culture that is most amenable to analyzing student evidence to determine where learning could be improved and how to be sure it is working. In that same chapter, I share the experiences of colleges that have developed an inquiry culture and how they advance their use of student learning evidence for improvement.

In response to the accreditation and compliance currents, assessment experts and scholars believe that issues of compliance will resolve themselves if institutions maintain a focus on quality, no matter how quality is defined for a particular campus (Jankowski & Marshall, 2017). They also urge faculty to stay student-centered in their assessment practices and to continue to raise questions about student learning. The most recent national study of provosts affirms that both compliance and improvement drive assessment (Jankowski et al., 2018). Although accreditation continues as the main use of institution-level information about student learning, efforts such as program review and program improvement regularly use those results. Lessons from an impressive case study of Washington State University’s assessment progress remind us that “assessment means living with and managing two goals” (Hutchings, 2019, p. 9). Pat Hutchings also described that state as “being able to tell your story—with data—in ways that speak to a diverse set of stakeholders, while at the same time supporting meaningful and honest inquiry to explore areas that need further development” (p. 9). Hutchings reminded us that there is wide variation in the stories, the data, inquiry, and exploration.

Learning Outcomes Assessment Comes in Varied Approaches

The second current, and it’s an enormous one, is that even though learning outcomes assessment is composed of a common set of practices, it takes a multitude of forms and variations. It sounds simple, but it is complex and overwhelming, and at the same time, considered advantageous for its success. The last thing we ever want to urge you as teaching faculty to use is a standardized model for implementing assessment, although there is some hunger for what Peter called “a magic bullet that will satisfy all the demands” and “leave faculty free to attend to their classrooms” (personal communication, June 4, 2016). Instead, institutions are urged to design a model that emerges from or aligns with their missions, visions, and values. Within that variation, there is agreement that student learning should be the “root of all assessment systems” and “at the forefront of every conversation and decision”—a “common thread across institutional assessment plans” (Kolb et al., 2015, p. 82). As faculty, you will probably concur with those characteristics but will want ownership in how assessment is developed, creating further variation.

I learned early in my experience at CSU Monterey Bay that faculty must have ownership in the design and purpose of assessment if we want their commitment to using it. That characteristic holds true today as some of the same faculty participate in the current design and assessment of institutional learning outcomes. In chapter 3, Wood describes the variety of ways faculty derive learning outcomes as well as the diverse sources of those outcomes. He provides advice for us to use in the design of outcomes and adds creative examples in both academic and student affairs. With mission influences, faculty ownership, and different institutional cultures, approaches to outcomes-based assessment must be varied; they could not possibly be “one model that fits all.”

Before I move on, I want to extend this current of wide variation in approaches to assessment with a sensitive concern. I think that there is a noticeable amount of assessment being done poorly and frequently with assignments that are not aligned with the learning outcomes. I bump into that situation fairly regularly. It is understandable in the big picture—learning to teach is not often part of doctoral programs and thus is rare for most faculty. Clearly many faculty have not been prepared to design and use assessment practices well before joining an institution of higher education. I have all my degrees in education and I will admit that I was not prepared to assess learning well. Early in my career, I coauthored a book on teaching strategies, and when it came time to write the final chapters on assessment, neither my coauthor nor I wanted to write those chapters, feeling unprepared for authoring them. I lost the coin toss and began an intense study of assessment.

There are also situations in which faculty must work with learning outcomes they had no voice in developing. There’s currently an emphasis on consensus with program learning outcomes among faculty in a department. It provides an opportunity for faculty who designed the outcomes to discuss rationales and for new faculty to come to some common meanings of their intentions. To achieve both ownership and consensus, I suggest that you identify locations in your institutional or departmental plan for assessment where faculty voice will strengthen and ensure ownership. Be clear that your input to a departmental assessment plan will be valued and significant.

The wide variation I described as this current has been addressed at a national level in a process called tuning. Tuning is a means of encouraging quality in our education by defining essential learning within specific disciplines through collaboration. Faculty from multiple institutions come together to articulate core competencies and/or learning outcomes and then map career pathways (Jankowski & Marshall, 2017). One of the benefits of this process is a shared description of what students understand and are able to do when they finish a specific disciplinary degree program. It provides comparability for student transfers and explicitly equips students to transition to contexts after graduation. You can find tuning projects completed by the American Historical Association (https://www.historians.org/teaching-and-learning/tuning-the-history-discipline) and the National Communication Association (https://www.natcom.org/LOC/).

In the meantime, I urge you to take advantage of this flow or current of variation by visiting other campuses for observation or reviewing their websites and discussing assessment models to get new ideas. Or we encourage you to access the NILOA (National Institute for Learning Outcomes Assessment) website for cases of good practices. I intentionally mention that resource now because if you haven’t joined NILOA, this is the time. It’s free—just sign up—and every month you will receive updates in the form of research reports, occasional papers, awards, conference schedules, and collections of assignments, assessments, and so on. It keeps me updated on the newest information and useful readings to inform my practice, and I’ve heard similar valuing from others. In addition, if you are an assessment director, consider assembling a representative committee of colleagues—send them to workshops, conferences, and other events for learning about assessment—so that you are surrounded by expertise, support, and different perspectives. And you will want to attend some of those professional development sessions yourself so that you have language and understandings in common with your colleagues. Many of those sessions are currently devoted to a powerful current of both interest and commitment, a current of equity in assessment.

Equity in Assessment

This current is immersed in the huge waves of higher education and the search for increased diversity. Equity in assessment is a major focus of my leadership work and, like many of my colleagues, I've been searching for answers, for strategies, and for new thinking about assessment that will achieve equity. That search is infused in all aspects of higher education: in coursework, in programs, in student affairs, and in institutional culture. For chapter 2, I've adapted the work of Geneva Gay on pedagogy that achieves equity in classrooms with diverse student populations. I've also analyzed our current teaching and assessment practices for the dynamics of power and privilege that influence our effectiveness and equity in student success. I will provide strategies to ensure student agency in assessment—one approach to achieving equity in our practices. My intent is to encourage you to engage in inquiry, to study, to listen to students more intensely,
to learn as much as possible about their differences, and to engage them in important decision-making about assessment, pedagogy, and curriculum. You will need to become a learner along with your students, and they will teach you well. Speaking of the need for learning and engagement, there is a phrase, "closing the loop," which means using student evidence for improvement, and it is the next major current in assessment today.

Use of Assessment Evidence: A Struggle

This is a current filled with concern, a slow-moving current even as we strive to use student evidence to improve student achievement. It is also a current that varies in the ocean of higher education. In chapter 6, I describe the kind of institutional culture that encourages the use of student evidence, along with the features that discourage that use. I provide practical examples and strategies that will engage you and your students in the search for how to improve student achievement. There are multiple reasons for the disappointing pace of use, but Jessica Jonson at the University of Nebraska–Lincoln and her colleagues have some different thinking about that current. As motivation for their research, they offer a powerful expression of the status quo of use: "A fundamental goal of student learning outcomes assessment in higher education is to use student learning evidence in decision making to improve educational programs. Such use of assessment findings, however, is atypical" (Jonson et al., 2014, p. 22).

With that situation in mind, Jonson and her coauthors have suggested that the often-neglected use of assessment evidence may be due to a narrow conception of what use is. They propose that "when definitions of use are too narrow, we may miss actual transformative or slowly evolving ways that result from some of our assessment efforts" (p. 18). Following Trudy Banta's (2002) lead, they have proposed a model of influence from the evaluation field—"a model that can be useful to faculty, administrators, and the bodies that accredit postsecondary institutions when determining the implications of assessment evidence for improving educational practices" (Jonson et al., 2014, p. 19).

I want to explore the idea of influence further with examples from the research that Jonson and her colleagues conducted. There are two main influences in their study—the influence of student achievement data and the influence of participation in assessment processes. From there, they investigated the effects of those influences. As I perused the study and the description of the effects, I had memories of examples of each influence from our early work at CSU Monterey Bay. I also remembered Wood's interview studies of faculty following some of
our assessment processes (Driscoll & Wood, 2007). You may find these ideas connected to the reflection approaches in chapter 7. When you look at the effects, you will probably recall examples from your work with colleagues or students, or from your own thinking.

Instrumental effects often took the form of faculty deciding to make a change in their practice: "I can see how using a rubric will help my students." "I'm going to be very conscientious about writing my assignment directions from now on."

Cognitive effects took the form of new understandings, new ways of thinking or processing new information. I remember hearing their ideas: "I am becoming so much more learner-centered now that I study these assignments from the student's standpoint." "I realize that I am not very clear about the outcomes that are really important in this program." "I now understand that my pedagogy is communicating a priority to my students that is not what I intended."

Affective effects include emotions, tendencies, and dispositions: "We have to continue doing this work—even though it takes time—I am so excited about what I am learning."

And finally, affirmation effects are a confirmation of existing or new practice: "Those alignment grids really opened my eyes to how I teach to the learning outcomes, and mostly I'm satisfied with the way they are represented in my class time."

The researchers (Jonson et al., 2014) proceeded to examine whether those effects led to pedagogical transformation or professional development possibilities. At this point, I must express enormous gratitude to researchers Jonson, Guetterman, and Thompson for their brilliant scholarship, careful study, and powerful insights. I decided to include their work here because I believe that they offer a way to understand how faculty use student achievement evidence. In chapter 6, you will find numerous approaches to encourage
that use, but for now, think about the idea of influence as a path to follow when pursuing the use of student evidence to improve your practices. You will also find examples and a connection to a related study in Dan's chapter 7 on reflection. I'm enthused about the influence idea, and I encourage you to get the complete article by Jonson et al. and organize a reading/discussion group. There is so much to talk about, and I expect your colleagues will contribute many rich examples. Listen well, record the comments, encourage their use of the categories of effects, and build scholarship around the thinking about influences. Here are some questions to guide your leadership:

• Are there differences in the effects described by faculty in different disciplines?
• Do the descriptions of effects vary between experienced and new faculty?
• What kinds of effects do you hear most often? Least often?
• Do certain kinds of effects align with certain kinds of student learning evidence (written, oral, projects, presentations, exams)?

I feel certain that you can expand that inquiry and that your observations will enrich the possibilities for such study. It's an opportunity to contribute to the literature describing the impact of learning outcomes assessment. Urging you to expand that inquiry leads me to the next current in assessment—a real swirl in the need for more inquiry and reflection.

Need for More Inquiry and Reflection

Shapiro's chapter 7 addresses this need and guides our thinking about why and how. He describes how reflection goes hand in hand with assessment practices for increased understanding and connections to our teaching. You will find practical strategies for promoting reflection as an integral part of your own and others' assessment efforts. Reflection and a spirit of inquiry don't often accompany our assessment practices, for varied reasons. In our earlier book, we described the faculty learning communities that were an integral part of our initial assessment efforts. Those communities really supported reflective conversations about assessment. CSU Monterey Bay continues that tradition, and Dan will share examples of the practice. Pratt Institute in New York currently funds and coordinates a series of faculty learning communities to "make meaning" of assessment practices (Lewis, 2017–2018). The communities are directed to finding commonalities in their pedagogical and assessment practices and learning from each other as
an alternative to siloes. One community decided to study critiques, or "crits," a mainstay assessment in the arts and related fields. Their study agenda began with literature on critique (Buster & Crawford, 2010) and included

• producing a comprehensive definition of critiques,
• studying the scaffolding of critique,
• checking the differences in critiques across disciplines,
• studying the timing and specificity of critiques, and
• recording, documenting, and mapping questions, comments, answers, and "applause" in critiques.

Their conversations were powerful in terms of analyzing and understanding evidence of student achievement. Most of the communities conducted literature searches as a first step, acknowledging that they felt unprepared and not very knowledgeable about their particular practice of focus. Most recently, the WASC Senior College and University Commission supported 12 "communities of practice," with funding from Lumina, to study and strengthen current assessment practices. Check the commission's website (http://www.wscuc.org/content/wscuc-community-practice-projects) for ideas.

Before moving on with this encouragement of reflection, I want to elaborate on a significant factor that contributes to the lack of reflection. I visit many campuses where faculty are truly worn, exhausted by the requirements, and wary of movements that change often. And I often feel that they are entitled to their resistance. I observe a practice in which programs have adopted a model of addressing one or two program outcomes each year—requiring intense design, implementation, data gathering, and analysis within that year—and then moving on to another set of outcomes the next year. The intention to use data to improve the programs gets lost in that swift movement through a set of program outcomes—or in the lack of real time to reflect on what students are learning or not learning. That kind of scheduling for assessment, often adopted to prepare for accreditors' visits or to satisfy an institutional assessment plan, does not support reflection. I propose that institutions build appropriate blocks of time into the schedule for assessment—deliberate blocks in which faculty have the time and energy for that kind of thinking—blocks of time to make changes for improvement and then assess their effectiveness. Dan will add additional encouragement in chapter 7 on reflection. In chapter 6, I discuss this scheduling issue as well as describe the importance of collaboration with students for improvement. I also provide a taxonomy of possibilities for making changes/improvements in our teaching, curriculum, and even assessment—a time-saver as well as a guide for the improvement process.

I do want to share one example of "making meaning" that is unique and was most effective for Phillips Graduate Institute some time ago. One year, when all programs focused on outcomes related to ethics, the faculty invited students to join them in analyzing their data. They approached the task with the intent to really listen and "make meaning" of the data. I attended the session, and there were insightful explanations by students about the results. I've encountered a number of institutions that make interpretation of data the consistent focus of faculty gatherings with students included. In those meetings, faculty learn from students, from the process, and from each other.

A constant reminder to me and my assessment colleagues is that many faculty members have not been prepared for the major responsibilities of this assessment role. It's a powerful subcurrent; seldom talked about or acknowledged, it slows us regularly, but it must guide our assessment advances and related professional development. We join next with a current that is becoming a powerful force in advancing assessment and supporting faculty in that advancement.

Turning to Professional Development for Assessment

Like many of your colleagues, you may have learned to teach from your graduate school models and the assessment approaches they experienced. As I said previously, I came into the faculty role with minimal understanding of assessment. A recent and emerging current, one that goes hand in hand with the lack of preparation for assessment, encourages us to blend assessment preparation with the contributions of professional development centers. Those new possibilities for collaboration can integrate the efforts of professional development centers and assessment directors—bringing them both "in from the margins to more essential, central and valued functions" (Schroeder & Associates, 2011, p. 20). With institutional commitment to student success, we need more integration in our approaches to improving student achievement. Given our book's intent to integrate teaching, curriculum, and assessment, the connections between assessment and professional development are a natural fit. The blending is truly a direction for the collaboration and campus culture we encourage in our chapters. A recent Change article (Kinzie et al., 2019) described the integration as "one in which it is difficult to discern the distinction between assessment and professional development" (p. 50). That's the case at CSU Monterey Bay's Center for Teaching, Learning, and Assessment (TLA). Dan and I both served as director of the center, and he will share the TLA culture—one of reflection and communication—in his chapter 7.

Even the recent national survey of provosts (Jankowski et al., 2018) affirms the need for faculty professional development to support faculty use of assessment results. The study's authors pointed out that provosts are most interested in finding ways to help faculty and staff develop the "attitudes and tools to produce actionable results along with the skill set to use results to improve learning" (p. 13). They reported that provosts have shifted from just focusing on engaging faculty in assessment toward providing professional development to improve faculty expertise in conducting assessment and using assessment results. One of the major areas of professional development is the design of assessments/assignments, a task most faculty have learned independently. When provosts were asked about the assessment approaches with the most value for improving student learning, their most frequent response was classroom-based performance assessments and assignments (Jankowski et al., 2018). Their responses affirm the critical importance of well-designed assignments, aligned with learning outcomes and pedagogy.

Student Assignments and Aligned Rubrics

The design of student assignments used to be "a private practice but it has become a collaborative (faculty and faculty, faculty and students) process yielding more authentic/actionable forms of assessment" (Hutchings et al., 2018, p. 13). That shift has prompted both national efforts, such as NILOA's "assignment library," and campus efforts, with tool kits of resources for support as well as institutional workshops and faculty discussions. Along with the design of assignments goes the development of a rubric that is aligned with the assignment. Nelson focuses chapter 5 on a fine point of assessment design—prompts for students. He describes a campus-wide process for designing assessments with those prompts and related rubrics and shares examples and insights from the work. His work illustrates how the design process can engage faculty in conversations about planning, sharing, and refining assignments. A very important potential of that process is the creation of a culture of inquiry and reflection, thus a good fit with the CSU Monterey Bay campus. An important consideration in that design process is the alignment between learning outcomes and the actual assignment, and the curricular coherence achieved in those practices.

Alignment and Coherence: Expanding Their Presence

An essential current, one that developed early in the assessment picture, is the concept of alignment. It has been with us for a long time without much discussion, like still water, but like other currents it has begun
advancing recently to provide more depth of information. In chapter 4, I write about that development and provide examples of its potential. Alignment is about the connections between your learning outcomes and your pedagogy, curriculum, and assessment. In chapter 3, Swarup's descriptions and examples may refresh your thinking about learning outcomes and their sources. Later, in chapter 4, I introduce the concept of coherence along with strategies for making connections across our curriculum, pedagogy, and assessment. I describe what learning is like for students when connections aren't evident and provide practical strategies for spotlighting those connections. Additionally, in chapter 4 you will find exciting student involvement in alignment and insightful collaborations for coherence. Many of our efforts in alignment and coherence demand clear communication and transparency with each other and with students as we communicate assessment results. Communication and transparency have become significant for making sure that "evidence of student learning is meaningful, useful, and consequential" to multiple audiences (Jankowski & Cain, 2015, p. 201). It is a current that has become more expansive and demanding of our attention.

Communication and Transparency

We have begun to use varied approaches to communicate assessment—websites, reports, presentations, publications, newsletters, and forms of social media—to inform multiple audiences. Jankowski and Cain (2015) described a number of purposes for our communication: "provide context, inform decisions, share and disclose information, answer questions, respond to needs, convince others, create change, and market our programs and institution" (pp. 201–211). I could go on, because there are so many uses and audiences and data/stories. Both internal and external communication with transparency are prominent responsibilities for higher education, especially when it comes to assessment. Shapiro prepares you for this ever-expanding work in chapter 8 with examples and strategies.

Riding the Currents of Assessment in Higher Education

As I move to a brief section of "big picture" advice to send you into the chapters about specific assessment practices, I wonder which of the currents I've described you have experienced or witnessed. Have you been able to move through or use those possibilities? I hope so, and I urge you to discuss them with your colleagues or administrative staff. I also urge you to collaboratively review some of the ongoing practices in assessment that have taught us well.

What We Have Known About Our Assessment Practices

In her "Characteristics of Effective Outcomes Assessment," Trudy Banta (2002) described the need for

• involvement of many stakeholders;
• sufficient time for development;
• knowledgeable and effective leadership;
• faculty development to prepare individuals to design and implement assessment and use the findings; and
• an environment that is receptive, supportive, enabling, and rich with continuous communication. (pp. 262–263)

That's a lot, but Trudy's characteristics are connected with the currents in today's assessment that we have just discussed. Often, however, assessment is developed and implemented in situations that lack those qualities. When those qualities are missing, we struggle to use assessment to improve student learning. Review all of Trudy's characteristics—they are very much needed today, just as they were in 2002. In your leadership role, maintain high-quality professional development for yourself, your faculty, and your administrators. A favorite assessment director of mine at Concordia University attends all of the WASC Senior College and University Commission–sponsored workshops that are offered, and each time she comes with a provost, a dean, and even her president. She is so wise because they leave those workshops with assessment understandings and can then provide the support she needs to do her job (Deborah Lee [ALA], personal communication, October 20, 2018).

At the same time that Banta was describing those characteristics, Ewell (2002) was optimistically summarizing his thinking about assessment with encouragement: "Assessment will gradually become an integral part of each faculty member's reflective practice . . . and faculty will increasingly collaborate in this work, reflecting their growing assumption of collective responsibility for student learning" (p. 25). I was glad to see the word collaborate in his thinking. Earlier in the chapter I described collaborative arrangements—faculty learning communities and communities of practice—and I reiterate the importance of involving others in your assessment practices, especially students. If you are in a leadership role, do make that a priority. Both Banta's and Ewell's thinking were published in 2002, and their guidance was "right on"—we just need to listen and use their wisdom to shape our practice. Our hope in this book is to provide enough guidance and examples of assessment practices that faculty, staff, students, and administrators are able to follow
that early advice and experience the kind of satisfaction that reflective practice can provide. You will notice that those early themes continue or are being transformed by the advances in assessment.

Honoring Our Mentors

In 2007, when Swarup and I wrote our first assessment book, there were two national assessment front-runners guiding our efforts at CSU Monterey Bay and all over the country: Banta and Ewell. Banta filled our bookshelves with volumes of wisdom, especially encouraging us to think of the "big picture" of assessment and to work as scholars. I can still remember her urging us to "take baby steps" in our early assessment efforts. Ewell recorded the history and paths to where we are today as curriculum for our learning (Ewell, 2002). His voice has been and continues to be prominent in major assessment achievements. As for my coauthors and our peers, we were experimenting, struggling, making mistakes, questioning, occasionally innovating, and consistently working to convince faculty of the importance and value of assessment. We were not necessarily efficient, didn't always write good outcomes, were just beginning to have a common language, and made early attempts at the contents of rubrics while not yet using the word. In contrast, today we are surrounded and supported by assessment peers with impressive expertise, models of institutional assessment, assessment authentically embedded in courses and programs, and consistently advancing practices. We are grateful to our peers and their work as we write this book, so you will find tributes at the end of some of our chapters. Without them, this book would be less vibrant, rich, and inspirational, and would be confined to our voices. They support the themes of this work—collaboration with each other and with students, and placing assessment in a learning system of teaching, learning, curriculum, and cocurriculum. We expect that their contributions will expand the learning potential of this work, and we are grateful.

References

Banta, T. W. (2002). Characteristics of effective outcomes assessment. In T. W. Banta & Associates (Eds.), Building a scholarship of assessment (pp. 262–263). Jossey-Bass.

Buster, K., & Crawford, P. (2010). The critique handbook: The art student's sourcebook and survival guide (2nd ed.). Prentice Hall.

Driscoll, A., & Wood, S. (2007). Outcomes-based assessment for learner-centered education: A faculty introduction. Stylus.
Ewell, P. T. (2002). An emerging scholarship: A brief history of assessment. In T. W. Banta & Associates (Eds.), Building a scholarship of assessment (pp. 3–25). Jossey-Bass.

Ewell, P. T. (2009). Assessment, accountability, and improvement: Revisiting the tension (Occasional Paper No. 1). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Hutchings, P. (2019, February). Washington State University: Building institutional capacity for ongoing improvement. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.

Hutchings, P., Jankowski, N., & Baker, G. (2018). Fertile ground: The movement to build more effective assignments. Change: The Magazine of Higher Learning, 50(6), 13–19. https://doi.org/10.1080/00091383.2018.1540816

Jankowski, N., & Cain, T. R. (2015). From compliance reporting to effective communication: Assessment and transparency. In G. Kuh, S. Ikenberry, N. Jankowski, T. R. Cain, P. Ewell, P. Hutchings, & J. Kinzie (Eds.), Using evidence of student learning to improve higher education (pp. 201–219). Jossey-Bass and NILOA.

Jankowski, N., & Marshall, D. W. (2017). Degrees that matter: Moving higher education to a learning systems paradigm. Stylus.

Jankowski, N., Timmer, J. D., Kinzie, J., & Kuh, G. D. (2018, January). Assessment that matters: Trending toward practices that document authentic learning. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.

Jonson, J. I., Guetterman, T., & Thompson, R. J. (2014, Summer). An integrated model of influence: Use of assessment data in higher education. Research & Practice in Assessment, 9, 18–30. https://www.rpajournal.com/dev/wp-content/uploads/2014/06/A1.pdf

Kinzie, J., Landy, K., Sorcinelli, M. D., & Hutchings, P. (2019). Better together: How faculty development and assessment can join forces to improve student learning. Change: The Magazine of Higher Learning, 51(5), 46–54. https://doi.org/10.1080/00091383.2019.1652076

Kolb, M., Cargile, S., Wood, J., Ebrahimi, N., Priddy, L., & Dodge, L. (2015). The urgency of now: Equity and excellence. Rowman & Littlefield.

Lewis, H. (2017–2018). Teaching and learning at Pratt: Assessment for learning. Pratt Institute.

Palmer, P. (1998). The courage to teach: Exploring the inner landscape of a teacher's life. Jossey-Bass.

Schroeder, C. M., & Associates. (2011). Coming in from the margins: Faculty development's emerging organizational development role in institutional change. Stylus.


2

EQUITY IN ASSESSMENT

Support for All Students

Amy Driscoll

My reflection about this chapter's focus is multifaceted; it begins early in my teaching career and ends (although not finished) with a recent experience with a colleague. From there I describe the challenges of my author role as I prepared to write this chapter. Many years ago, I studied the thinking of constructivism intensely and decided to revise all my courses so that my students could "make meaning" of the course curriculum. I spent a summer designing elaborate learning activities that would enable my students to have ownership of their learning, to create that learning, and to engage in inquiry and problem-solving to achieve learning.

Partway through my first semester of teaching this way, a quiet young man approached me at the end of class and asked to speak with me. It appeared that he wanted to talk with me confidentially, so we met in the classroom after everyone else left. Noting that we were alone, he came closer and quietly asked, "Could you please lecture just once in a while?" He said that he appreciated what I was doing in class, but that for some topics, he could learn more by studying his notes from a lecture. Slightly surprised, I responded that I would try to integrate minilectures into the class. I asked, "Please let me know how they are working for you and your learning," and he agreed.

I still think of that student as I plan courses, workshops, and other teaching responsibilities. In response to his memory, I try to teach with varied pedagogy—role plays, group discussions, minilectures, reflections, lots of inquiry, and extensive examples and nonexamples. What I realized more recently is that I have been planning pedagogy while attending primarily to curriculum, schedule, and myself, and not tailoring the pedagogical choices or assessment to the students very well. I've also learned that we often tailor our pedagogy and assessment to our own learning preferences without realizing it.

More recently, I listened to a respected and admired colleague who talked about the "oppression of assessment." Initially his title bothered me because I couldn't imagine how the work I do in assessment could be oppressive, but it didn't take long to acknowledge that influence of assessment. My colleague Yves Labissiere defined oppression as being limited, kept from, tied up, named, silenced, and confined from belonging. He described our assessment practices as sending messages to students, our learners, that communicate (Y. Labissiere, personal communication, August 6, 2019):

"You don't fit here in the university."

"Your responses don't count as what is valued in this exam."

"You have to think differently to be successful here."

"Your ideas aren't fitting the responses we are looking for."

Yves's commentary sobered my thinking to the extent that I now find myself planning carefully to avoid those messages, but there is much work to do and much more learning about students before we come close to equity in our assessment practices.

My Challenges and Intentions

Before I can begin to write this chapter, I must first acknowledge to my readers that I am writing about equity in assessment as a White professional woman. I have experienced inequity as a first-generation student, but my experiences of discomfort don't begin to compare to those of students who have been marginalized because of their skin color, national origin, language background, or religion, or students of poverty, or students with responsibilities for children or senior parents. I do, however, understand and have expertise in assessment, and I work consistently to develop and encourage practices that are student-centered and designed to ultimately improve student success. I consider myself capable of taking those practices and adjusting or editing or expanding or eliminating them to provide equitable opportunities for students, to assess their learning in ways that honor the ways they learn. To do so, I will take our standard approaches to assessment, critique their current use, and provide alternatives that will move us in the direction of equity in assessment. I don't pretend that I can begin to truly address and/or resolve all of higher education's issues of equity. My hope is that consistent small changes in our current assessment practices can
begin to reframe the experiences students have with us in higher education. My recommendations are intended to ensure that all students have a voice in their learning. When it comes to both pedagogy and assessment, my thinking is directed to the standard that all students have the support they need to succeed, and that all students receive feedback that guides, encourages, and honors their efforts.

I am fortunate to be writing this chapter at a time when significant attention is directed to equity in our assessment efforts. Both the National Institute for Learning Outcomes Assessment (NILOA) and the Association of American Colleges and Universities (AAC&U) are consistently publishing articles and reports that are informative, authentically critical of current practices, and rich with practices that work toward equity. I was also able to interview several experts who agreed to serve as resources for you, our readers. A set of readings and other resources will be provided at the end of this chapter to extend and expand your understanding of this chapter's intentions. I describe those intentions in more detail in the following paragraphs.

I begin this chapter urging you to get to know your students, really know them as individuals, and I offer approaches to achieve this. Next, I adapt the thinking and pedagogical philosophy of Geneva Gay, who promotes the practices of caring teachers. From there, I must face standard assessment practices and recommend how to adjust or elaborate them for equity in assessment. I come to terms with the dynamics of power in the relationships between faculty and students—describing how power works in classrooms and how to share power over classroom decisions. Privilege is addressed briefly because we need to be aware of it when we are making course decisions. Linda Suskie's (2002) thoughts on fairness in assessment are especially relevant for culturally responsive practices, and I weave her ideas throughout this chapter. I finish by describing how you can be an advocate for equity in assessment among your colleagues. Yes, it is a full chapter, and I will just scratch the surface, but I know that you and your students and your colleagues can take it from there.

Back to my first condition—I insist that we must get to know our students—not an easy beginning given the size of the populations in our institutions, our programs, or even our courses. Overall, this chapter will be packed with the simplest beginnings for achieving equity in assessment in your courses.

Starting With Students: Who Are They and How Do They Learn?

Before beginning, I urge our readers to consider the literature describing the demographics of our college and university learners. As a start, I heed
Maki’s (2017) reminder that “graduating measurably larger percentages of historically underrepresented students remains our current challenge. They represent our immediate and future educated citizenry, voices of our democracy, and a major source of our workforce” (p. 14). If that information is new to you, Maki’s (2017) beginning chapters in her recent book Real-Time Student Assessment will educate you with respect to the diversity of our classrooms. It may be a way to prepare and motivate you to pursue the changes that this chapter will promote. When I ask faculty questions like “Who are our students and how do they learn?” the first response is that we can get student information from the university’s institutional research (IR) office, and that’s probably true. It’s important to at least know what our student population is like as a whole— race and ethnicity, socioeconomic status, first-generation status, and other possibilities. When such data are available, we may investigate how our different groups of students are succeeding—their graduation rate, their pace (how many courses taken each semester), their GPAs, their use of resources, and their feedback about the institution—again from the IR office. To get to a granular level for use in your programs and in your courses, it takes sincere and sensitive, consistent approaches to learn about our students. As students select a major or initiate a program of study, you can use carefully designed surveys or interviews or orientation groups to begin establishing relationships with them. Unfortunately, there are major programs with hundreds of students, and many of them move through to graduation without our getting to know them. With those numbers, we have to rely on the “big picture” of our students and do our best to teach and assess in ways that provide equity in our courses. Portland State University (PSU) begins to connect with students using a lengthy survey tool for first-year students. The questions are intended to “start a conversation” with new arrivals, who will begin in one of the university’s “First Year Inquiry” cohorts. The tool goes beyond basic student demographics to information that provides a picture of student life. It is lengthy, 30 pages, unfortunately not appropriate for inclusion in this chapter. Students are asked about their education plans and reasons for going to college. They can choose from a list of concerns they have for this first year, including finances, childcare (and dependent care), academic performance, transportation, and housing. The questions probe sources of help/support, confidence, activities of interest, and financial management. When you finish reading the survey items, you feel like you would know students well from their responses and would have focus for ongoing conversation with the respondents. Yves Labissiere (personal communication, August 6, 2019) noted they share the data with the students as soon as possible, with the
expectation that students will realize there are others like them. They will see that others have parents who did not attend college or that others are working 20 hours a week while attending school. At an urban campus like PSU, there will be many similarities among the students. Yves has also recommended tailoring a survey to reflect both your institution and your student population. The PSU survey is a good fit for the campus and the students who choose to be there. It is also important to determine the purpose of a survey or other tool and to be clear about your intended use of the responses.

Back to the granular level—and with more manageable numbers in our programs or in our courses—I recommend starting the first class with an activity or a writing task that yields information for your planning. Classrooms in which reading apprenticeship is practiced often begin with a personal reading history (Schoenbach et al., 2012) in which learners share their literacy experiences and reflect on them in light of themselves as readers, writers, and learners. In case you are not familiar with reading apprenticeship programs, they provide a format for supporting students' development of advanced and disciplinary literacy skills as part of the Strategic Literacy Initiative out of WestEd.

When I taught undergraduates, I asked students to describe a class that they looked forward to, were happy to attend, and in which they felt successful. I scribed their ideas on the board or, ideally, on flip chart paper for future use. There was a lot of agreement as well as unique qualities in the students' responses. I then asked them to describe a class that they dreaded, hated to attend, and felt less than successful on completing. For this one, their ideas poured out quickly and with vigor. At some point, it became a class discussion and continued on its own. Later, I asked them to complete a card with their recommendations for our class. I asked them to describe what would help them be successful, would support their learning, and would engage them in the course. Those cards provided very specific information for my planning and for adjustments I needed to make. And, of course, I needed to be sure to attend to those recommendations in my pedagogy and assessment.

If you have recorded students' ideas on chart paper or in a PowerPoint slide, I recommend bringing it to class halfway through the course, displaying it, and asking, "Which of these ideas have you experienced so far in this course?" or "Which ideas would describe this course thus far?" You and the students will learn together as the class is assessed, and you will have the opportunity to revise in the weeks ahead. You can do this with the whole class or, for more individual responses, have students check off the ideas already experienced and cross out what hasn't happened. It is also an opportunity to
check in on how students perceive what is happening in class. The activity is ideal for courses in which many students are new or inexperienced.

Over the years, I also experimented with providing my students with a list of pedagogies possible in my course and asking them to check off the strategies that would work well for their learning. I usually had to define some of the approaches but thought it was a good orientation to teaching. I also learned that I needed to identify pedagogies as I used them in class. Sometimes I would ask, "What kind of teaching did you experience in class today?" Or I would label my activity: "This was a demonstration today—it showed you the process rather than just talking about it." And ask, "How did the demonstration affect your learning?" And continue with "What else would support your learning or help you learn more or remember more?" I valued the comments from students and observed them learning from each other in these discussions. I also learned that students were occasionally unaware of the specific pedagogy or learning activity I was using. You may think that you have been using a lot of examples, and yet students don't choose that strategy from the list. The conversation that follows this discrepancy will be extremely revealing—more information about your students. If nothing else, every time you provide an example, you may say something like "This is an example of an argument with good support."

You probably noted that these strategies for getting to know my students focus on teaching and learning preferences rather than on student differences. With that focus, I achieve multiple outcomes: students develop personal metacognitive awareness, students become aware of multiple teaching and learning approaches, and I gain information about what is most effective for the group and for individuals. Their conversations about learning can initiate relationships as students find preferences in common or realize that they are not the only ones who struggle with lecture sessions or note-taking. Those conversations and your intention to adapt to students' preferences are a foundation for a caring community.

Building a Caring Community: Foundations From the Start

In addition to studying the teaching/learning preferences of my students, I also used an opening strategy with my learners that can begin to build a sense of belonging for them. It has worked well even with classes of up to 100 students. At the end of the first class, I asked students not to leave until they had found a partner; that was their first assignment. The responsibilities of each partner included checking on their partner if
they were absent, collecting handouts, communicating assignment information, and welcoming them back to class. Some students chose friends, of course, but for those students who were new or shy or isolated, it was a relief to exchange emails or phone numbers with a partner and to know their presence or absence would be noticed. This may sound silly, but I gave the directions very carefully and was clear about how important it would be to the success of the class. Students typically took it seriously, with few exceptions. Mostly, students took care of each other with the partner assignment, and we had the beginning of a community.

The assignment planted seeds for students to feel like they belonged to the class group. We know that a sense of belonging is associated with academic motivation, success, and persistence. We have also learned that developing a sense of belonging can be challenging for students from minority social groups. A solid support system, friendships, and social acceptance by peers contribute to a sense of belonging, especially when it is accompanied by relationships with caring professors and mentors (Vaccaro & Newman, 2016). I want to acknowledge my colleague Ken Peterson, who first demonstrated this strategy with 100 students as I observed the interactions and relationships built with this simple approach. I will move to more caring strategies from the work of Geneva Gay, but first I want to note the contributions of student affairs and the cocurriculum to a caring community.

Contributions to a Caring Community by Student Affairs Professionals

First, I must acknowledge the expertise of student affairs professionals, which is often underappreciated by those on the academic side. I have always known they have much to teach those of us in the academic world about student development, learning, and student assets, informed by studies that don't often appear in many of our doctoral programs. In years of partnering with those professionals, I have learned about the curricular intentions of student affairs programs and their potential for contributing to student academic learning and to building student communities. Whether it be dorm life or advising or student leadership or first-year orientation, student voice and experience are the foundation of such programs. My colleague Kevin Grant reminded us that "we brag about admitting the best and brightest of our state and yet we make all their decisions for them" (personal communication, August 13, 2019). Student decision-making and participation in related committees, planning events, designing codes of behavior, and improving advising practices are powerful learning experiences. They also lead to different levels of community and a sense of belonging to the institutional community. In chapter 4, I describe how connections between student affairs and academic affairs curriculum and pedagogy can contribute to a more coherent college experience.

Some years ago, I met a student affairs professional who was in charge of all the student campus employment in a comprehensive student center. He had designed multiple levels of employment with specific learning and work experiences for each level, certificates of achievement for each level, and even some reading designed for each level. Students started at a very basic level of cleaning, arranging, and taking apart room set-ups and, at the highest level, were able to coordinate events with all the related responsibilities. Students at that final level had the skills to be in charge of an entire event and often were able to secure external employment with very good salaries. With this program, students developed confidence, a sense of pride in their employment and in their institutional facilities, and an important role in the community. They also expanded their leadership capacity, their communication approaches, their ability to solve real problems, and their critical thinking for important decisions. It was an enormous contrast to the usual employment on campuses, which can vary from engaging to boring and may often be accompanied by embarrassment for some students. I visualize supportive arrangements between academic and student affairs for the construction of a caring community for students. I want to continue with that possibility of caring in the pedagogical and assessment practices of both institutional worlds.

From Community Building to Caring Classrooms

Geneva Gay (2000) was clear about the kind of classroom community she advocated—one of caring. In her work Culturally Responsive Teaching, she said, "The heart of our educational process is the interaction between teachers and students" (p. 46). She specifically urged us to build relationships that are "characterized by patience, persistence, facilitation, validation, and empowerment" for our learners (p. 47). Take some time to determine how well those qualities fit who you are and how capable you are of achieving them. Those behaviors can be integrated into your pedagogy, assessment, and feedback. Parker Palmer (1998) talked about "good teachers" possessing a capacity for connectedness that sounds similar to the qualities of Gay's caring teachers. Palmer said that those teachers are "able to weave a complex web of connections among themselves, their subjects, and their students so that students can learn to weave a world for themselves" (p. 11). When Palmer interviewed students about good teachers, some of them chose to describe "bad" teachers instead, because those teachers had something in common—"they distance themselves from the subject they are teaching, and in the process, from their student" (p. 11). Gay described "the caring faculty as one who demonstrates concern for students' emotional, physical, economic, and interpersonal conditions" (p. 47). You need to know
your learners to be able to demonstrate that concern. Starting class with "How many of you had enough sleep last night?" or "How many of you are working more than 20 hours a week?" or "How many of you have young children at home?" or "How many of you had breakfast this morning?" demonstrates interest and provides an opportunity to express caring, just by asking.

As I was writing about caring, my daughter sent me a piece by Anthony Abraham Jack (2019), a professor of education at Harvard, describing his undergraduate days at Amherst. He was on full scholarship but starved during the holidays when peers went home and he lacked funds to travel. We know that 40% of students in higher education are currently "food insecure." I know that this is not about assessment, but it is very difficult, maybe impossible, to learn and demonstrate learning if you are starving. Fortunately, many institutions have addressed the need with a number of creative approaches. As a caring teacher, you want to know about such resources and other programs so you are ready to support students who have situations that interfere with their learning. Professor Jack brought to light an additional interference with his learning. As the son who left home, it was his responsibility to send money to his family and to address ongoing needs: a prescription, turning on the electricity, even a mortgage payment. I had not thought of this responsibility as part of a college student's life. After hearing his tale, I realized that I barely begin to understand the stress and isolation that define college life for students from different cultures, from poverty, from dangerous neighborhoods, and from lives so different from what is experienced on a campus. It makes me think hard about what I recommend—a context for assessment with caring remains at the top of the list.

The partner activity I described earlier in the chapter spreads initial caring through the group, but it is critical that ongoing caring be focused on the academic growth of learners. Students who have witnessed caring teachers noted that those teachers "were tenacious in their efforts to make what is being taught more understandable to us" (Gay, 2000, p. 49). Occasionally people are uncomfortable with the caring notion and refuse to be what they interpret as a "touchy-feely" teacher, so I want to bring in a caring and tenacious teacher.

Listen to This Caring and Tenacious Teacher

I can assure you that, as caring teachers, we can learn to make sure that our directions, our teaching approaches, our feedback on assignments, and our reteaching and assessing are threaded with caring in ways that are comfortable for most of us. Listen to a caring teacher giving directions:

I want to acknowledge that the directions for the final assignment are quite elaborate and I want to be sure that they are clear and work well for all of you. I know that the projects you will produce will be excellent so I’m going to spend some time on those directions. Please get into your groups of three (established groups from the start of the course) and have the assignment directions in front of you. Take notes if that would be helpful. We will work through the four steps one at a time. I will talk through each step and then ask you to work in your groups to be sure that everyone in the group feels comfortable and clear about that step before moving on. When we finish, conduct a group check on each of the four steps. Continue clarifying if anyone in your group needs more information.

Not only is this teacher caring about students' success with their assignment, but she is also demonstrating fairness in her assessment practices. Linda Suskie (2002) described it as "helping students learn how to do the assessment work" (p. 5):

My assignments for student projects can run three single-spaced pages and I also distribute copies of good projects from past classes. This may seem like overkill, but the quality of my students' work is far higher than when I provided less support. (pp. 5–6)

We achieve fairness by spending sufficient time on directions for assignments, to the extent that we are fairly certain there is an equitable understanding of what to do. Nelson expands on that equitable understanding in his discussion of assignment prompts in chapter 5.

Another aspect of assignments, or other forms of assessment, is their scheduling. Hopefully we have abandoned the practice of relying solely on a midterm and a final assessment (sorry if I offend you). That kind of schedule doesn't give our students or ourselves enough ongoing information to reteach or revise or make changes to improve learning. The best assessment is iterative and ongoing and supplies data that prompt immediate support for our students. That's the kind of "real-time" assessment that Maki (2017) urged in order to respond to student needs in the "present tense" (p. 91), not after students have graduated or left the institution. As Maki noted, we have the opportunity to step in when students are struggling and eliminate the possibility of their dropping out or not passing our courses. In addition to the value of real-time assessment, spreading assessment across a semester or quarter provides an opportunity to scaffold the assessment of more complex or elaborate learning. Such assessment is typically more helpful to our learners than waiting until the end to determine achievement of an understanding or skill—or leaving feedback on a paper that no one picks up or opens or reads.

With both assignments and the curriculum content being taught, students must have opportunities to express confusion about content or a lack of understanding or clarity about an assignment. Those opportunities must be provided with respect and without discomfort for the students needing more information. In fact, you can take this idea further and help students experience their difficulties and confusions as moments to celebrate because they are harbingers of learning, a disposition central to the reading apprenticeship approach mentioned previously. Ending a class session with "Any questions?" is fairly ineffective, as the person most in need of clarity typically can't phrase a question or will be uncomfortable asking. My coauthor Nelson thinks that students see that question as a way for teachers to move on rather than to assist understanding. Instead, a caring teacher asks, "Would you like repetition or another explanation or an example of what we just studied?" Or "Raise your hand if you would like to hear a different explanation of that concept." Or "Turn to your partner and explain the ideas we just discussed to each other." Nelson asks, "Tell me what is still or most confusing to you." He makes sure that students hear a genuine desire to find out and explore what they don't understand. This is also a place where students can be coteachers, because we have learned that their explanations are often just what their peers need. Students regularly achieve a clarity in their language that is different from what we express.

What I hope you take from my adaptation of Gay's caring teacher behavior is that such teachers do everything possible to pave the way for student success. Along with their student affairs colleagues, they learn as much as possible about their students and communicate how important those students are. In collaboration with student affairs, they are familiar with student resources so that they can respond when they learn about student needs. They go to great lengths to make sure curriculum and assignments are clear for all students. When difficulties and confusions arise—as they always will—they help students develop their own strategies for engaging with them, strategies that can be transferred to other challenging tasks and contexts. As I will describe in the pages ahead, those caring teachers adapt or abandon teaching and assessment behaviors that don't support equity.

I want to summarize and expand on Gay's (2000) culturally responsive teaching ideas with her definition and a few characteristics. Culturally responsive teaching involves "learning engagements that encourage and enable students to find their own voices, to contextualize issues in multiple cultural perspectives, to engage in more ways of knowing and thinking, and to become more active participants in shaping their own learning" (p. 35). It can be defined as using the cultural knowledge, prior experiences, frames of reference, and performance styles of diverse students to make


She says that such teaching is validating, comprehensive, multidimensional, empowering, transformative, and emancipatory. Try using those qualities to examine your own teaching or a program. They would make insightful foci for discussing teaching with your own students. Ideas of what such teaching is like from their perspectives would provide an interesting comparison with your own ideas.

Reviewing Assessment Practices for Equity

A few lines back, I reminded us of the futility of writing copious and careful feedback on student work that isn't released until grades have been posted. It is often feedback that is never seen. Some years ago, a number of faculty at CSU Monterey Bay experimented with end-of-semester approaches that really made use of feedback for assignment revision and, more importantly, for improved learning. Those faculty set a deadline for the assignment a week in advance of their usual deadline. On the day the work was due, students were arranged in pairs to review and provide feedback to each other. Students gained insights from critiquing the work of a peer and more insights from a peer's feedback on their own work. When assignments were submitted the following week, faculty were clearly satisfied that the process had significantly improved the work (in comparison with previous years).

Other faculty also set an earlier deadline, perhaps 2 weeks in advance of their usual deadline. They collected the assignments and committed to returning them the following week with helpful feedback. When students received their work back, the next class session offered tutoring, reteaching from the faculty, peer explanations, a review of examples that met expectations, and conversations with student peers about how to improve their work. I observed one of these sessions, and it was a thrilling experience to watch students caring for each other. Not surprisingly, the final assignments were again improved tremendously in comparison to previous years' work. Just as valuable was the sense of community and belonging that students experienced.

Now that I have described a few strategies to learn about your students and approaches to teaching, learning, and assessment that build community and communicate caring to your classes, I want to take the assessment cycle apart and analyze each process. My analysis is focused on making each process more equitable and aimed at supporting all students' success. It is also clearly focused on students having an active role in the development and use of the processes. The analysis will be appropriate for academic courses and programs as well as student affairs programs and services.


Analyzing the Assessment Cycle for Equity Possibilities

I begin with a look at a very simplified assessment cycle to review assessment processes. Briefly study the assessment cycle (Figure 2.1) before we analyze each process in the cycle for its potential to engage students. Before I move through those processes with you, I want to pause and study a definition of culturally responsive assessment. It involves being student-focused and calls for student involvement throughout the entire assessment process. It reminds us to involve students in the development of learning outcomes, selection or development of assessment tools or strategies, data collection or interpretation, and use of results (Montenegro & Jankowski, 2017).

Figure 2.1. The assessment cycle: Develop Learning Outcome → Design Assignments and Rubrics → Collect Student Evidence → Review and Analyze Student Evidence → Use Evidence to Change for Improvement* → Assess Changes for Improvement → (return to the beginning of the cycle).

* Improvement may include outcomes, assignment, pedagogy, and curriculum.


Such involvement really describes how students must be engaged as cocreators in assessment. As we design such involvement, we are able to check for biases and to be certain that we have not "excluded important voices and perspectives" (Montenegro & Jankowski, 2020, p. 10). I encourage faculty collaboration through every step as well. We need both sets of voices, faculty's and students', if we are going to create assessment processes that contribute to equity. Let us go back to the assessment cycle (Figure 2.1) and our consideration of how to involve students in each of the processes.

Creating or Editing Our Learning Outcomes

The first step in the cycle is, of course, developing original outcomes or revising available outcomes to fit your program or course. Be sure that you study chapter 3 and heed Swarup Wood's advice for writing high-quality outcomes as a start. We know that poorly constructed outcomes cause problems for everyone—faculty, students, administrators, and especially those who analyze the data and try to use it. As Montenegro and Jankowski (2017) reminded us, our students must have a voice in the creation of learning outcomes if we want those outcomes to have a "cultural lens" (p. 11).

You may ask, "What is a cultural lens?" Our lenses are formed by the culture in which we live. They filter the world for us and help us make decisions. Understanding how people from different cultures view and interpret the world is thinking with a cultural lens. Because we are all somewhat limited in our ability to see beyond our own cultural lenses, it is essential to engage a diverse group of students and faculty in writing outcomes and reviewing their clarity. One student may understand and prefer a particular verb, for example, while another student has no connection to that verb because it lacks meaning in her culture. The ideal is that students and faculty work together to find holistic language. Faculty collaboration will be essential to this process. Faculty too will favor certain words over others and can model for students how to achieve agreement. It is especially important that outcomes be well understood by both students and faculty, as outcomes lead to questions and decisions about how best to support student learning.

Adelman (2015) encouraged us to consider the varied ways that students learn and the range of ways that they demonstrate their learning, as well as understanding students themselves, in the formation of outcomes. He also suggested that we consistently need to define the language in our outcomes. We often use discipline-specific terms or verbs from Bloom in our outcomes, and we have learned that there isn't even agreement among faculty about their meaning (Driscoll & Wood, 2007). That's a reminder that our outcomes have to have common meanings. Common meanings require negotiation and ongoing collaboration, both worth the time and effort.


Students in the neuroscience program at the University of California, Los Angeles reviewed their program learning outcomes after completing their coursework (Hackett et al., 2019). They commented that all of the outcomes seemed quite generic and not attached to their program. They made recommendations such as extending the outcome "effective oral and written communication" with "of research to the scientific community and to the general public." They called one outcome confusing. They recommended that an outcome be added to prepare them to achieve "real-world application of neuroscience research" and another about the ethics of their profession. What helpful information they provided! We will meet them again in chapter 4 when we talk about alignment.

In conversations about learning outcomes, our students' learning will be enhanced when we extend those conversations to the question of why we are focusing learning around a particular outcome. Taking the lead from Winkelmes et al. (2015), we can encourage our student collaborators to suggest reasons for each outcome. Why do you think that achieving this outcome will help you be a better chef? Or a more skilled architect? Or a safer welder? Or an insightful biologist? I am consistently amazed every time I ask students this kind of question. Another question I have used is "Who will be affected by your learning this?" Students truly have brilliant ideas and can make significant contributions to these processes and to their common understandings. They continue to contribute when we move to designing assessments and assignments.

Design of Assessments and Assignments

Once the learning outcomes have been agreed upon, they lead to assignment design and development of rubrics. Here is a creative opportunity to really achieve equity in our assessment. When students are cocreators of assignments, they have the opportunity to reveal their learning preferences and strengths and to guide those assignments to support their own success. Swarup has a great example of designing assessment with his chemistry students at the end of chapter 3.

At this point, I want to encourage agency for students in their assessment. In social science, agency is the capacity of individuals to act independently and to make their own choices. It has been said that no empowerment or transformation can occur without agency. If we are truly working to empower our students and to transform their learning, we must find ways to encourage agency. Another way of saying this is that students can choose assessments that enable them to be successful in demonstrating their learning.


This kind of advice typically causes a bit of an uproar with faculty: "I can't design individual assignments for all of my students." And no, that is not intended, although it is possible. When we begin to encourage agency in assessment, we can achieve it in a variety of ways. Some faculty leave it up to individual students to propose an assignment that would enable them to demonstrate their learning. It's best to have some parameters and a rubric to guide their designs and your assessment of them. I have heard of positive experiences with this approach. Another strategy is to provide three or four ways to demonstrate learning, provided that the assignments can all be reviewed with the same rubric. The assignment options can represent different ways of learning—visual, written, oral, or other options.

To illustrate an approach that goes beyond three or four options, I have to share the choices provided in a high school literature course that my granddaughter took. When I read her assignment sheet, I was cheering internally. The assignment included a set of choices for reporting on four novels to be read. Students had specific items to include in their reports and a rubric for their use and their teacher's use. I will provide just a few of her choices for reporting to give you an idea:

1. Select five gifts for the main character and describe why those gifts would be appropriate for what is happening to the character in the novel.
2. Create a college application that a character from your book would submit to a college. On the application include the name, academic history, extracurricular activities, and work and volunteer experiences that could emerge from the main character's life.
3. Design a book jacket to reflect the story you read. Write one paragraph explaining the choices you made for cover art, colors, organization, and fonts. What makes those choices appropriate for the story?
4. Select five current event articles that you think your main character would be interested in. Write a paragraph explaining what the character would find interesting about those articles based on the novel's happenings.
5. Select an adult character from your book and tell the story of his or her childhood in a way that reveals why she/he is in the novel.
6. Create a map of the settings in your book. Write a paragraph explaining the various locations on the map.
7. Design a room for the character in the story, a room that would mean a lot to that character. Explain, with descriptions and examples, why you designed the room as you did. (Archibald-Betts, personal communication, November 30, 2019)


In addition to those choices, students could report in a PowerPoint presentation, in a written report, or in a comic version, bringing the total to 14 choices. I was thrilled that my granddaughter had such exciting options; she chose a different assignment for each of her four novels. It's not typical to include a high school example in our higher education assessment work, but I'm always looking for teaching and assessment examples, and I occasionally find them in my travels.

Listening to Faculty

On a recent flight, my seat partner was a student at Reed College who described her favorite faculty member as someone who gave such interesting assignments. I was immediately interested but tried not to probe too much. Emily described an assignment that called for responses to readings as one of her favorites because she and her peers were encouraged to try out varied responses:

Your responses are an opportunity to hone in on an aspect of the reading that you found particularly interesting and to respond to it with: What is gripping? What is puzzling? What do you find yourself trying to figure out? Alternatively, what is so unspeakably boring that you can hardly stand to speak of it? Feel free to include pictures or other media. (B. Lazier, personal communication, November 3, 2019)

I finally located the Reed College faculty member (who had been on sabbatical), Ben Lazier, and he shared his final assignment from his Modern Humanities course:

Your response can be as long or as short as you like. Only requirement is to examine in some meaningful capacity the texts and themes broached in the second half of the semester. Beyond that, you are free to introduce any texts or ideas that you think are relevant. You are free to work alone, or in groups, or as a class. You are also free to employ any format you wish: a traditional essay, a play, a dialogue, an epistle, a poem, a graphic novel, an animation, a website, a movie or music video, a song, a radio show, an interpretive dance, performance art . . . as long as you demonstrate analytical savvy, communicate in intelligible fashion, and make it epic.

I realize that I have provided some extraordinary examples, but I want them to prompt your enthusiasm for enabling agency in your assessment. I provided simpler ideas for achieving agency before I introduced the high school English example. Return to those to get started.


I was recently introduced to a study of "meaningfulness" in The Meaningful Writing Project (Eodice et al., 2017), in which seniors were asked which of their writing assignments were meaningful. Their choices had common qualities—they offered students opportunities to engage with the instructor, their peers, and texts, and they were relevant to past experiences and passions as well as to future aspirations and identities. Those qualities can guide the design of all kinds of assignments. More importantly, the researchers asked students what made their assignments meaningful. What a powerful process! It's one that you can apply in your own teaching as well as in reviewing an entire program or a capstone, and it's a powerful theme for a professional development activity or an ongoing faculty learning community.

Before leaving the topic of assignments and other forms of assessment, I want to remind our readers of high-impact practices. Those practices combine validated pedagogical approaches into a single multidimensional activity that engages students over an extended period of time. I bring them to this conversation because they typically benefit all students. If you look closely at many of the practices—ePortfolios, capstones, service-learning, and so on—you will note multiple opportunities for agency in each approach.

With those assignments, we will be developing rubrics, and, again, it's an ideal opportunity for engaging students. In terms of priority, when I consider the use of rubrics, I want them to work well for students first and for faculty next. There are definitely rubrics that are not helpful to anyone. I am talking about rubrics that provide no information about what the criteria mean or what kind of work students must do to meet each criterion. Those rubrics often rely on frequency terms to distinguish the levels in the rubric—often, sometimes, seldom, and never—with no description of the behavior, the skills, the actions, or the product. Such rubrics leave students with no guidance about how their work should proceed or be presented, and they leave faculty with the same gap. Fortunately, Nelson provides both guidance and examples of rubric use in chapter 5 while strongly urging inquiry processes that align with rubric criteria. It's a powerful process that can lead to both assignments and rubrics that truly support student success. As Nelson and his colleagues intend for their future efforts, student collaborators in this ongoing process will help faculty move toward edits, creations, and questions that acknowledge student differences and adjust practices accordingly.

I want to share a strategy I use when I work with faculty or students. I begin by giving them some very poor-quality rubrics and some very good rubrics and ask them to eliminate the rubrics that they would not use. They select their elimination choices with much spirit and are clear in their choices.


From there, with a sense of ownership, everyone engages in editing and improving one of the remaining rubrics. It's efficient, and it appears to be a very satisfying experience with lots of interaction and contributions from both groups.

Before extending strategies for fostering agency in our pedagogy and curriculum, I want to quote a study urging us to realize that "racial diversity matters." At the conclusion of the study, the authors (Denson & Chang, 2009) concurred: "It is becoming increasingly clear that the quality of undergraduate education is appreciably enhanced by diversity-related efforts on colleges and universities" (p. 325). They described programmatic efforts that expose students to content about race and ethnicity, and they argued that structural diversity (student body racial composition) is not enough to maximize educational benefits without diversity-related activities. In some of the literature, actual programs are discussed, but they are separate from our daily interactions with students. I want us to consider integrating our pedagogy and curriculum, both academic and cocurricular, with cultural diversity on an everyday basis. Here are some simple points of emphasis in considering ways to build opportunities for agency in both your pedagogy and your curriculum:

•• Students as the source of important questions for the course, the program, and planning
•• Students as the source of important knowledge for the course or program, such as their own life experiences
•• Students' interpretations and applications of course or activity materials and ideas
•• Students as the source of important readings for the course, with encouragement for locating literature focused on student backgrounds, issues, family origins, history, and so on, with connections to the course curriculum
•• Student choices of projects, problem-solving, and presentations that respond to current issues related to diversity within the course curriculum
•• Speakers and topics that reflect student diversity and interests related to the disciplinary content of the course, to an institutional thematic event, and to student requests
•• Role plays, simulations, drama, art, films, or group discussions that focus on diversity in relation to course curriculum (disciplinary content), institutional mission, celebratory events, and student interest

That's a beginning, and I feel certain that our students can add ideas of their own. What is important is providing openings, facilitating student contributions, and modeling the importance and value of those ideas for all students.


It is critically important to connect those possibilities to forms of both curriculum and pedagogy to ensure deeper meaning for the diversity content. In her blog on culturally responsive assessment, Linda Suskie (2019) reminded us that students learn best and succeed when they find relevance and value in their learning activities and when their learning is related to prior experiences, with concrete and relevant examples. I know that is common knowledge, but it is a reminder for our course planning. In her study of campus climate and student engagement surveys to inform equity-based assessment, Brooks (2019) found that "connected ideas from your courses to your prior experiences and knowledge" had the strongest positive correlation with "learning something that changed the way you understand an issue or concept" for non-White students (pp. 29–30). She concluded that making connections between one's coursework and personal experiences significantly increases opportunities to better understand the coursework.

Our next step in the assessment cycle is gathering the evidence that our assessment provides and using it for improvement.

Collecting, Analyzing, and Using Evidence

Moving through the assessment cycle, I follow the design of assignments and the use of rubrics to the process of actually collecting evidence and analyzing the results. Collection takes different forms depending on many factors: the assignment itself, the technology resources of the campus, student and faculty preference, and workload. The actual analysis process is enhanced and enriched by the voices of students. Elsewhere in this book, I will describe situations in which students have explained the reasons for specific data and, in all cases, their perspectives were accurate and useful for faculty. This is a time in the assessment cycle when faculty collaboration is also essential. It is an opportunity to discuss learning, pedagogy, and assessment itself with actual examples and the rich reflection approaches that Dan Shapiro will describe in chapter 7.

We are guilty of conducting analysis of student learning data too late to support many of our students. We save it for a "break" or for summer so that we can look at the big picture of our programs or our institutional learning outcomes. That's understandable in the life of faculty and their "full plates" of responsibility. However, if we have a commitment to currently enrolled students and their "equitable progress toward attaining a high quality degree," that commitment requires "nimble, targeted, and context-based interventions, strategies, and practices to close existing achievement and graduation gaps" (Maki, 2017, p. 91).


To close such gaps, we must regularly disaggregate results based on student demographics, even on a small scale. Harris and Bensimon (2007) warned that unless we assess and disaggregate the academic outcomes of our students regularly and examine the data as evidence of what is learned, "inequalities in outcome achievement will remain structurally hidden and unattended" (p. 77). Jackie Brooks (2019) insisted that "the goal of equity doesn't mean much if data isn't disaggregated in ways that highlight how social groups experience learning environments differently" (p. 14). Varied studies attest that although most institutions collect assessment evidence, it is not typical practice to disaggregate data by student demographics such as race/ethnicity, income level, and parental education (Maki, 2017). Maki added to the conversation with an insistence on posting results for programs and general education outcomes as well as institutional-level outcomes. Such postings will "fuel on-time interrogation of underperformance patterns reflected in student authentic work" (p. 91).

Other kinds of data can guide course and program development, as practiced at the University of San Diego (USD). USD uses two student surveys to assess factors of diversity and inclusion in the overall institutional climate: the National Survey of Student Engagement (NSSE) and the Culturally Engaging Campus Environments (CECE) survey. When reporting out to various groups on campus, results from the surveys can be disaggregated and tied to student demographics to generate different levels of discussion. For example, results from both surveys can show how experiences, such as the sense of belonging, vary for majority and minority groups of students. Faculty within divisions and student support services can use findings to consider how their approaches might be better integrated. Items from both surveys that are related to diversity in the curriculum can be used to triangulate findings from new and continuing diversity, inclusion, and social justice courses in the core curriculum or from events and programs in student affairs.

One cautionary note regarding the disaggregation of data: At smaller, private institutions, disaggregation can lead to such small counts that the actual numbers of students should be reported instead of proportions or percentages. Tracking these smaller numbers for differences in responses often calls for an intensive, qualitative, and thematic analysis when quantitative comparisons are not possible with such small counts (Huston, 2020). Another caution notes that "simply examining data without examining if the assessment process itself is equitable will lead to continued inequities" (Montenegro & Jankowski, 2020, p. 11). Again, the kind of reflection Dan describes in chapter 7 will enhance those interrogations with real opportunity to support struggling students, to improve their learning, and to increase achievement success.
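For readers who keep student evidence in a simple flat file, here is a minimal sketch of that disaggregation step in Python with pandas. The file name, the column names (score, race_ethnicity, income_level, parental_education), and the small-count threshold are all illustrative assumptions, not a schema the authors prescribe; the suppression step reflects the small-count caution above by reporting only the number of students when a group is tiny.

    import pandas as pd

    # Hypothetical table: one row per student, with a rubric score
    # and the demographic fields your institution collects.
    df = pd.read_csv("rubric_scores.csv")

    # Illustrative threshold below which averages are suppressed and
    # only actual student counts are reported (per the caution above).
    MIN_CELL = 10

    for group_col in ["race_ethnicity", "income_level", "parental_education"]:
        # Count of students and mean rubric score for each group.
        summary = df.groupby(group_col)["score"].agg(n="count", mean_score="mean")
        # Suppress the average for small groups; the count column remains,
        # prompting a qualitative look rather than a percentage comparison.
        summary.loc[summary["n"] < MIN_CELL, "mean_score"] = float("nan")
        print(summary)

Run each term rather than each summer, even a small report like this supports the "real time" commitment described earlier.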


The last step in the use of evidence, after changes have been made, is one more assessment to determine whether improvement has been achieved. I try to understand why this step is neglected or forgotten after our rigorous set of processes, but I have also observed the fatigue and relentless pressure to move on to another outcome and more student evidence. Making changes in our pedagogy, our curriculum, or our assessment practices is a labor-intensive process. Clearly, I want to urge faculty curiosity about the impact of their changes. Blaich and Wise (2011) are certain that "faculty, staff and students are curious but in the multitasking environments in which we work, general curiosity doesn't compete well" against our many responsibilities. Simple reporting of data has little hope of generating the kind of "data informed, continuous, improvement" that many of us hope for (p. 12).

I often wonder: What if it doesn't make any difference? What if it doesn't improve student achievement? For the sake of students, it is critical to know whether our changes are supporting their learning and their achievement of their learning outcomes. I feel certain that we want to know. If we did nothing more than ask our students about the change, we would learn something helpful. Keeping in mind that we have been prepared to be researchers and scholars, the lack of follow-up assessment is hard to accept. One of these days, we will come up with ways to make this happen consistently, and I wish I had some answers now.

Continuing through the chapter, I move to the dynamic of power that is so potent in our classrooms. I want to address it adequately, along with the differences of privilege that students bring.

Power and Privilege: Dynamics of Influence

To begin, "students know from direct and ongoing personal experience how power and partnerships are expressed and practiced in classrooms in ways that often remain invisible to faculty, even as faculty set the ground rules for this relationship" (Manor et al., 2010, p. 3). The writers of this statement were students and faculty collaboratively engaged in the study of teaching and learning. They remind us of the uneven distribution of power between faculty and students and of the impact of lack of power on student responsibility for their learning. There are two very concerning consequences of students' lack of power:

1. Students have a fundamental misunderstanding of learning as a process of transfer of knowledge from teacher to student rather than a process of examination and meaning making by students.
2. Students do not see peers as valuable resources for their own learning, and consequently do not value group discussion and collaborative learning. (Manor et al., 2010, pp. 10–11)


Those reasons alone should urge us to look for ways to share our power. Let's begin by looking at the decisions associated with assignments:

•• Curricular content of assignments
•• Deadlines for assignments
•• Structure of assignments—individual, pair, or small group
•• Format of assignments
•• Rubrics for use in developing assignments and in reviewing and giving feedback on assignments
•• The percentage of each assignment in terms of grades, passing decisions, and so on

Let's begin with curricular content. In many cases, we encourage students to select topics, themes, and questions of interest. As for format, this is an ideal opportunity for students to decide how they best demonstrate their learning. Faculty like Ben Lazier, earlier in the chapter, opened up the choices for his major assignment to almost every format possible. Then there are deadlines; of course, you must work with your own schedule, but you might offer something like an expanded window: "You may turn in your work anytime from October 12 to October 26." You are probably smiling because you know that there are students who will consistently be ready on October 12 and those who will consistently wait until October 26, and that's all right. These are small examples, but they communicate well to students.

Before moving to a discussion of privilege, I want to share a stunning story of a faculty scholar, Ayesha Delpish, who decided to offer ongoing student decisions in her statistics course for first-year students (Delpish et al., 2010). Among other decisions, she determined that turning over group membership and the pacing of learning would release some of her power. She "encouraged groups to be fluid and to shift membership at their discretion" (p. 107). The resulting fluidity allowed students to find group members with similar learning styles. She noticed that visual learners joined together and that students seeking more theoretical approaches found each other. The pace of their work differed, as did their questions and needs. Delpish reported that the experience allowed her to examine the connections between power and motivation. In sum, she described an "authentic collaborative environment in which she and her students all learned to adapt and develop as learners" (p. 111).

The issues of power could fill an entire book or library, but I must turn to privilege before concluding this chapter. Privilege is generally associated with access to resources and opportunities, and privileged students typically have a solid support system, friendships, and social acceptance. We generally see privilege connected to a "sense of belonging."


Those with privilege and those without both define belonging as feeling "comfortable." Minority students expand the definition with "safety and respect." We are currently learning about students who have fewer privileges and who are in a state of "belonging uncertainty." There are institutional supports to help them feel like valued members of the institutional community, and you as their teacher, mentor, adviser, and guide can contribute to these supports:

•• Positive relationships with caring professors and mentors
•• Extracurricular activities and involvement
•• A supportive campus climate that communicates commitment to diversity
•• Opportunities for students to work with peers
•• Opportunities to foster authenticity for all students (Who Am I workshops or self-awareness programs)
•• Opportunities for students to contribute to the institution and to the community and to feel like their contribution has "mattered"
•• Assessment of policies, procedures, and communications for messages of support
•• A visual presence of similar others to eliminate the feeling of being "the only one"
•• Intentional communication to inform students about the presence of other students "like them," of diverse student organizations, diversity events, and student centers for specific populations

On the subject of students working with peers, all of us coauthors have concerns about group work or partner pairings without very specific preparation of students to work together. We recommend that you consult a source on group work (in our Resources section) or a workshop to prepare well for using group work in your pedagogy. It is a powerful pedagogy with the potential to promote equity or to undermine it, so it must be used carefully.

I think you can tell from the list that you will need to be an advocate among your peers to promote institutional change or support for current efforts. You will need to leverage the experience and expertise of professionals from all parts of campus committed to equity and success for all students. A partnership with student affairs is essential to achieving results with your advocacy. Start small with your own pedagogy, curriculum, and assessment and share your stories. Start conversations with questions about students, about their differences, and about how to maximize their success. Start a book club (see the readings at the end of this chapter) or a presentation/speaker series. Remember that narratives are powerful when others are beginning to be interested.


Check in with nearby institutions, especially those campuses that are sources of your transfer students. Try to locate colleges and universities that have advanced their efforts for equity (consult the sources at the end of this chapter). You will be part of a growing community of faculty and students who are working toward equity in higher education.

As I finish this chapter, I share some ethical guides from my colleague Yves Labissiere:

Data and evaluation are tools, not weapons. Data and evaluation are owned by and reflect the community and stakeholders. Assessment may not harm students, teachers, staff, curriculum, pedagogy, and learning. (Labissiere, personal communication, 2019)

With the second guide, he is reminding us that we need additional stakeholders, inclusive participation, cultural responsiveness, and holistically relational approaches. I am grateful for his guidance and support for this chapter. I am also appreciative of the ongoing support, information, and advice from authors whose work enriched this chapter—Natasha Jankowski, Erick Montenegro, Linda Suskie, and, again, Peggy Maki. They lead us to assessment and pedagogy that work toward support for all students to be successful. As promised, resources follow this conclusion.

Resources for Sharing With Colleagues, for a Professional Development Library, for a Book Club, or for Course Readings

From the Association of American Colleges & Universities (AAC&U)

Association of American Colleges & Universities. (2015). Step up and lead for equity: What higher education can do to reverse our deepening divides. AAC&U.

Association of American Colleges & Universities. (2018). A vision for equity: Results from AAC&U's Project on Inclusive Excellence with campus-based strategies of student success. AAC&U.

Witham, K., Malcolm-Piqueux, L. E., Dowd, A., & Bensimon, E. M. (2015). America's unmet promise: The imperative for equity in higher education. AAC&U.

Other Resources

Cohen, E. G., & Lotan, R. A. (2014). Designing groupwork: Strategies for heterogeneous classrooms (3rd ed.). Teachers College Press.


Montenegro, E., & Jankowski, N. A. (2020, January). A new decade for assessment: Embedding equity into assessment praxis (Occasional Paper No. 42). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Shor, I. (2014). When students have power: Negotiating authority in a critical pedagogy. University of Chicago Press.

Resource Individuals and Programs

Joe Slowensky is vice provost for institutional effectiveness and faculty affairs at Chapman University. He supervises the Offices of Faculty Affairs, Institutional Research, Diversity and Inclusion, and Accreditation and Assessment, and the Institute for Excellence in Teaching and Learning. Slowensky currently leads the Chapman Diversity Project, a 200-member, campus-wide, long-term volunteer initiative designed to enhance the campus culture for diversity and inclusion. Joe is a graduate of the Assessment Leadership Academy. He can be reached at [email protected]

Marguerite Bonous-Hammarth is executive director for the Office of Inclusive Excellence at the University of California, Irvine (UCI). She collaborates with faculty, staff, and students to oversee program development and responsive research and assessment for diversity, equity, and campus climate initiatives. She was previously director of assessment, research, and evaluation for the UCI Division of Student Affairs, so she has a wide scope for understanding the institution. She is a graduate of the Assessment Leadership Academy. She can be reached at mbonoush@uci.edu

Jackie Brooks is an assistant professor of sociology at California State University, Sacramento. Brooks's scholarship addresses the experiences of marginalized groups, highlighting the intersecting forms of oppression (e.g., race, ethnicity, gender, and class inequalities) that shape their lives. She works with faculty, students, and administrators to advance equity-based assessment on campus. Currently she is working with campus climate and student engagement data to better understand how the institution's environment influences student learning and success. Her current reader with Heidy Sarabia and Aya Kimura, Race and Ethnicity: The Sociological Mindful Approach, is available through Cognella Academic Publishing. She is a graduate of the Assessment Leadership Academy. She can be reached at jacqueline [email protected]

University of Michigan launched its 5-year Strategic Plan for Diversity, Equity and Inclusion in 2016. As part of the plan, the institution established a central office for DEI with a chief diversity officer, Robert Sellers, vice provost for DEI. Within a few years, there were 50 plans from individual schools, colleges, and campus units, as well as institutional strategies for campus-wide efforts. Check their website for annual reports.


References

Adelman, C. (2015). To imagine a verb: The language and syntax of learning outcomes statements. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Blaich, C., & Wise, K. (2011). From gathering to using results: Lessons from the Wabash National Study (pp. 3–17). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Brooks, J. (2019). Using campus climate and student engagement surveys to inform equity-based assessment (Project/report for the Assessment Leadership Academy). Assessment Leadership Academy.

Brooks, J., Sarabia, H., & Kimura, A. (2021). Race and ethnicity: The sociological mindful approach. Cognella Academic Publishing.

Delpish, A., Darby, A., Holmes, A., Knight-McKenna, M., Mihans, R., King, C., & Felten, P. (2010). Equalizing voices: Student-faculty partnership in course design. In C. Werder & M. M. Otis (Eds.), Engaging student voices in the study of teaching and learning (pp. 96–114). Stylus.

Denson, N., & Chang, M. (2009). Racial diversity matters: The impact of diversity-related student engagement and institutional context. American Educational Research Journal, 46(2), 322–353. https://doi.org/10.3102/0002831208323278

Driscoll, A., & Wood, S. (2007). Outcomes-based assessment for learner-centered education: A faculty introduction. Stylus.

Eodice, M., Geller, A. E., & Lerner, N. (2017). The meaningful writing project: Learning, teaching and writing in higher education. Utah State University Press.

Gay, G. (2000). Culturally responsive teaching: Theory, research, & practice. Teachers College Press.

Hackett, C., Yokota, M., & Wahl, K. (2019, April 23). Curriculum maps: Ensuring intentionality and reflection in program planning efforts [Paper presentation]. Academic Resource Conference, Garden Grove, CA.

Harris, F., & Bensimon, E. M. (2007). The equity scorecard: A collaborative approach to assess and respond to racial/ethnic disparities in student outcomes. In S. R. Harper & L. D. Patton (Eds.), Responding to the realities of race on campus (New Directions for Student Services, no. 120, pp. 77–84). Jossey-Bass.

Huston, C. L. (2020, November). Strategies for change: Equity in our assessment practices. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).


Jack, A. A. (2019, September 10). I was a low-income college student: Classes weren't the hard part. New York Times Magazine (Education Issue). https://www.nytimes.com/interactive/2019/09/10/magazine/college-inequality.html

Maki, P. L. (2017). Real-time student assessment: Meeting the imperative for improved time to degree, closing the opportunity gap, and assuring student competencies for 21st-century needs. Stylus.

Manor, C., Bloch-Schulman, S., Flannery, K., & Felten, P. (2010). Foundations of student-faculty partnerships in the scholarship of teaching and learning. In C. Werder & M. M. Otis (Eds.), Engaging student voices in the study of teaching and learning (pp. 3–15). Stylus.

Montenegro, E., & Jankowski, N. A. (2017, January). Equity and assessment: Moving toward culturally responsive assessment (Occasional Paper No. 29). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Montenegro, E., & Jankowski, N. A. (2020, January). A new decade for assessment: Embedding equity into assessment praxis (Occasional Paper No. 42). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Palmer, P. (1998). The courage to teach: Exploring the inner landscape of a teacher's life. Jossey-Bass.

Schoenbach, R., Greenleaf, C., & Murphy, L. (2012). How reading apprenticeship improves disciplinary learning in secondary and college classrooms. John Wiley & Sons.

Suskie, L. (2002, Spring). Fair assessment practices: Giving students equitable opportunities to demonstrate learning (Adventures in Assessment No. 14, pp. 5–10). SABES/World Education.

Suskie, L. (2019, June 8). A common sense approach to assessment in higher education. https://www.lindasuskie.com/apps/blog/show/46823388-culturally-responsive-assessment

Vaccaro, A., & Newman, B. (2016). Development of a sense of belonging for privileged and minoritized students: An emergent model. Journal of College Student Development, 57(8), 925–942. https://doi.org/10.1353/csd.2016.0091

Winkelmes, M., Copeland, D. E., Jorgensen, E., Sloat, A., Smedley, A., Pizor, P., & Jalene, S. (2015, May). Benefits (some unexpected) of transparently designed assignments. The National Teaching & Learning Forum, 24(4), 4–7.


3

LEARNING OUTCOMES

Engaging Students, Staff, and Faculty

Swarup Wood

Very early in my teaching career I began to notice something disturbing in my classrooms—that no matter how energized I was, or how entertaining I believed my lectures were, my environmental science students looked bored. Worse than this, my experience of their boredom was demoralizing. It was 1999, and Amy Driscoll had just come to California State University, Monterey Bay (CSUMB) as director of teaching, learning, and assessment. One afternoon, mulling the challenge in my classroom over a cup, she asked, "What would it be like to give your students more of a say regarding how they should go about meeting the course outcomes? I mean, it's a GE course, right? The outcomes are now pretty clear. Seems like a great opportunity to let them lead and see what happens."

I went to my students the following week with an honest discussion regarding my observations, my concerns, and my thoughts on remedies. I offered to let them take on the challenge of determining how they would meet the course learning outcomes (yes, they had to actually demonstrate that they'd met them), and I promised that instead of lecturing, I would work to support their plans and projects. The vote to adopt the proposed change was unanimous, and what followed transformed the class. The students quickly determined which environmental issues they were most interested in (and hence the ones upon which the course would focus), organized groups that would gather information on the topics, and developed presentations and/or classroom activities. I provided guidelines and criteria for the groups, for the information they gathered, for their presentations, and for the exam questions they produced. What a stunning experience for all of us. The students were amazed at finding themselves in the driver's seat. I was amazed at their phenomenal driving—at how able they were to use the learning outcomes to organize their group projects, presentations, and exam questions.


It was the conversation that launched a thousand classroom experiments and a most profound experience. It showed me how much students can handle when given the wheel and the value of learning outcomes as organizing principles for curriculum and pedagogy. I think it's fair to say that I've been learning from my students ever since.

Learning outcomes have profoundly changed the ways I connect students to what I want them to learn. Of all the pedagogical approaches and ways to improve teaching and learning I have read about and engaged with over the past 23 years, the use of learning outcomes has been the most impactful. This chapter is dedicated to helping our readers use learning outcomes (LOs) as organizing principles to actualize broad educational goals. As I've imagined our readers, perhaps brand-new faculty who are thrust into the world of LOs and assessment, faculty who are overseeing program assessment, new directors of teaching, learning, and assessment centers, or faculty or administrators serving on an accreditation team and wanting to be as helpful as possible to the institution they're serving, I've worked to fill the chapter with effective strategies, practices, and ways to get the most from your work with LOs. If you are a new faculty member struggling to understand and use LOs as organizing principles for your courses, this chapter will help you not only in your work with your own students but also in your contributions to departmental program review, faculty learning communities, and accreditation work on campus.

In an effort to practice what I preach, I've outlined this chapter by way of LOs. I won't conclude our time together with a high-stakes exam, but I hope you'll notice that I've endeavored to make these outcomes reasonable—that is, things that you should actually be able to do after reading the chapter.

LOs for Chapter 3

LO 1: Explain the difference between learning goals and LOs and the value each contributes to assessment
LO 2: Choose verbs that are "observable in external contexts and subject to judgment" (Adelman, 2015, p. 7)
LO 3: Develop effective LOs
LO 4: Design engaging conversations for effective faculty development around LOs
LO 5: Describe strategies for collaborating with students in the design or assessment of LOs

These LOs will guide our time together here; I hope they will serve you well as you use this chapter in making LOs central to your teaching and your campus’s educational practice.


LO 1: Explain the Difference Between Learning Goals and LOs and the Value Each Contributes to Assessment

You can tell that the field of assessment is relatively young because the literature presents no consensus on assessment's most basic terms. Even so, for assessment work to reach its full potential for improving teaching, creating meaningful assessment data, and enhancing student learning, you and your colleagues will need to agree on definitions of terms.

•• Goals: Goals are broad overall destinations and are frequently aspirational; they lack sufficient detail to be measurable. By this definition, critical thinking, information literacy, and social justice are examples of goals.
•• Learning outcomes: According to Adelman (2015), LOs are student actions centered on operational verbs that can be observed in external contexts and are subject to judgment. By external contexts Adelman means outside of a student's mind, which isn't true for several verbs commonly used in LOs. For example, for the goal of critical thinking, and using the Association of American Colleges & Universities (AAC&U) definition, we can consider the following LOs:
•• Create an evidence-based argument informed by multiple perspectives
•• Discuss the author's own assumptions/biases, as well as the assumptions/biases of the authors whose evidence is used in the argument
•• Describe conclusions of the argument, including the consequences and implications of those conclusions

This list is not comprehensive. I'm sure we can all imagine external contexts (e.g., assignments) in which we could assess student performance for each of these three LOs. We'll revisit the importance of defining terms in subsequent sections, but please know that because different groups are likely to have different tacit understandings of terms like critical thinking, it is helpful to make developing explicit shared understandings an early part of the process. I can tell you from experience that trying to redefine them after the fact is most challenging (Driscoll & Wood, 2007).

LO 2: Choose Verbs That Are "Observable in External Contexts and Subject to Judgment" (Adelman, 2015, p. 7)

In my experience, most graduate programs don't include much curriculum on learning to learn. For our readers who are new to this area, I want to introduce a few essential taxonomies.


First published in 1956, Bloom's taxonomy of educational objectives (Bloom, 1956) calls our attention to six cognitive domains: knowledge, comprehension, application, analysis, synthesis, and evaluation. The domains are arranged from simple to increasingly complex, and the taxonomy has evolved considerably, expanding subsequently into noncognitive domains. The domains are a great reminder of the complex scope of our work with students, and they have an important role in the construction of our LOs. Assessment coordinators and faculty developers will find Bloom a great conversation starter when helping faculty out of that pregnant pause that frequently follows "What do you want students to learn?" It will help faculty visualize what they are thinking, as well as verbalize the implicit frameworks and skills they want their students to embody.

Dee Fink's taxonomy of significant learning (2003), a successor to Bloom's work, is a holistic taxonomy comprising six domains of learning that can be used to shape learning experiences (see Box 3.1). The elements of this taxonomy are not hierarchical but iterative, and the dimensions take advantage of what we know about learning in the context of being human. In it, we can see that students' motivations, social needs, emotional engagement in classroom settings, and so on become important as we shape classroom experiences to optimize learning. Although highly deferential to Bloom's work, Fink developed this taxonomy because of his observation that Bloom's taxonomy failed to capture important elements of the educational enterprise.

A note of caution: Both Bloom's and Fink's works are fabulous resources for developing curriculum, thinking holistically about appropriate pedagogies, and supporting LOs. However, explicit in their taxonomies are verbs that do not belong in outcomes. Just what kinds of verbs do belong in LOs? I am glad you asked. In To Imagine a Verb, Adelman (2015) gave surefooted guidance on the construction of LOs. He championed LOs that communicate clearly to students and stated that LOs must be centered on "operational verbs that can be observed in an external context and are subject to judgment" (p. 7). The article has a whole section in which the author articulately eliminated verbs and adjectives frequently found in LOs. Verbs such as understand, recognize, develop, relate, consider, prepare, comply, reflect, realize, anticipate, foresee, review, extend, and work don't belong because, as he reminded us, they don't "produce observable behaviors or objects" and cannot be observed outside of a student's brain. Along with his well-developed rationales for the verbs that are, and are not, part of LOs are his 20 groups of verbs organized by area. For example, if you want to help students unpack the "cognitive activities we group under analyze," consider the following verbs: compare, contrast, differentiate, distinguish, formulate, map, match, and equate.


Or for faculty developing LOs focused on "how students valuate objects, experiences, texts, productions etc." he suggested the verbs audit, appraise, assess, evaluate, judge, and rank (Adelman, 2015, p. 18). Engaging Adelman's work will allow you to benefit from Bloom and Fink while still producing excellent, assessable LOs. Finally, I believe you'll find that Adelman's thinking works well with the SMART framework for developing LOs in the next section.

Box 3.1
Major Categories of Dee Fink's Taxonomy of Significant Learning

•• Foundational knowledge: These are things so important that we want students to commit them to long-term memory.
•• Application: What we want students to be able to do long term. Verbs such as create, analyze, and evaluate all apply.
•• Integration: This area is all about making connections. Consider the kinds of things we want students to be able to connect to our course materials, such as theory from other courses, personal experience, and so on.
•• Human dimension: Fink (2003) described this dimension as "two closely related components: learning about oneself and about others" (pp. 80–81). Fink encouraged use of this dimension to make our teaching more relevant to students, and thus increase their level of engagement.
•• Caring: Most of us want our students to appreciate the subject of our course so much that they go on to pursue further study, so much that it influences their future choices. Fink encourages us to make this goal explicit.
•• Learning how to learn: A complex dimension that has "three distinct forms: becoming a better student, learning how to inquire about this particular subject matter, and becoming a self-directed learner." Engaging students in metacognitive skill-building is a natural path into this dimension (Fink, 2003, pp. 80–81).

LO 3: Applying Resources for Developing Learning Outcomes
"What do you actually want students to learn?" was the question Amy approached a group of us with in 1999. As a fresh-out-of-school, newly minted faculty member, I remember thinking, "Well, I'm not used to classrooms being organized around what faculty want students to learn." During my undergraduate education, it seemed to me rather that the textbooks drove much of what we studied. Faculty assigned chapters or other readings and developed lectures that largely paralleled those readings. Occasionally faculty gave us study guides, but frequently the exams (three per term) seemed to spring from the hidden recesses of the faculty's minds. I didn't experience many attempts by those faculty to be transparent regarding what they wanted me to learn. From the slightly unsettling and uncomfortable feeling in the room, I gathered Amy had something completely different in mind.

Questions to Help Us Think About Assessment
Well, what do we want students to learn? How would we know if they had learned it? What does that learning look like as it manifests in assignments and student work products? If we knew what we wanted students to learn, could we describe it at several different levels of achievement? Could we structure those descriptions in rubrics in ways that were helpful to both faculty and students? What kinds of teaching practices (pedagogies) would most help students embody the skills and abilities embedded in our LOs? Wiggins and McTighe (2005) used these kinds of questions in developing "backward design," which starts with the end in mind and then considers pedagogy, curricula, and how students will be assessed. Few of us except those with advanced degrees in education have faced such questions before leaving graduate school. For the rest of us, these questions provide entrée into developing LOs and assessment practices. They also force us to engage our disciplinary expertise in ways that few questions can. Finally, these questions begin to shed light on the chasm between knowing a lot about something and knowing a lot about how to teach it—what Lee Shulman (1986) called "pedagogical content knowledge" (p. 9).

SMART Learning Outcomes
If you're new to developing LOs, please do yourself an enormous favor by starting with a simple, pragmatic framework; working with colleagues to develop a shared understanding of that framework; and keeping it in the forefront of your work as you develop LOs. SMART is one such framework. Here are the SMART criteria, each followed by questions to help you consider whether the ideas you're working with meet that criterion:

• Specific: Exactly what is it that we want here?
• Measurable: Can we measure or even see "understanding"? No, but we are all familiar with things that are measurable and with student work products to which rubrics can be applied.
• Action-oriented: What will students do (think verb) or produce that will demonstrate their achievement of the outcome?
• Reasonable: Is the expectation commensurate with the course/degree level?
• Time-bound: When will we assess this outcome—end of the week, end of the course, end of the program?

For example, let's apply the SMART framework to the following LO: "Understand the scientific method." Is this measurable? Not yet. Action-oriented? No. Understand is a verb but is not very helpful with respect to communicating what we want from students. Adelman would remind us that it can't be observed in an external context. Nor, as stated, is it subject to judgment. Reasonable? Yes. Time-bound? Not yet. Perhaps it could be rephrased into two outcomes: "By the end of the course, given a set of observations, students (a) state a hypothesis and (b) design an experiment to test that hypothesis." This two-part LO meets all of the SMART criteria and uses verbs that both produce work products outside of the student's mind and are subject to judgment. One can imagine a rubric to accompany the outcome that would detail the required components of the experiment as well as qualities of the hypothesis.

Resources for developing SMART LOs abound on the web, and, like the assignment guides presented in Nelson's chapter 5, most of these sources help users interrogate their thinking by looking at their LOs through different pragmatic lenses. Please know that there are many variations on the SMART framework, as well as entirely different frameworks. Although approaching LOs through these kinds of lenses is necessary in developing assessment, it is perhaps even more important in using LOs to organize curricula and pedagogy, which we'll return to later.
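To make Adelman's verb test concrete alongside SMART, here is a minimal sketch of a verb screen. It is our own illustration, not a tool from the assessment literature; the function name and the crude word-matching heuristic are invented for this example.

```python
# Illustrative sketch: a crude screen for the non-operational verbs that
# Adelman (2015) rules out of LOs (the list quoted earlier in this chapter).
NON_OPERATIONAL = {
    "understand", "recognize", "develop", "relate", "consider", "prepare",
    "comply", "reflect", "realize", "anticipate", "foresee", "review",
    "extend", "work",
}

def flag_weak_verbs(outcome: str) -> list[str]:
    """Return any non-operational verbs found in a draft learning outcome."""
    words = outcome.lower().replace(",", " ").replace(".", " ").split()
    return [word for word in words if word in NON_OPERATIONAL]

draft = "Understand the scientific method"
revised = ("By the end of the course, given a set of observations, students "
           "state a hypothesis and design an experiment to test that hypothesis")
print(flag_weak_verbs(draft))    # ['understand'] -> rework with operational verbs
print(flag_weak_verbs(revised))  # [] -> passes this crude screen
```

No script can judge Specific, Reasonable, or Time-bound; those remain faculty conversations, which is rather the point of this chapter.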

If you are an old hand with LOs, you are probably familiar with SMART and may even have a critique or two. I've worked with many faculty groups who felt that the outcomes resulting from a SMART framework had been digested to the point of being rather flavorless and aspirationally lacking (though highly pragmatic). I believe a solution to this challenge is to develop (and share or publish) goals before using the SMART framework to develop assessable LOs. Goals can be aspirational and holistic, and because we aren't assessing them directly, they don't have to be grounded in measurability. Assessment coordinators will find it very helpful for faculty to be able to differentiate between goals and LOs. If they can't, your colleagues are likely to produce LOs that aren't actually measurable.

In working with faculty to develop effective LOs, don't forget that in spite of our priding ourselves on our ability to think rationally in academic work, how these goals and outcomes make faculty feel will be the real driver. Faculty are passionate and are rightfully reluctant to surrender our vision in efforts to develop practical, workable solutions that have the mouthfeel of cardboard. SMART LOs can emerge naturally out of goals as faculty recognize assessable, practical aspects of our prized goals. Even though the "nuts and bolts" LOs may not be particularly inspiring, the goals can be, and when they are published with the LOs, the two can become an inspiring, practical whole. As I finish this section on applying resources for developing LOs, I want to call your attention to two important frameworks that have potential as resources.

Liberal Education and America's Promise (LEAP) and the Degree Qualifications Profile (DQP)
It would be hard to overestimate the importance of the roles that AAC&U's LEAP Essential Learning Outcomes and Lumina's Degree Qualifications Profile (DQP) have played in shaping accreditation and the last decade of the assessment movement. The two have been adopted and promulgated by many of the regional accrediting bodies, from which member institutions take considerable direction. The two have much in common. Both are serious attempts to describe what students should know and be able to do as a function of their education and to make higher education more transparent (McKiernan & Birtwistle, 2010). Both highlight the importance of high expectations and push institutions to organize their curricula around proficiencies or "LOs" as opposed to seat time and the accumulation of credits. And both aim to help institutions better connect their teaching and curricula to the skills and knowledge the country's college graduates will need in the future. LEAP focuses solely on undergraduate education, whereas the DQP is geared toward both undergraduate and graduate education. The LEAP Essential Learning Outcomes, and the AAC&U VALUE rubrics that followed, have been enormously helpful in pushing higher education's thinking about the skills and abilities embodied in terms like quantitative reasoning and information literacy. However, because they are not measurable, I believe AAC&U's unfortunate use of the term Outcome in naming the LEAP Essential Learning Outcomes has had the unintended consequence of contributing to the confusion around definitions that has mired higher education's attempts to measure student achievement within and across institutions. The DQP has five categories of learning (similar to those named as the LEAP Essential Learning Outcomes), each followed by genuine LOs.


LO 4: Design Engaging Conversations and Effective Faculty Development Around Learning Outcomes
LOs allow students to focus and prioritize their efforts (Ewell, 2016). For these efforts to increase student performance, there has to be a shared understanding between the teacher and the student regarding what those outcomes mean. Further, if student efforts are to result in better performance on general education, program, and university-level LOs, then faculty within and between departments have to develop shared understandings of what those LOs mean. This is much more easily assumed than accomplished. In addition, we can't begin the important work of alignment without first coming to common understandings of our LOs. I appreciate the way Jankowski and Marshall (2017) defined alignment and the import of talking about alignment: "If we accept defining alignment as exploring relationships among the various elements of the learning system to support shared ends, the importance of another element of our paradigm becomes clear: The learning system is consensus-based" (p. 69). Without shared definitions, assumptions, and frameworks, there can be no consensus. Note that developing shared definitions is very challenging, and it's easy to understand why. Until faculty have worked to develop shared definitions, the fact that we don't have them is largely hidden. We're all using the same words; it's natural to assume we mean the same things (Driscoll & Wood, 2007; Jankowski & Marshall, 2017). The practices in Amy's chapter 4, coming up next, as well as the "Using Rubrics for Building Conversations" section later in this chapter, will help you make alignment a reality, along with the shared understandings of what you're working to align.

When Amy and I (Driscoll & Wood, 2007) finished writing Outcomes-Based Assessment for Learner-Centered Education, we had worked with faculty groups across the country. Two of the things that stood out were how frequently faculty take an assumed understanding of language for granted and, when they became aware of the assumption, how much they appreciated coming to consensus. Those faculty groups frequently discovered that without talking in depth about their goals and LOs, they lacked shared understandings of what their goals and outcomes actually meant. Those conversations were rich, engaging, and rewarding, as was watching faculty probe and engage each other's thinking and the different meanings they assigned to the same words. We also saw that these processes and conversations about the meanings of our LOs were a great means of building community (Hutchings, 2019).

Considering Equity
Recently, the nation's public universities have become intensely interested in student success and graduation rates. They have focused on how to reshape their bureaucracies to help students, particularly first-generation and underrepresented minority students, navigate their way to graduation (American Association of State Colleges and Universities, 2016–2018). Increasing equity and eliminating equity gaps are important in these efforts, and this student success work can provide great entrée into conversations about LOs. In addition to helping students focus their effort toward learning (Ewell et al., 2017), well-written LOs can give students deep insights into faculty expectations that would otherwise be unintentionally hidden from them. Going back to critical thinking, which is a goal in many courses: when we add well-developed LOs to that goal, faculty expectations around "critical thinking" take shape; concrete, explicit expectations manifest in the LOs. Without publicly stated LOs, faculty typically hold these schemas internally, even implicitly. From an equity perspective, there are strong arguments for making teaching as transparent and explicit as possible, because differences in shared experience (especially experiential differences arising from race, culture, and class) are likely to make some of our students less able to intuit what faculty intend. Explicit LOs help students place themselves within the set of course expectations (What aspects of this LO can I do? Which ones can't I do? Do I even know what this word means?).

I would like to finish this section on equity by reminding readers that, from at least one perspective, LOs and many of our other efforts to make our expectations transparent to students are part and parcel of the system that marginalizes the very students we seek to serve (Montenegro & Jankowski, 2020). According to composition scholar Asao Inoue (2019), our outcomes embody a White racial habitus (p. 16) that privileges students who come from White, middle-class backgrounds. The LO "Students will write in standard edited English" is one such outcome because it privileges the cultural norms of White, middle-class English. Inoue preferred to frame learning in terms of goals, which he believed left room for alternative ways of knowing. Yet his main critique focuses on grading students according to White norms. Even if we don't grade students according to evaluations of their work based on the outcomes, we can still use the outcomes to support student improvement and learning, always keeping in mind that our outcomes are invented and informed by disciplinary perspectives that are infused with the cultures in which they developed. In chapter 2, Amy encouraged you to include diverse groups of students with different cultural lenses in work sessions to design, edit, and explain outcomes. I'll encourage you to engage yourself and your colleagues in conversations around the extent to which your assessment practices increase equity or, conversely, reify the equity gaps that exemplify the status quo. Assessment should be crafted to ensure support for all students. To achieve this aspiration, I think you'll agree that equity-focused conversations are of paramount importance.


Practical Considerations
For our readers who are working with faculty groups and considering the nuts and bolts of developing conversations around LOs and the assignments and rubrics that follow, here are some elements to consider. Most of these types of conversations mix faculty expertise with looking at the familiar through a new and different lens. We encourage you and your colleagues to engage in these conversations. We've found the following practices very helpful:

• Lead with questions—developing outcomes, assignments, and rubrics is inherently an inquiry-based, faculty-driven process.
• Create supportive environments in which faculty can use their expertise.
• Acknowledge that this is a social process—anticipate the social needs of your colleagues.
• Socialize; prepare to enjoy this work. The work is intellectually challenging and gratifying.
• Remind colleagues that community building is as important as the assessment results.
• Develop good rapport with faculty before challenging them overmuch.
• Seek to lead from behind.
• Feed their bellies and their brains—break bread together.

For facilitators of this work, if you can't pay folks, feeding them is a great way to decrease anxiety around working in areas in which they may lack expertise or feel exposed. Neurologically, people are less apprehensive when fed and less apprehensive in social situations. Breaking bread together is a great means of creating comfortable conditions. We go next to one of my favorite ways of generating excellent assessment-focused conversations—using rubrics.

Using Rubrics for Building Conversations
One of the most effective and practical means we have found for engaging in these kinds of conversations is developing rubrics and using rubrics to assess student work. In a well-connected world, assignments are designed to give students practice with LOs and to facilitate measurement of student achievement of LOs. Rubrics contribute to this by focusing different attributes of the assignment on key aspects of the LOs. Similarly, the criteria or elements of a rubric are focused on key aspects of LOs. For example, it is easy to imagine a research paper with the following LO as part of the assignment: Evaluate information sources using criteria such as expertise, authority, audience, purpose, and context. A rubric used to assess this LO must describe "expertise, authority, audience, purpose, and context (of the information used)" at different levels of mastery. Developing such a rubric would require considerable discourse, even for the most like-minded faculty group.

Conversations in which faculty work together to develop elements of a rubric and differentiate levels of achievement are intellectually challenging and "work to make the implicit, explicit" (Jankowski & Marshall, 2017, p. 52). As such, they tend to create opportunities for faculty to see into and inquire about each other's thinking. They also surface subtle (and not-so-subtle) differences in how LOs interact with the implicit schemas faculty hold. In addition, differences in the ways faculty understand terms come to light in these conversations. When working with interdisciplinary groups, these discussions give faculty experience with new disciplinary frameworks. My coauthors and I have found that the kinds of conversations that arise when faculty develop new rubrics or use rubrics to collaboratively assess student work are some of the most powerful kinds of faculty development. The assignment guides that Nelson presents in chapter 5 are another great way to engage faculty in this kind of thinking.
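Underneath the prose, a rubric is a small data structure: criteria crossed with levels of mastery, with a negotiated descriptor in each cell. For readers who like to see that structure laid bare, here is a minimal sketch built around the information-literacy LO above; the level names and all descriptors are hypothetical placeholders for what a faculty group would actually hammer out.

```python
# Illustrative sketch: a rubric as criteria x levels of mastery.
# Every descriptor below is an invented placeholder, not a finished rubric.
LEVELS = ("beginning", "developing", "mastery")

rubric = {
    "authority": {
        "beginning": "Accepts sources at face value; author credentials unexamined.",
        "developing": "Notes author credentials but not their relevance to the claim.",
        "mastery": "Weighs author expertise and publication venue against the claim made.",
    },
    "purpose": {
        "beginning": "Does not distinguish informative from persuasive sources.",
        "developing": "Identifies a source's purpose but not how it shapes content.",
        "mastery": "Explains how a source's purpose and audience color its evidence.",
    },
    # ...expertise, audience, and context would be filled in the same way
}

def describe(criterion: str, level: str) -> str:
    """Look up the negotiated descriptor for one cell of the rubric."""
    return rubric[criterion][level]

print(describe("authority", "developing"))
```

Every empty cell in that grid is a conversation the faculty group has to have; that, not the code, is the point.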

Procedural Knowledge Versus Content Knowledge If you’re hoping to engage in this kind of work, I strongly recommend Michael Carter’s (2007) “Ways of Knowing, Doing, and Writing in the Disciplines.” As we rub up against the challenges of developing disciplinary LOs, part of the difficulty includes being able to articulate what it takes to be able to think as a professional in a given discipline. Carter illustrated the challenge as “the difference between knowledge and knowing, that is, disciplines as repositories and delivery systems for relatively static content knowledge versus disciplines as active ways of knowing” (p. 387). What are this discipline’s core ways of knowing? How do we know what we know? What factual knowledge must students know as majors in this discipline? Carter distinguished the first two questions from the third as procedural knowledge and content knowledge, respectively. From my experience working with science faculty, the content outcomes tend to come readily, but the “ways of knowing” or procedural knowledge can be a bit trickier. This is a wonderful framework for helping a discipline’s procedural knowledge emerge from faculty conversations.

The Tuning Process
For faculty developers looking for a well-established method of entry into this work, the DQP's (n.d.) "Tuning Process" has been used on hundreds of campuses. It brings together disciplinary peers from different institutions to develop common disciplinary LOs (Adelman et al., 2014; Jankowski & Marshall, 2017). The process also attends to working toward institutional implementation, easing transfer between institutions, and developing career pathways. Although many of the examples in this section draw on personal experience, the literature devoted to tuning is substantial; a journal devoted to tuning (Tuning Journal of Higher Education) launched in 2013. Tuning was originally developed in Europe as part of the Bologna Process and was used in the United States as a means of engaging institutions in the DQP (McKiernan & Birtwistle, 2010).

Updating or Reworking Established Learning Outcomes
CSUMB recently had a powerful experience applying the SMART framework to existing LOs, and although it may be much smarter and easier to start with SMART, applying it to existing LOs can be very fruitful as well. A few of us at CSUMB had been trying to find a nonthreatening means of reworking the general education LOs to be, well, higher functioning. We began wondering aloud whether the Assessment Committee might conduct a collegial review of the general education LOs through the SMART framework. Subsequently the Assessment Committee made lovely, collegial outreach to the General Education Curriculum Committee with this suggestion, noting that feedback would be framed in the form of tactful questions and recommendations. The committee set to work in small groups, looking at the GE LOs through the SMART framework and applying Adelman's (2015) "external context" test that I discussed earlier in the chapter. The small groups brought their suggestions to the whole Assessment Committee for vetting, and the vetted suggestions were then forwarded to the GE Committee.

The committee found many issues. Some of the outcomes didn't use verbs that could be observed in an external context (Adelman, 2015). More than a few LOs had more than one verb, and we recommended that these be pared down. Finally, and you've probably run across this on your campus, sometimes GE LOs look as though they were written by faculty who thought "this course will be a student's only opportunity to learn all that is worth knowing in my discipline"—the resulting LOs seeming rather more rigorous than a sophomore typically bargains for. The R in at least one SMART model stands for reasonable. It's a great criterion for these kinds of situations. The GE committee didn't accept all of the suggestions, but the dialogue that followed was thoughtful and reflective, and the vast majority of the LOs were improved. It was an example of a great, iterative, consensus-building process, and of faculty development focused on LOs that raised the level of discourse around LOs among many faculty. Wrapping up our conversations about LOs, we'll now look at them as organizing principles for our curricula and pedagogical approaches.


Learning Outcomes: Focal Points for Curriculum and Pedagogies
If you've been looking for the most important section of the chapter, look no further. To approach teaching and curriculum development from an outcomes-based framework is to approach it through a destination-centered lens. Once we truly understand the destination, we are positioned to chart a course (pun and metaphor intended) to that destination. The opportunities that our LOs provide come in many layers, and I'll work to disentangle those layers in the following sections, moving from the most obvious layers to the more subtle ones. We'll start by considering connections between institutional, program, and course-level LOs. From there, we'll dive into how to support LOs throughout program curricula, and, perhaps most importantly, we'll look at how to select the pedagogical approaches most appropriate to the learning we wish to create. Because my discussion of these layers connects so strongly to discussions of alignment and mapping exercises, you'll probably begin to scheme on mapping and alignment plans as you read. In addition to Amy's chapter 4 on alignment, coming next, I strongly recommend chapter 4 of Jankowski and Marshall's (2017) Degrees That Matter.

Institutional Learning Outcomes
Note that, as I've defined terms in this chapter, many institutions' institutional learning outcomes (ILOs) are actually goals rather than genuine LOs. This section applies as well to developing institutional goals as to genuine LOs. If you need to develop ILOs or institutional goals but are having trouble getting started, try imagining an elevator conversation in which an acquaintance asks what is special about your institution. My guess is that for most of you, a couple of very wonderful things come to mind that resonate far beyond a few individuals. Let those be your starting points. Given that these resonate deeply throughout the campus community, they are probably already happening and are perhaps even well developed.

A beautiful example of this occurred when the University of San Francisco finished developing a new mission statement that represented the campus, location, and culture well. There was pride and enthusiasm about the mission. When the time came to develop ILOs, a group of faculty and administrators met and dissected the mission statement into key phrases. Those phrases led easily to goals, and then to ILOs (University of San Francisco, 2014). It was a good fit when finished. The work was circulated across campus for departmental review, and there was little disagreement and lots of support. When you have a moment, please take a look at their work in the next chapter (Figure 4.4); it is a wonderful example of how connected to institutional mission ILOs can be. A grid illustrating the process can be found there as well. Mapping exercises like those described in that chapter, in which groups of faculty approach ILOs through several different lenses, would quickly yield information on where in the curricula the ILOs are present, how well they are developed, what kinds of pedagogical approaches faculty use, and what kinds of extracurricular support they already have.

In chapter 6, Amy describes Jennifer Hirashiki's (2019) project to bring together faculty from education and business to study each other's assessment practices and student evidence, with encouraging and effective results. Jennifer paired four faculty from business and four from education for conversations about their respective assessments of two common ILOs. Participants reviewed student work in the other discipline, guided by master's-level common ILOs in information literacy and quantitative reasoning, and used a related rubric from their own discipline. They agreed on the strengths of individual student work and areas that needed improvement. More importantly, their process yielded insights into how to use the student evidence to improve programs. Finally, participants went beyond the initial focus to recommend course scaffolding, improved syllabus layout, and rubric design.

Pat Hutchings's (2019) case study of Washington State University speaks volumes about the shared ownership of programmatic curricula that can come from engaging faculty in ILO assessment. Over the years, WSU deployed a variety of strategies:

• Committing to long-term improvement (being intentional about taking on bite-sized pieces was key)
• Taking advantage of work faculty were already doing
• Working to raise the status of assessment and student learning by incorporating it into the faculty rewards system (Hutchings, 2019)

A note of caution and an interesting challenge: Please don't let requirements set out by accreditors drown the life out of these discussions. Yes, your campus might have significant work to do on critical thinking or quantitative reasoning (as most do), but letting what "must" be done be the primary driver of these discussions is a buzzkill. Additionally, engaging in the discovery process of addressing "what is special about a degree from our amazing institution, and what does/should this look like in our curricula?!" is a fabulous opportunity to build community.

Program Learning Outcomes
Strap on your lower back support, for this is where the heavy lifting is required. And it's easy to understand why, right? Program learning outcomes (PLOs), the skills and disciplinary content that our students are to master before graduating, represent a high bar. PLOs draw on our professional expertise and work to make explicit the nuanced proficiencies our students will need to be successful in graduate school or in their careers. Developing a coherent program is much more abstract work than developing a great, recursive, and coherent course. Program LOs or goals are the major currents that run throughout degree programs, and they must be fed by tributaries from many individual courses. In addition, faculty, and the structures that support them, have to shift their focus from individual courses to programmatic curricula. Several layers of understanding are required for these tributaries to build toward a coherent whole.

Unpacking the Complexities of PLOs
The skills and content areas embodied in PLOs are genuinely complex. Therefore, we'll need to develop a thorough understanding of their components and invest energy in understanding how each part is best taught. For example, let's take a look at the following PLOs for a psychology program:

• Use critical thinking to analyze behavior and mental processes
• Use creative thinking to examine behavior and mental processes
• Use a scientific approach to study behavior and mental processes

For implementation and development throughout a psychology curriculum, the faculty will need a developed and shared understanding of what critical thinking, creative thinking, a scientific approach, and behavior and mental processes actually are in the discipline. They'll need to describe each of these at the introductory, developing, and mastery levels. This work is not trivial. These components are multifaceted in and of themselves; assuming that they will manifest meaningfully up through a given curriculum by self-assembly is fanciful. To convey the challenge and complexity of this work, let me ask: Where is critical thinking introduced in your curriculum? Perhaps a tougher question: What attributes of critical thinking do you use—because even though we all use the same phrase, different disciplines mean very different things by it (Brookfield, 2012, pp. 27–52)? Building consensus around disciplinary definitions, as well as explicit understandings of them at introductory, developing, and mastery levels, will require considerable time and investment. And this is faculty work. For an assessment director to attempt to "do this for the faculty" is to undermine an inherently rich learning process and probably to alienate the very faculty it is meant to help. An iterative approach, in which program faculty take on one PLO at a time, letting success in one build energy and pull faculty toward working on the rest, is valuable.


Exposure and Development
After identifying and defining the components and understanding their attributes at different levels of mastery, programs have to identify which courses will introduce those pieces, which courses will give students opportunities to practice, and which courses will ramp up students' levels of proficiency. Of course, addressing this question only works if the courses are sequenced and can build on each other. Helping transfer students who arrive halfway through some majors is another challenge altogether, and an important one to address nonetheless. I recently worked with a business department around this very issue. The process I developed is useful and easily adaptable. We started 2 weeks before we met as a group (see Figure 3.1). Faculty found this practice both gratifying and engaging. They appreciated being able to see into each other's courses and into each other's thinking. The faculty had a major "Aha!" moment when they saw that even though their assignments expected a "high level of analysis," analysis was weakly and unevenly introduced and developed. Several groups noticed that few courses had analysis as one of their big ideas, and those that did emphasize analysis didn't have assignments that supported students in actually practicing it. As we wrapped up the day, the faculty could see more of the road ahead and were eager to work together to better support student mastery of the PLOs.

Figure 3.1. Mapping components of program learning outcomes to courses.

Preparation:
• Identify the components of each PLO
• Have faculty identify the three most important themes in each of their courses
  ◦ For each theme, gather associated assignments
• Analyze the theme-associated assignments
  ◦ Identify solid, explicit connections from the assignments to components of the PLOs
  ◦ Map the results to components of each PLO (visually connecting courses to components—doing this on a long length of butcher paper might allow you to see much or all of a program)
• Identify a PLO to focus on (not all of the PLOs are created equally)
  ◦ Choose one the faculty really want to know more about (I suggest the one at which faculty are most frustrated by the lack of student performance)
• Identify courses with the most explicit connections to components of the PLOs
  ◦ Copy the assignments from those courses linked to one or more PLO components

Group work (small groups):
• Read through the assignments
  ◦ Identify which assignments worked with a given component at an introductory, developing, or mastery level
  ◦ Identify components that are strongly supported and at which level
  ◦ Identify areas that are absent or weakly addressed in the curriculum

It's important to note that sharing assignments with colleagues requires rapport and trust. If you're just working up to that, try pulling assignments from NILOA's assignment library (https://www.learningoutcomesassessment.org/ourwork/assignment-library/). There you'll find examples and nonexamples for gearing faculty up to look into their own curricula. This will allow faculty to sink their teeth into assignments without having to worry about each other's status or the relationships in the room.
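For readers who want to operationalize the gap analysis in Figure 3.1, here is a minimal sketch of one way to record the mapping so that gaps surface automatically. The course names, components, and levels are hypothetical; the real work is the faculty judgment behind each link.

```python
# Illustrative sketch: record assignment-to-PLO-component links by level so
# that curricular gaps surface automatically. All names are hypothetical.
LEVELS = ("introductory", "developing", "mastery")
COMPONENTS = {"analysis", "communication"}

# course -> list of (PLO component, level at which an assignment engages it)
curriculum_map = {
    "BUS 201": [("analysis", "introductory"), ("communication", "developing")],
    "BUS 305": [("communication", "developing")],
    "BUS 410": [("analysis", "developing"), ("communication", "mastery")],
}

def coverage(cmap):
    """Collect the levels at which each PLO component is actually practiced."""
    found = {c: set() for c in COMPONENTS}
    for links in cmap.values():
        for component, level in links:
            found[component].add(level)
    return found

for component, levels in sorted(coverage(curriculum_map).items()):
    missing = [lvl for lvl in LEVELS if lvl not in levels]
    if missing:
        print(f"{component}: no assignments at the {', '.join(missing)} level")
# -> analysis: no assignments at the mastery level
#    communication: no assignments at the introductory level
```

This mirrors the business department's "Aha!" moment above: a component everyone assumed was central can turn out never to be practiced at the mastery level.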

Course Learning Outcomes
It's important to note that there is considerable overlap between this section and the section on PLOs. That is, the preceding subsections and the subsections that follow all apply to both program and course LOs. The two types of outcomes differ, but the kinds of thinking around development, coherence, and pedagogy are similar in both. That said, course learning outcomes (CLOs) provide an important granularity in the priorities of a given course and in how the course will connect back to the PLOs and ILOs.

Pedagogical Approaches
In a day and age in which evidence for the efficacy of active pedagogies has driven a significant wedge between teaching, especially lecturing, and learning, undermining higher education's tendency to conflate the two (Barr & Tagg, 1995), it is of the utmost importance to match your intended learning with appropriate pedagogy. On a similar note, it is also easy to conflate asking students to demonstrate a skill with actually teaching that skill; how many of us assign analysis versus actually teach it (model it, give students practice doing it, assign it, assess it)? The axiom "You don't fatten a hog by weighing it" urges us not to confuse the end with the means. I encourage you to use mapping practices to understand not only how much opportunity students have had to listen to a lecture on a PLO component (which could be a great way to introduce concepts, define terms, etc.) but also how many opportunities they've had to engage that element in different ways. For example, mapping exercises should reveal how many times students have been able to work in teams, to create, to solve problems, to engage, and to produce, all in the context of a specific PLO or CLO component. Critical thinking involves examining our own assumptions (to the extent that is neurologically possible) as well as examining the assumptions of the sources we use. These are subtle skills, and it's hard to imagine much mastery without a lot of practice throughout a curriculum. These kinds of mapping activities give insight into the extent to which students have received lectures on the elements of critical thinking versus how much time they've spent applying those elements, or how many times students have engaged multiple-choice exams versus addressing authentic problems using critical thinking skills. Most of our outcomes imply, or state explicitly, very specific types of skills. Listening passively to lectures without multiple opportunities to practice putting those skills into action will most certainly fall short of our aspirations for student learning.

Metaphors
In The Art of Changing the Brain, James Zull (2002) told us that brains learn through concrete experience; in fact, concrete experience is required to learn anything. Since the vast majority of things taught at universities are inventions of the human mind (abstract by nature), how then do we provide students concrete experience of our most cherished abstract concepts? Metaphors provide the link between abstract concepts and concrete experience, and they are thus of paramount importance in our work to help students master PLOs (Lakoff & Johnson, 1999). I cut my teeth using molecular models as metaphors for atoms and molecules; the learning of organic chemistry is utterly dependent on models to give students concrete experience with a reality far too small to see. But what are the useful metaphors for teaching critical thinking, analysis, and synthesis? Annually I engage my freshmen in a workshop in which they use the elements of critical thinking as defined by the VALUE rubric to make diagrams that illustrate the relationships between and among those elements; many of the diagrams illustrate what students consider the process of critical thinking as well. It is powerful to watch my students struggle to create these diagrams, and many of the resulting diagrams are powerful as well. We use some of their diagrams as we work on critical thinking throughout the course. Those diagrams are metaphors, and they allow students to see process, which neurologically is a very different experience than hearing or reading about the same process. The diagrams represent an opportunity for students to find and create connections that will support their learning, as described in chapter 4.

I realize that this layer, asking what metaphors my colleagues use to connect students to abstract aspects of an LO, implies a level of intimacy that few have with each other's teaching. How many of us have ever been asked, "What metaphors do you use to connect students with that concept?" How many have asked that of ourselves? How many of us wouldn't be thrown by the question? Wouldn't it be amazing to hear a student ask, "Do you have a metaphor to help me understand that variable? I'm just not quite getting it." Trust me, there are great conversations to be had about what metaphors your colleagues are using to help students connect abstract concepts to concrete experience.

Nelson asked me, in his review of this chapter, whether I thought a lot of the value of metaphors was idiosyncratic to individual faculty. Absolutely! These conversations are important not for us to adopt each other's metaphors (though that may be valuable) but for seeing inside each other's thinking, for stimulating our own creativity, and for pushing each other further into our own deep explorations of teaching in our disciplines.

Addressing these layers is important for at least two distinct reasons. It is a wonderful asset in developing coherent curricula, curricula whose pieces work together to build toward student mastery of our LOs. And it's important that students experience this explicitly, that students can see our effort to make these connections and how our curricula build toward a coherent whole. There is considerable evidence that students benefit from this (Terry, 2015; Wolk, 2015) and some powerful descriptions of what happens to students if they don't experience it. Amy takes this thinking much further in chapter 4, and so does Peggy Maki (2017) throughout her recent book, Real-Time Student Assessment.

LOs as Organizing Principles for Student Affairs
Goals and LOs are routinely used in student affairs, not only to better connect services offered by student affairs to the broader learning missions of their institutions but also as a means of focusing on the learning that student affairs professionals want to create (Komives & Schoper, 2006, p. 17). As you can imagine, professionals in student affairs experience the same kinds of challenges differentiating between goals and genuine LOs as those in academic affairs. That said, I believe LOs have great potential as focal points for connecting student affairs professionals to the institution's learning goals and for orienting professional development activities. In addition, the mapping processes Amy describes in chapter 4 will also serve this work.

Belerique and Calderon (personal communication, December 8, 2018) worked with student affairs professionals across the 22 student services offices of the New York Film Academy's Los Angeles and New York campuses. They engaged colleagues in mapping processes that connected the Council for the Advancement of Standards Learning and Development Standards and the ACPA/NASPA Professional Competency Areas for Student Affairs Practitioners to the academy's ILOs. The work resulted in the development of cocurricular learning goals—author, character, action, and audience. Now, a couple of caveats. First, in their report the authors called them LOs, but as I've worked to differentiate goals from LOs, these are goals. Second, as this is the New York Film Academy, these goals are intensely context-specific. They involve concepts that have specific meaning within the filmmaking industry and thus the institution. For most of us these goals are unfamiliar jargon, but for NYFA, they shimmer with meaning and purpose. They are fabulous examples of using goals as organizing principles in student affairs. From their report, it is clear that this process was a deep experience in melding expectations from outside the institution with expectations from within, as well as a significant exercise in community building around the importance of these goals across the institution. Their next step is to translate those goals into meaningful LOs.

Learning Outcomes Compete
It's worth noting that LOs compete against other organizational frames, and that competition will exist until we have a much better sense of which approach produces the best results. For example, Meyer et al. (2010) described threshold concepts as "transformative, integrative, likely to be, in varying degrees, irreversible, frequently troublesome" and "considered as akin to a portal, opening up a new and previously inaccessible way of thinking about something" (p. ix). Estrem (2015), although acknowledging how helpful LOs have been in organizing curricula on her campus, lamented that this has come at the expense of using threshold concepts for the same purpose. She argued that threshold concepts, at least in the writing curricula, would have been more productive. Ironically, and for different reasons, both learning outcomes and threshold concepts have proven slippery with respect to academia's ability to define and use the terms consistently.

I want to close this section by approaching the broader subject from a slightly different angle. Most of us don't feel warm and fuzzy regarding the "accountability" aspect of outcomes assessment, but we welcome accountability into the work in at least one way. Our LOs are not merely aspirational; we intend to hold our students accountable for the skill and knowledge areas captured therein. One of the least appreciated but most powerful aspects of assessing student LOs is that it forces faculty to approach their teaching from their students' perspective. We have to be willing to eat what we've cooked, rather than just dish it up. Ironically, most faculty we've worked with appreciate that challenge once confronted with it. An implicit aspect of this discussion focuses on how we can use LOs to hold ourselves accountable and how our commitment to outcomes obliges us to connect our curriculum, and the pedagogies we use, to those outcomes. There is a second, and perhaps more subtle, aspect of accountability here as well. Most of us are enthusiastic about the notion of reciprocity in teaching, the notion of partnering with our students in the teaching and learning process. Well, if we hold our students accountable for the learning described in our LOs, then using LOs as organizing principles, and holding ourselves responsible for designing and implementing well-developed, excellently scaffolded, pedagogically rich, thoroughly supported curricula, is the most natural reciprocity. We'll shift now from LOs as organizing principles to look at some examples of how they can be used pedagogically to engage students.

LO 5: Strategies for Collaborating With Students in the Design or Assessment of Learning Outcomes
Many of us have struggled with how to connect students to CLOs explicitly, rather than leaving them to live forgotten in the syllabus. Using LOs to engage students is a substantial focus in Jankowski and Marshall's (2017) learning systems paradigm; the following anecdotes offer field-tested strategies.

Determined to give the outcomes high status and provide a rich experience of them, a pair of creative faculty members dedicated the first class of their course to the CLOs. The class was divided into groups, one group for each outcome. Groups were instructed to design an extensive advertisement for their outcome. The directions included providing a rationale for the outcome, a preview of how it would be used, its value, and so on. Amy was privileged to observe the student presentations of their advertisements—they were creative, informative, witty, and entertaining, and all about the LOs (Driscoll, personal communication, October 6, 2019). During the presentations, students were urged to make notes about the outcomes right on their syllabus.

Soon after this observation, the Center for Teaching, Learning, and Assessment was regularly approached by faculty asking for help in making their course outcomes visible and important to their students. No one was willing to devote a whole class session to that intention, but we developed simple and efficient strategies for the task. One popular process involved faculty passing out the CLOs early in the course and asking students to give their impressions of them. Students rated them using C for confusing, I for important, E for easy (students who used an E rating had to give evidence of their mastery of the CLO), and other impressions. Faculty found this a great means of getting students interested in the CLOs. In addition, the faculty member could walk through the classroom and get a sense of how students felt about or anticipated the CLOs. Another version of that approach is to ask students to rate their knowledge, skill, and experience with each LO: a 1 if they were confident in their existing knowledge and could help teach to that LO, a 2 if they knew a little or had some understanding, and a 3 if they had no background or experience with the LO. Finally, some faculty had students engage the CLOs in webbing exercises to look for connections between the CLOs and what they were learning in other courses. You can find a diagram of this in chapter 4 (Figure 4.6). The webbing illustrated there could also be used to connect to CLOs in other courses.

Coming back to our theme of collaborating with students, I want to talk about one of my favorite ways to engage students in assessment. For many years I taught 1st-year chemistry, and one of the most satisfying ways I used LOs with students was in teaching stoichiometry. Stoichiometry is a confusing stretch for 1st-year students, primarily because it requires them to link together several abstract concepts as they wend their way through lengthy calculations. It is also where students come to grips with the fact that even though we weigh the ingredients for chemical reactions on an analytical balance, the chemistry actually happens between individual atoms and molecules in discrete ratios; in stoichiometry we learn how to go from the balance to individual atoms and back again.

After I had given students considerable practice with stoichiometric calculations, but well in advance of the upcoming exam, the students and I would use two LOs to develop part of the rubric for the exam. This accomplished two major goals: It allowed me to see inside my students' thinking and helped students see inside mine. I'll come back to the importance of this toward the end of the chapter. I would set up our time together by bringing students' attention to the evaluation piece of the LO and asking students for their help in developing the rubric for grading the stoichiometry questions on the upcoming exam. "You mean . . . we get to influence how you'll grade some of the questions?" students would frequently ask. Before putting the problem on the board, I would start by examining the LOs aloud:

1. Students solve stoichiometric problems.
2. Students evaluate solutions to stoichiometric problems.
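(For readers whose chemistry is distant, here is the flavor of such a problem; the numbers are our own illustration, not from the class. For the combustion of ethanol, $\mathrm{C_2H_5OH + 3\,O_2 \rightarrow 2\,CO_2 + 3\,H_2O}$, how many grams of $\mathrm{CO_2}$ come from 9.2 g of ethanol?)

$$
9.2\ \mathrm{g\ C_2H_5OH} \times \frac{1\ \mathrm{mol}}{46.07\ \mathrm{g}} \approx 0.20\ \mathrm{mol}
\quad\Rightarrow\quad
0.20\ \mathrm{mol} \times \frac{2\ \mathrm{mol\ CO_2}}{1\ \mathrm{mol\ C_2H_5OH}} \times \frac{44.01\ \mathrm{g}}{1\ \mathrm{mol}} \approx 17.6\ \mathrm{g\ CO_2}
$$

Each conversion factor is one of the "pieces" the class is about to price in the rubric: a molar mass, the molar ratio, and a molar mass again.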

"Okay folks, let's think about these two verbs. What does solve mean?" The class collectively rolled their eyes as Nathan shouted out, "It means to go into solution—to become surrounded by solvent molecules." I smiled. "Thank you for that wonderfully context-specific definition, Nathan." Lydia volunteered, "Well, it's like developing an answer to a problem." I agreed. "Excellent, so we have one outcome devoted to developing answers to problems. How about the other verb? What does evaluate mean in general, and what does it mean in the context of what we're doing here, in this stoichiometry problem?" This was followed by a fairly lengthy pause. After a minute, Jenny saw the "value" in evaluate. "Oh, it's got to be about measuring the value of something, right? And in this, we'll have to figure out the correct answer and assign what the different pieces of the answer are worth." Appreciatively I noted, "Great insight, Jenny. Does anyone have more to add?" Jenny's insight was very helpful, and I made a note to come back to it later in the discussion. I then moved to writing the problem on the board.

Moving us along: "Okay, returning to the first outcome, let's start with a list of the things we need to know before we can solve the problem." The students and I launched into a long back and forth. I asked them how to engage different aspects of the problem; individuals responded. I queried their thinking repeatedly, making sure the rest of the class understood before moving on to the next piece. It is a great way to see into their thinking, and to gently nudge or correct them. I'll take a moment to remind our readers—there are whole constellations of teaching (and learning) in those little verbs like solve, analyze, synthesize, and evaluate. The vast implications of these verbs are a great reminder of how LOs can really help us prioritize and focus our teaching activities with students.

I then turned back to the class. "Okay, now on to the second learning outcome and back to Jenny's insight on evaluation—for a 50-point problem, how many points do we assign to each piece? Do all of the pieces have the same value, or are some pieces worth more, and if so, why? Talk with your neighbors for a couple of minutes and rank the value of the different pieces." A low hum enveloped the lab. I'll point out to our readers that this second outcome, to evaluate, requires many different kinds of thinking. The students will have to weigh different concepts and determine which ones are more obvious and which are more subtle. They'll also have to compare and contrast the different facets of the problem, and they'll have to consider the effort and prior knowledge that go into solving each piece. I'll also note that the students require a fairly deep grasp of these concepts, that the skills embedded in evaluate ask a lot of my students—and that is my purpose.

After 5 minutes, I brought them back. Lynn volunteered, "Sheila and I think that all of the pieces deserve some points, but because the whole problem depends on us getting the concept of 'molar ratio' correct, that part deserves the most points."

"Great. How many groups agree with that in principle?" I said. There was good agreement across the groups. "So . . . maybe 20 points for that piece, and we'll spread the remaining 30 points across the other pieces?"


Being attentive to who was speaking and who was not, I asked Mason, who didn't have his hand up, "How about you, Mason? How would you add to this?" Mason chimed in, "Well, Maggie and I agree, but some of the other pieces are more difficult than others; for example, we had to calculate the molar mass of ethanol, rather than just look it up, so it deserves more points than the things we just had to look up." Many heads were nodding in general agreement.

I agreed. "Great. How about the setup; how many points do we award to it?" Lynn jumped back in. "I hate it when I get a lot of the thinking correct, but flub the calculation, and then get almost no partial credit. We have to give the setup a substantial number of points." There was general agreement. We then assigned the points to each piece on the board, and I took a photo to use when I graded the midterm later that month.

I'll draw our readers' attention back to how I've used these short LOs with two key verbs, solve and evaluate, to shape class time with my students. Rather than lecturing, I've pulled from their minds and experience, given them agency, and let the students do most of the teaching.

Epilogue
I can't stress enough the importance of using LOs to see inside my students' thinking. If I can't do this, I can't see where, or whether, they are going astray. Are their assumptions incorrect, are they missing key concepts, or are their biggest stumbling blocks simple computational errors? There are several other important pieces here. This LO focused on evaluation, and the teaching that goes along with it, give students insight into faculty thinking and practice and into the much larger challenge of addressing the question "How do we go about assigning value to something?" A process like this gives students agency because it allows them to see that I value their thinking and that their thinking has value (as opposed to only my thinking). Their generation will soon be running the country, and I want to communicate to them explicitly that their thinking is important. Their ability to do this successfully won't come from their ability to sit, listen, and passively take notes, but instead from their ability to think and solve problems.

Finally, early in my chemistry teaching career I had a student explain to me that she was much more able to learn chemistry from the students seated next to her than from me. I appreciated her candor and courage in saying that to me. Students are hugely aware of the power, generational, and social strata between faculty and themselves, and those strata typically do not help them learn. Settings like this allow for a lot of student teaching, in a way that lets me gently correct mistakes and flawed thinking. However, all of these practices hang on my daily use of the course LOs as we engage important chemistry concepts and skills.

Conclusion

Subject-based courses are to direction as LO-based courses are to destination. It’s easy to head in a direction, but hard to know if you’ve arrived. LOs provide an explicit destination for our endeavor, a place on the horizon on which to set our compass, a place toward which what we do, and how we do it, are bound. For our students, outcomes provide transparency. Instead of spending their energies having to intuit faculty expectations, students with explicit expectations can focus their efforts on mastering the skills and content areas important for their success. I hope that the ideas I’ve presented here will help you and your colleagues use LOs to guide and shape your curricula, to chart courses and programs, and to communicate what you actually want students to know and be able to do. Above all, I hope the process will be rewarding to both you and your students.

References

Adelman, C. (2015). To imagine a verb: The language and syntax of learning outcomes statements (Occasional Paper No. 24). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://files.eric.ed.gov/fulltext/ED555528.pdf

Adelman, C., Ewell, P., Gaston, P., & Schneider, C. G. (2014). The degree qualifications profile. Lumina Foundation. www.DegreeProfile.org

American Association of State Colleges and Universities. (2016–2018). Reimagining the first year of college. https://aascu.org/RFY/

Barr, R. B., & Tagg, J. (1995). From teaching to learning: A new paradigm for undergraduate education. Change, 27(6), 13–26. https://doi.org/10.1080/00091383.1995.10544672

Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. Longmans, Green.

Brookfield, S. (2012). Teaching for critical thinking. Jossey-Bass.

Carter, M. (2007). Ways of knowing, doing, and writing in the disciplines. College Composition and Communication, 58(3), 385–418. https://www.jstor.org/stable/20456952

Degree Qualifications Profile. (n.d.). What is tuning. https://www.learningoutcomesassessment.org/dqp/


Driscoll, A., & Wood, S. (2007). Outcomes-based assessment for learner-centered education: A faculty introduction. Stylus.

Estrem, H. (2015). Threshold concepts and student learning outcomes. In L. Adler-Kassner & E. Wardle (Eds.), Naming what we know: Threshold concepts of writing studies (pp. 89–104). Utah State University Press.

Ewell, P. (2016). Improving with age. Inside Higher Ed. https://www.insidehighered.com/views/2016/04/07/essay-value-student-learning-outcomes-measuring-and-ensuring-academic-quality

Ewell, P., Hutchings, P., Kinzie, J., Kuh, G., & Lingenfelter, P. (2017, April). Taking stock of the assessment movement (Liberal Education, Winter 2017). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/wp-content/uploads/2019/08/Viewpoint-Ewelletal.pdf

Fink, L. D. (2003). Creating significant learning experiences. John Wiley & Sons.

Hirashiki, J. (2019). Creativity as a result of cross-disciplinary assessment (Final project report). Assessment Leadership Academy.

Hutchings, P. (2019, February). Washington State University: Building institutional capacity for ongoing improvement. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/wp-content/uploads/2019/08/WSUCaseStudy.pdf

Inoue, A. B. (2019). Labor-based grading contracts: Building equity and inclusion in the compassionate writing classroom. WAC Clearinghouse and University Press of Colorado. https://wac.colostate.edu/books/perspectives/labor/

Jankowski, N. A., & Marshall, D. W. (2017). Degrees that matter: Moving higher education to a learning systems paradigm. Stylus.

Komives, S. R., & Schoper, S. (2006). Developing learning outcomes. In R. P. Keeling (Ed.), Learning reconsidered 2: Implementing a campus-wide focus on the student experience (pp. 17–42). Human Kinetics.

Lakoff, G., & Johnson, M. (1999). Philosophy in the flesh: The embodied mind and its challenge to Western thought. Basic Books.

Maki, P. L. (2017). Real-time student assessment: Meeting the imperative for improved time to degree, closing the opportunity gap, and assuring student competencies for 21st-century needs. Stylus.

McKiernan, H. H., & Birtwhistle, T. (2010). Making the implicit explicit: Demonstrating the value added of higher education by a qualifications framework. Journal of College and University Law, 36(2), 510–560. https://connectingcredentials.org/wp-content/uploads/2015/02/Making-the-Implicit-Explicit-Demonstrating-the-Value-Added-of-Higher-Education-by-a-Qualifications-Framework.pdf

Meyer, J. H. F., Land, R., & Baillie, C. (2010). Preface. In J. H. F. Meyer, R. Land, & C. Baillie (Eds.), Threshold concepts and transformational learning (p. 9). Sense.


Montenegro, E., & Jankowski, N. (2020, January). A new decade for assessment: Embedding equity into assessment praxis (Occasional Paper No. 42). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/wp-content/uploads/2020/01/A-New-Decade-for-Assessment.pdf

Shulman, L. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14. https://doi.org/10.3102/0013189X015002004

Terry, P. (2015). Assessment: Learning by doing. Assessment Update, 27(4), 1–2, 16. https://doi.org/10.1002/au.30025

University of San Francisco. (2014). Institutional learning outcomes. https://www.usfca.edu/sites/default/files/images/attachment_09_institutional_learning_outcomes_undergraduates.pdf

Wiggins, G., & McTighe, J. (2005). Understanding by design. Association for Supervision & Curriculum Development.

Wolk, R. A. (2015, March). Competency-based education is working. Education Week, 34(24), 28–30. https://www.edweek.org/ew/articles/2015/03/18/competency-based-education-is-working.html

Zull, J. (2002). The art of changing the brain. Stylus.


4

ALIGNED AND COHERENT ASSESSMENT, PEDAGOGY, AND CURRICULUM

Connections for Student Success

Amy Driscoll

In the early 1990s, I worked with eight faculty who were pioneering and experimenting with community-based learning, or service-learning, as labeled on some campuses. We spent a year examining aspects of their courses and analyzing the impact of the community work. In the process, I designed a mapping approach to focus on what course components were influenced by students working and learning in the community. It was a simple grid, but it helped our understanding of community-based learning.

Ten years later, at California State University, Monterey Bay (CSUMB), I translated that same idea into course alignment maps. They were an effective way to engage faculty who weren’t ready or interested in “doing assessment.” Faculty were encouraged to engage with the grid to analyze their courses, and they found it efficient and informative. I assured the faculty that the resulting grid was “for their eyes only.” We had almost 100% participation the 1st year, and faculty discussed the learning that emerged from those grids:

“I had just revised my course and the map affirmed that it was focused as I intended.”

“The map made me realize that I wasn’t using much time for an important outcome.”

“After completing the map, I made some changes in my course so that it would reflect my priorities better.”


As you probably know, the alignment map soon became a popular tool for programs or majors to analyze whether their courses were aligned with their program outcomes. Initially, faculty and chairs or deans spent time together analyzing programs using the grids or maps. Insightful conversations emerged from the work. Unfortunately, within a few years, the maps were often completed by one person, the chair or a person in charge of program review. Mapping became an efficient process with little reflection or dialogue. The maps also became a staple of accreditation reports and self-studies.

About this time, I became disenchanted with current curricular mapping—observing its limitations and the absence of much rich information. It had become a very limited tool in terms of value for program analysis. However, the tool did turn academic attention to the importance of alignment, and institutions began to examine the relationship between programs and mission, programs and courses, and assessment design and outcomes. At the same time, I began seeking maps with more information and deeper analysis of the information. I experimented by combining mapping with other strategies. I begin this chapter with a continued yearning for more innovative maps and urge you, our readers, to seek with me.

Introduction and Definitions

Before outlining the chapter, and just to be sure that we are speaking the same language—so important in assessment today—I offer some definitions and a brief preview. The first definition for this chapter is for alignment. Pat Hutchings (2016) told us that alignment “refers to the linking of intended student learning outcomes with the processes or practices needed to foster those outcomes” (p. 5). For this chapter, alignment is also used more loosely to describe all kinds of linking in higher education. You will see that possibility in many of the examples that follow in this chapter. Attention to alignment has become prominent as well as significant for both student and faculty success.

Let me emphasize that I consider alignment a critical piece of the assessment picture, even with my disappointment in mapping tools. What is exciting at this time is that alignment has the potential to become more valuable to the planning for student learning. Hopefully, that potential will be enhanced with the diverse and creative use of mapping tools that I will explore and describe in this chapter. I intend to go beyond the use of mapping for individual courses and provide strategies that I use to expand its value for program planning and educational planning in general. My strategies require faculty and students to engage and collaborate to achieve that value. We have just begun to discover the potential for student engagement with alignment grids for both individual courses and programs, and I include an exciting case of such engagement later in this chapter.

After my discussion of alignment and mapping, I move to the concept of coherence in learning experiences, which is the other important concept of this chapter. Loosely defined, coherence means “united and forming a whole,” “logical and consistent,” and “logically ordered and integrated.” When we check for alignment, we are initiating a look at coherence because alignment is part of that ordering and provides consistency. It predicts the possibility of coherence, but it is not enough. From there, we must analyze the other components that are aligned: pedagogical practices within our courses, curriculum within our programs, cocurricular learning outcomes in activities, and other campus learning experiences to determine whether there’s a coherent whole. Starting with a broad understanding of alignment first, we will proceed to finding the connections that indicate coherence. Ultimately, we want those connections to be visible to our learners and to support them as they create coherence with some of their own connections. We provide examples and strategies for making that happen, for students to have agency in their learning. When we assess the coherence of our programs, we are deepening our insight into our students’ perspectives. We must ask, “Will students experience the programs we are analyzing as a meaningful whole?”

Finally, I see alignment and coherence as two sides of a coin: Alignment stays on the surface, relying on course syllabi, program descriptions, and outcomes; coherence relies on deep analysis of practices, specific curriculum, and student perceptions. That’s where we are going in this chapter. We will begin simply by defining and describing curriculum mapping and move to an expansion of its use, adding detail to alignment. From there, we will turn over the coin and examine coherence to make sure that we have connections and make them visible and supportive of student learning.

Curriculum Mapping for Alignment and Beyond

Curriculum maps are two-dimensional grids in which one dimension represents outcomes (course, program, institutional) and the other dimension represents practices to achieve the outcomes. Within each cell of the grid is an indication of whether the outcome is addressed. Figure 4.1 is a sample course alignment grid, an example of a grid or map from our previous book. The simple grid was our starting point for curricular mapping efforts at CSUMB. It’s a powerful, easy, and efficient strategy for individual faculty use, with valuable possibilities for student use. If you have not used it for your courses, we urge you to try it and see what you learn from this simple strategy.


Figure 4.1. Course alignment grid or map. [A grid with spaces for course information and professor at the top; rows list course elements (Classes 1–30, Readings A–D, text selections, Assignments 1–5, Assessments 1–4) and columns list Outcomes 1–6, with an X marking each cell where an element addresses an outcome.]
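If it helps to think of Figure 4.1 as data, here is a minimal sketch in Python; the course elements and marks are invented, not taken from the figure. It counts how many elements address each outcome, surfacing exactly the kind of gap the faculty comments above describe; cells could just as easily hold level markers instead of bare checks, as described next.

# A minimal sketch of a course alignment grid as data (entries invented):
# each course element maps to the set of outcomes it addresses (an X).
course_map = {
    "Class 1": {"Outcome 1", "Outcome 2"},
    "Class 2": {"Outcome 1", "Outcome 2"},
    "Reading A": {"Outcome 3"},
    "Assignment 1": {"Outcome 1", "Outcome 4"},
    "Assessment 1": {"Outcome 2", "Outcome 4"},
}

outcomes = [f"Outcome {i}" for i in range(1, 7)]

# Count how many elements address each outcome; zeros reveal gaps
# like "I wasn't using much time for an important outcome."
coverage = {o: sum(o in elems for elems in course_map.values()) for o in outcomes}
for outcome, n in coverage.items():
    print(f"{outcome}: {n} element(s)" + ("  <- gap" if n == 0 else ""))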


Some of the early mapping examples began to expand the information in each cell slightly beyond a check. Together with Mary Allen (personal communication, May 30, 2006) and assessment directors around the country, we expanded those checks to indicate levels of emphasis, such as P for primary content and S for secondary content, or I for introduced, P for practiced, and C for competence. Some programs used numerical ratings to indicate importance. Those markers added value to the information gained from the maps. In his chapter 3 on outcomes, Swarup Wood suggests that grids for course alignment or program alignment could include specific pedagogy in the vertical column, which would enable us to determine which pedagogy is most often used for certain kinds of outcomes. We do not have an example, but several of our grids could take an additional column for that pedagogy.

Pat Hutchings did us a huge favor with her NILOA Occasional Paper No. 26, Aligning Educational Outcomes and Practices (2016), in which she reviewed the use of mapping across the United States as well as in other countries. My enthusiasm returned when I heard her descriptions of the following:

• Institutions mapping their own outcomes against the intended outcomes of the Degree Qualifications Profile (DQP)
• Institutions mapping their own outcomes with the Essential Learning Outcomes from the Association of American Colleges & Universities (AAC&U)
• Institutions mapping their own outcomes with both the DQP and the Essential Learning Outcomes of LEAP (Liberal Education and America’s Promise) (see Figure 4.2)
• Institutions mapping their own institutional outcomes or program outcomes with professional disciplinary outcomes
• Institutions mapping program outcomes with samples of student work and evaluation criteria
• Institutions mapping the outcomes of student affairs services with the actual services and programs offered

Figure 4.2 provides an ambitious grid that demonstrates the alignment of an institution’s core values and general education (GE) program with LEAP, the DQP, and the VALUE rubrics (nationally developed rubrics designed for the LEAP Essential Learning Outcomes). It is an institutional use of the alignment grid to make connections with national models and frameworks visible.


Figure 4.2. Alignment of institution with national models and frameworks. [A two-page table whose columns align LEAP categories (intellectual and practical skills; personal and professional responsibility; knowledge of human cultures and the physical and natural world; integrative learning); LEAP subcategories with their VALUE rubrics (e.g., inquiry and analysis, critical thinking, creative thinking, written and oral communication, quantitative literacy, information literacy, teamwork and problem-solving, civic engagement, intercultural knowledge and competence, ethical reasoning, lifelong learning, integrative learning); DQP categories and subcategories (e.g., specialized knowledge; intellectual skills such as analytic inquiry, quantitative fluency, communication fluency, use of information resources, and engaging diverse perspectives; applied learning; civic learning; broad integrative knowledge); CSUMB core values (ethical reflection and practice; multicultural and global perspectives; service-learning; interdisciplinarity; collaboration; technological sophistication; applied, active, and project-based learning activities); and the CSUMB GE areas that address them (FYS, A1–A4, B1–B4, C1–C3, D1–D4, E).]


Figure 4.3 is an exciting example that represents the advancing movement in higher education for student affairs assessment processes. The grid illustrates the student affairs SLOs that have been adapted from the Council for the Advancement of Standards (CAS) and provides samples of the curricular provisions offered to students to help them achieve their outcomes. Their outcomes provide a look at the kind of support for student success that can ideally be partnered with academic affairs.

While at CSUMB, I worked for a few years with student affairs personnel to map their services to general education outcomes. We found numerous opportunities to support general education with their cocurricular programs and noted connections between each program’s outcomes, followed by important conversations between academic and student affairs faculty and staff. That’s what I look for in the use of mapping and the possibility of connections for students.

One more example of an institutional use of mapping comes from the University of San Francisco. Their intention was to develop institutional learning outcomes (ILOs) that were aligned with their mission. Notice that there are some goals and outcomes that are specific to the university’s location, population, and culture. They are all institutional learning outcomes, and they uniquely represent the university. In chapter 3, Wood talks about goals and outcomes and the importance of mission. Figure 4.4 illustrates an authentic representation of mission in institutional outcomes.

From the mapping variations that Hutchings reviewed, institutions and individual programs discovered gaps, overlaps, the need for revision, and guidance for comprehensive program redesign, as well as assurance and confidence in the alignment represented in the maps. Most of the uses she reported were major processes involving many important stakeholders. They were not “checking the box” exercises. They were processes characterized by discussion, analysis and consensus, and ongoing collaboration.

A study by Sumsion and Goodfellow (2004) affirms the importance of collegial work for alignment purposes. Their ambitious group in Australia used an elaborate mapping process to determine whether generic skills (e.g., critical thinking, information literacy, communication) were integrated into a disciplinary major program of studies. They experimented with varied levels of pedagogy to use in their grids. Their study group focused on the levels of what was taught, practiced, and assessed. What is important to acknowledge in their study is that the researchers noted different levels in the way faculty approached the task of checking off their courses—real differences in reflexivity and, consequently, differences in reliability. They emphasized the importance of trust and respect in the culture for mapping curriculum.


Figure 4.3. Alignment grid for student affairs curriculum. [A grid pairing student affairs student learning outcomes (students can design or plan strategies to reach personal financial goals; develop healthy relationships with others; use skill and knowledge to handle legal issues; develop practical skills, such as organization and utilization of resources, to achieve academic goals; use competencies needed to connect to the global community; analyze the congruence between personal and community values) with the services that address them (Money Mgmt., Counseling, Student Housing, Legal Services, Student Conduct, Recreational Services); an X marks each service/outcome match. Note. The student LOs are adapted and used with permission from Student Affairs Assessment, University of Kansas, Kevin Joseph, director.]


Figure 4.4. Alignment of mission and institutional learning outcomes, University of San Francisco. [A table aligning the USF mission statement (“The core mission of the University of San Francisco is to promote learning in the Jesuit Catholic tradition. The University offers undergraduate, graduate, and professional students the knowledge and skills needed to succeed as persons and professionals, and the values and sensitivity necessary to be men and women for others. . . .”) with institutional learning goals (the Jesuit value of social justice; engagement in the diversity of the campus community and the San Francisco Bay Area; scholarship and academic rigor; global understandings; knowledge of the interdependence and cultures of the Pacific Rim; personal and professional knowledge, practices, and ethics) and with institutional learning outcomes (ILOs), for example: students communicate effectively in written and oral forms to interact within their personal and professional communities (Written/Oral Communication); students construct, interpret, analyze, and evaluate information and ideas derived from diverse sources (Critical Thinking, Quantitative Reasoning, Information Literacy); students use multiple methods of inquiry and research processes; students use technology to access and communicate information in their personal and professional lives; students explain and apply “threshold concepts,” practices, and ethics of their chosen academic discipline in diverse communities; students analyze and adjust their attitudes, beliefs, values, and assumptions to reflect “cultural humility” as they engage with diverse communities and cultures to contribute to the public good; students describe, analyze, and appreciate the interconnectedness of social, economic, and political systems that shape diverse groups within the San Francisco Bay Area, the Pacific Rim countries, and the world. Note. Used with permission from the University of San Francisco, Shirley McGuire, provost.]


Power (1994) expanded that importance with his urging for “a climate of facilitation, trust, autonomy, and transparency in which collegial dialog is valued” (p. 45) and, of course, discouraged curriculum mapping as a “passive process” (p. 48). Overall, the researchers affirmed that collegiality, collaborative meaning-making, and curriculum mapping can be a precursor to curriculum change. They also acknowledged the shared understandings that emerged from an alignment process.

Unfortunately, these affirmations take me back to a concern that I voiced in the first chapter—the need for substantial time for our assessment processes. I recommend sufficient allocation of time for curriculum mapping to find and expand real value in the process. Again, if you have not used any curricular mapping, perhaps you could gather a few of your colleagues, share your course maps, and have a discussion about what you find in the maps. I predict the kind of conversations we hope for in higher education but have seldom experienced.

Expanding the Value of Curricular Mapping

The first time I used curricular mapping to look at the alignment of programs, I used giant whiteboards or chart paper hung on a wall. Faculty gathered round the sheets, on which the program outcomes were written at the top of the grid. I insisted that individual faculty use course alignment grids to analyze their own courses before approaching the task of placing their courses on the grid. I had previously noted that faculty who had analyzed their own courses first were much more deliberate and engaged in discussion when participating in the program mapping.

Listening to Faculty

Once all faculty had completed placing their course information on the program grid, it didn’t take long before they were asking each other questions about their courses or expressing surprise.

“I didn’t know that you worked on that outcome in your course—I do too. I’d be interested in hearing about how you teach and how you assess student work for that outcome.”

“Maybe we should move that outcome out of my course and I’d have more time to work on other outcomes. Would that work for you?”

“How do students handle that many outcomes in your course? You will have to share your ideas with me.”

In the fast pace of assessment work, I want to hear those conversations often. In chapter 7, Dan Shapiro describes a variety of ways to initiate and support that talk.


Unfortunately, just “get the grids filled” became the goal for a while. So before approaching the program alignment grid these days, I hang chart sheets, one program outcome per sheet, and ask faculty to do a few tasks:

1. Rate each outcome’s importance in the program with a 1 for an outcome that only needs to be in one course, a 2 for outcomes that should be addressed in more than one course, and a 3 for an outcome that should be threaded throughout the entire program (probably all courses).
2. For each outcome, suggest one or several pedagogies that are appropriate for supporting students to successfully achieve the outcome.
3. For each outcome, suggest an assessment or two that would provide information about whether the student achieves the outcome and how well.
4. For each outcome, rate students’ competence at the beginning of the program, midway through the program, and at the completion of the program.
5. For each outcome, rate the importance of the outcome for students’ career possibilities.

There are more possibilities, but the thinking required by those exercises gets faculty talking about their program and their courses. I had not intended to ask faculty to do all five tasks, but recently I worked with faculty in a business program that had completed a curricular map for their program. They weren’t satisfied with their map and wanted more information for their program self-study. I did engage the faculty with all five tasks. When the morning and our conversation ended, several participants said that they now knew their program so much better. I watched as they carefully folded the sheets to take to their program office to bring to their next meeting. I wished I could follow those sheets to observe their future use as faculty analyzed their program.

I also worked with faculty who taught in a public art program—faculty who insisted that they addressed all of their program outcomes in every course and didn’t want to participate in any of the assessment processes. When they did agree, they each completed the course alignment map for their own courses. Then they completed the program grid on a huge whiteboard on the wall. Their checks reflected that they did indeed address each outcome in every course. There was an exhilarated “told you so,” but little discussion followed. There was lots of room in the cells, so I pushed the alignment a bit by asking them to describe major pedagogies for each course and the major form of assessment in each course. Then I couldn’t get a word in. They shared ideas for both pedagogy and assessment, checking alignment as they proceeded. Time flew. They finished with a decision to share their experience in a conference presentation about their alignment work and took photos of the whiteboard information. Last I heard, it was a “standing room only” session at the Assessment Institute in Indianapolis.

Students and Curricular Mapping

For a while those curricular maps seemed to be one of those faculty secrets, much like the criteria we employed to review student work before rubrics. However, I recently heard of an institutional effort to “validate curricular maps for undergraduate education” at the University of California, Los Angeles (Hackett et al., 2019). To do so, faculty and students studied how specific courses contribute to achieving the program learning outcomes (PLOs). The first step in the process was to put all curricular maps online for viewing. For students’ ratings, a template was developed to illustrate a typical path for major programs of study, with outcomes across the top of the template and courses in the vertical path at the side. At completion of their major, students were asked to complete the map that represented their studies with an I for initiated, D for developed, and M for mastered. Those ratings were later converted to numbers (I = 1, D = 2, M = 3) so that the data could be summarized to represent the entire group of graduates.

A map from students finishing an undergraduate neuroscience program is provided in Figure 4.5 for you to analyze. It would be interesting to compare that map with one developed by the faculty in the neuroscience program (a future study for our readers). For now, I will guide your attention to a few data examples that should raise questions, among the wealth of data in the students’ maps. For example, capstones, which typically are a final assessment designed to provide summary data, don’t achieve level 3 (mastery). Or if a student doesn’t choose certain electives, they will miss achieving level 2, developing the outcome. Looking at the grid, I’m tempted to ask, “What appears to be the most important outcome? And the least important outcome?” or “What is the most well-developed outcome? And the least developed outcome?” And I can’t resist asking, “What changes would you make using this grid of information?”

Figure 4.5. Curriculum mapping by neuroscience students.

Fortunately, faculty were asked that last question and urged to consider curricular or programmatic changes based on those student maps. Their work provides a good example of using student evidence for improvement. I have often considered curriculum maps a form of evidence for making early improvements: changes in your course before teaching it, or redesign of a program before offering it to students. That assumes that the maps have been done authentically and with much reflection. Dan has often expressed a concern about the process of completing an alignment grid and the potential for a lack of reflection in the mapping process. To achieve authenticity, he suggests completing the maps with an is and an ought to distinguish maps that accurately describe the current state of the program from curriculum maps that describe how faculty want or intend the curriculum to be. Such authentic maps provide opportunities that can then be prioritized for attention.
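For readers who want to try the UCLA-style summary on their own data, here is a small sketch in Python. The I = 1, D = 2, M = 3 scoring follows the description above; the course names, outcomes, and ratings are invented.

from collections import defaultdict

SCORE = {"I": 1, "D": 2, "M": 3}  # initiated, developed, mastered

# One map per graduating student: (course, outcome) -> rating (data invented).
student_maps = [
    {("NEURO 101", "PLO 1"): "I", ("NEURO 450", "PLO 1"): "M"},
    {("NEURO 101", "PLO 1"): "I", ("NEURO 450", "PLO 1"): "D"},
]

# Convert ratings to numbers and average them across the cohort, cell by cell.
totals, counts = defaultdict(int), defaultdict(int)
for smap in student_maps:
    for cell, rating in smap.items():
        totals[cell] += SCORE[rating]
        counts[cell] += 1

for cell in sorted(totals):
    course, outcome = cell
    print(f"{course} / {outcome}: {totals[cell] / counts[cell]:.2f}")

A cell average near 1 flags an outcome that graduates felt was only ever initiated, the kind of finding that prompted the curricular questions above.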


With student use of those maps, you will have another layer of evidence to integrate with student learning evidence to make improvement decisions. You may also have information in the maps to explain the strengths and low points of student achievement evidence. I find the UCLA example a powerful model for engaging students in a study of alignment, in examinations of their own learning, and in discussions of pedagogy and assessment.

As I said earlier, the main concern in all of the mapping examples is that students gain new awareness with which to answer such questions as “I don’t know why we have to take this course—not learning anything for my major.” The UCLA project described earlier, in which students complete a mapping grid to illustrate their perceptions of alignment or its lack in their program, will also address the lack of authenticity in a local grid. We can remind each other that we want to provide a coherent program, and alignment is essential. We turn now to alignment for continued discussion of its importance, of kinds of alignment, and of faculty and student roles in higher education’s alignment before moving on to coherence.

Importance of Alignment

Although mapping has taken a back seat in most recent assessment materials, alignment has moved to the front and is getting attention from top assessment experts. In her recent book Real-Time Student Assessment, Peggy Maki (2017) described five “Core Learner-centered Commitments” that she considered essential to an institution’s capacity to offer coherent degrees, to be informed about students’ progress, and most importantly, to “close the achievement and graduation gaps between historically represented and underrepresented groups of students” (p. 59). Her Commitment 4 is “alignment of courses, educational experiences, and assignments with outcomes and standards and criteria of judgement” (p. 68). In a compelling description, she tells us what can happen to students who do not experience or perceive the alignment stated in her commitment:

If students do not see how the components of their education fit together or contribute to general education and their program-level outcomes, they are likely to view their education as a process of traveling through silos, some of which may seem irrelevant to them. They may view certain courses or experiences solely as a means to amass credits or hours toward fulfilling degree requirements. (p. 68)

It is especially important that students see the alignment between assignments and the rubrics used with them. That alignment has the potential to “broaden and deepen students’ understanding of the relevance and application of outcomes” (p. 69). In a major multistate pilot project, it was found that assignments not aligned with a rubric resulted in low achievement by students (p. 69). That should not surprise us. However, it should also not surprise us that our students remain unaware of alignment unless we make it visible, draw attention to it, and describe its rationale for their learning.

One of CSUMB’s communities of practice studied “holistic alignment” for their collaborative project. They described holistic alignments as those involving the entire institutional learning system. An important example is the connection between general education and the major programs of study. That alignment is critical to holistic quality and is then complemented by the connections between academic and cocurricular programming for a more complex picture of student learning. Such alignment is an ongoing process and requires the active involvement of both internal and external stakeholders. Members of the community of practice studied alignment intensely and developed a framework of core activities (see https://digitalcommons.csumb.edu/ulos/). Their project was influenced by the work of Jankowski and Marshall, whose ideas we explore in the next section.

Moving From Alignment to Connections for Coherence

In Degrees That Matter: Moving Higher Education to a Learning Systems Paradigm, Natasha Jankowski and David Marshall (2017) devoted two chapters to the topic of alignment. They went beyond typical examples to the alignment of curriculum with the needs of the economy, and to alignment between community colleges and transfer institutions for smoother student transfers. The authors posed the possibility that the assumed alignment of outcomes and learning experiences may not actually exist, much like the concern that Shapiro raised earlier in this chapter. We need to promote both care and collaboration in our alignment process. The authors broadened alignment to relationships among all the elements of a learning system, thinking that we affirm throughout this book. Those elements can include

• Coherent curricular design;
• Clear and meaningful outcomes or competencies;
• Intentionally designed and engaged student experiences in both academic and cocurricular programs;
• Collaboration within and between internal and external partners;
• Policy making;
• Communication within and between programs (academic and cocurricular); and
• Consensus about intentions. (pp. 52–53)


Their definition is accompanied by two recommendations: First, identify the various elements of the learning system in your institution, and second, engage in discussion, reflection, and consensus to explore and make explicit those relationships. We cannot expect students to see those relationships if we haven’t explored them ourselves. In a learning system in which those discussions and reflections have taken place, alignment means that students will understand why they take specific courses and how that course is related to other courses, and even the relevance of program outcomes. I feel certain that most of us want that awareness and understanding for our students. That desire leads us to talk about coherence in what and how we teach, what and how we assess, and what connections are obvious to students.

Connections in Learning Experiences: Coherence

I want to get you started with some easy-to-use strategies to identify connections in learning experiences. They will require faculty collaboration, of course, and will be enhanced by student collaboration. After a faculty discussion focused on alignment within a program, a logical next process is a conversation about how the courses are connected.

Listening to Faculty

A webbing strategy, illustrated in Figure 4.6, makes connections visible and open to analysis within your course. It could also be used to prompt a conversation among faculty who also teach the course or among students who take it. Once the web lines are drawn, I hear faculty conversations:

“What do you do in your course to help students understand that concept?”

“Tell me about the readings you require. I may be able to make connections with some of mine.”

“Do your students ever notice or pay attention to those connections?”

“Why don’t we set aside a time to describe our courses to each other to see where there might be connections?”

Dan Shapiro, our coauthor, would be thrilled to hear that last question. And as Jankowski and Marshall urged, these conversations will take faculty awareness to a place where they can include students in the “making connections” discussion.


Figure 4.6. A webbing strategy for course connections. [A web of connected nodes for the course “F.Y.S. Neurobiological Approaches to Eating, Learning”: Theory of Love; Meditation; Health; Intuitive Eating; Flourishing/Optimal Human Functioning; Making Friends With Food; Critical Thinking; Intro to Academic University Life; Information Literacy (locating and using electronic resources); Lifelong Inquiry About Self. Note. F.Y.S. = First-Year Seminar.]

A former colleague at CSUMB had his students engage in a webbing exercise at the beginning of his courses, in which they predicted connections within the course. Midway through the course, it was enlightening to have those students confirm the connections, or decide that the connections weren’t appropriate, and/or make new connections. It also provided a framework for students to include connections to other courses. Study Figure 4.6, from a course that Swarup Wood currently teaches. You will probably be tempted to register for the course.


Another interesting assignment has students draw Venn diagrams in which they show the connections within their course, and it could be expanded to include connections with another course.
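Those webs and Venn diagrams can also be treated as simple data. In this sketch in Python, each course lists the big ideas it engages (all names invented), and the overlaps that a webbing exercise would draw as lines are printed for discussion.

from itertools import combinations

# Each course maps to the big ideas it engages (all names invented).
webs = {
    "First-Year Seminar": {"critical thinking", "information literacy", "health"},
    "Intro Nutrition": {"health", "intuitive eating"},
    "Public Speaking": {"critical thinking", "audience awareness"},
}

# Print every pair of courses that shares at least one idea,
# the lines a webbing exercise would draw on the whiteboard.
for (a, ideas_a), (b, ideas_b) in combinations(webs.items(), 2):
    shared = ideas_a & ideas_b
    if shared:
        print(f"{a} <-> {b}: {', '.join(sorted(shared))}")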

Listening in Classrooms

I encourage simple approaches for you to use throughout a course during end-of-class sessions. Listen to the following ideas:

Reflect on what we discussed or analyzed today, and find a connection between today’s conversations and thinking in another course in your program. (Students could respond orally, individually, in groups, or in writing or journals.)

Think of an understanding or strategy you learned or practiced in another course in your program that helped you achieve the outcomes in this class.

Write a note to a faculty member who teaches another course in your program, thanking them for teaching a skill or knowledge that you needed in this course. Or suggest that they spend more time on the skill or knowledge so that you can be more successful. Or suggest that they include some introductory learning related to the outcomes of your current course. (This information could be shared with other faculty, with student permission to do so, and assuring anonymity.)

Any of these questions or assignments could be used with a very fast classroom assessment technique (Angelo & Cross, 1993)—like a 1-minute paper about the connections in curriculum. Or you could adapt one of the many techniques in Barkley and Major’s (2016) more recent Learning Assessment Techniques. You could do the same thing with course readings or course assignments. And you could vary the language with different kinds of connections.

Listening to Assignment Directions

Nelson Graff encourages his students to make regular connections using technology. He asks students to compose a weekly “transfer blog” to describe those connections. Listen as he explains the blog assignment:

In educational research, we describe the “transfer of learning” as applying learning from one context to another. Yet this is not a simple process, like picking up a hammer and hitting nails in one location rather than another. [At this point, a yellow and brown hammer appears.] As anyone who uses hammers knows, the kind of hammer one uses in one situation may differ from the kind one uses in another, for example . . . [More hammers appear, all different for different “hammering.”] Likewise the tools you learn in English class—you may have to transform the tool or use it differently depending on the context and purpose for its use. This blog is an opportunity for you to reflect on and share how you have transformed and used those tools you’ve learned in your other classes to help you learn. (Graff, personal communication, February 12, 2019)

Both clear and enticing, Graff’s directions yield reflective blogs from his students, and many variations in connections are made. In the first blog I read, a student described how he used rhetoric and knowledge of his audience, learned in class, to effectively prepare and give a speech in his speech class. I think that we can learn from these students’ connections. I am sure that there will be some that we had not anticipated. And there may be some obvious connections that students miss, but we see immediately. What is essential is that we attend to those connections and be certain that our learners are aware of them. We also need to note the lack of connections and have conversations with our students and with each other about that lack. Notice that I usually encourage simple ways to work on those connections.

At this point I want to move from alignment to coherence. I will describe some additional ways of thinking about the quality of coherence in student learning experiences. It will take this discussion into curriculum, specifically for more depth in coherence when we plan the content that we will teach.

Coherent Learning Experiences

I intentionally did not use the words courses or programs but a larger generic: learning experiences. I begin by thinking of all the experiences that support student learning—within classrooms, outside of classrooms, in a place of employment, in the community, and in life. That’s probably overwhelming, but it’s there, whether we control it or not. Working with the whole of learning experiences will force or encourage us to talk to each other more and more:

• Faculty in academic affairs and faculty in student affairs beginning to create a big learning experience picture
• Students chatting with student affairs faculty about activities planned for next semester
• Faculty who teach general education courses collaborating with faculty teaching disciplinary majors to find connections
• Faculty expanding curriculum mapping to include experiences out of the classroom to reflect connections

I know there are more possibilities, but I want to move us to some immediate ways to work on coherence in learning experiences. When I describe the student role in all of this, I will share some scenarios in which students build the connections that lead to cohesive understandings of their courses, their programs, their entire degree. First, I want to discuss some simple but essential ways to develop a curriculum that is coherent so students have something to build on.

Coherent Curriculum: How To’s

Note that I switched to curriculum, which is another way of referring to the whole picture of learning experiences. Before I begin, I must remind us of one more reality. In chapter 1, I acknowledged that most of us were not prepared to teach or assess when studying in our graduate programs. I think that is also true regarding preparation for curriculum development. My doctorate has the word curriculum in it, and in more than 25 years of consulting with campuses, I have only once been asked to guide faculty as they revised programs with the components of curriculum development.

A very wise provost with a significant grant from the Teagle Foundation (Jennifer Summit, personal communication, September 23, 2016) approached the task of reviewing and redesigning curriculum for all programs at San Francisco State University (no kidding!). We spent 2 days at a time with the steps of curriculum development for groups of program representatives over the course of a year and a half. The groups were usually made up of varied majors—intentionally. The disciplinary backgrounds were not necessarily connected, but discussions were full of collaborative insights and new thinking about how to begin. I used the thinking of Ralph Tyler, which I will share soon, and faculty worked through those classic curriculum development ideas. After the initial work sessions at SFSU, the groups met as learning communities on a regular schedule for a year. They were powerful work sessions with a remarkable amount of peer teaching. After months of development work, I then met individually with each program to continue guiding their progress and plan next steps.

In my assessment work, I keep bumping into faculty wondering about their curriculum, especially when assessment results weren’t what they were expecting.


I’ve often heard, “We are going to redesign our curriculum,” and I worry. There is not much information for most faculty to draw upon other than the content. I am going to turn to my “curriculum guru” Ralph Tyler (2013), who began his work in the 1940s and is no longer with us, but whose ideas remain alive, guiding curriculum development efforts. Tyler posed starting questions, then identified sources of curriculum, and, most importantly for today, described organizational criteria for the curriculum or set of learning experiences. This will be like a Curriculum Development 101 course, achieved in a few pages, for your individual use or for a group, as I did at SFSU.

Before you begin, pull from memory the last time you developed a new course or had the opportunity to develop a new program. What were your discussions like? What processes did you use? What topics or foci dominated your work? How and where did you start? See if you intuitively used some of Tyler’s starting points.

Starting Questions for Curriculum Development

The questions that follow are the four fundamental questions that Tyler (2013) insisted we answer as we begin curriculum development:

What educational purpose should the program or institution seek to attain?
What educational experiences can be provided that are likely to attain those purposes?
How can those experiences be effectively organized?
How can we determine that those purposes are being attained? (p. 1)

Remember that these questions were first written in the 1940s, so the language feels more formal than our current descriptions of goals and learning outcomes, pedagogy, curriculum, and assessment. You can revise the questions, but the answers are critical to both curriculum development and assessment. In my experience with faculty, they respond to those questions rather easily but, after reflection and continued discussion, come up with more depth and breadth in their responses. Those responses can become a framework for curriculum initiation. From there, Tyler identified prominent sources of curriculum. They are not exhaustive, but they spark different thinking about curriculum content.

Sources of Curriculum

If I asked you for the sources of your curriculum, you would probably refer to a professional disciplinary association or a well-respected program at another university, which could provide direction, content, and even goals and outcomes for your work. In chapter 3, Swarup Wood guided you on how to use those sources for developing outcomes. This is also a time when we would typically review the mission of the institution and of our programs. Tyler, however, began with the learners as sources of curriculum. Ask yourself: What do we know about our students? What kind of life experiences do they bring to our program? What kind of expectations do they have when they arrive? What learning strengths do they have? What obstacles do they face? What needs do they have? Physical needs? Social needs? Emotional needs? Learning needs? What are their interests?

You might want to return to chapter 2 to review ideas on getting to know your students. I’ve heard fairly generic answers to this set of questions, but when faculty respond to them in the context of their disciplines, the answers become more specific and helpful. Try posing the questions about your learners when you discuss program revision with your colleagues or when you decide to revise your course.

Tyler’s (2013) next source is a scan of contemporary life. I believe this is where we would come up with such universal goals as critical thinking, communication, collaboration, and ethical reasoning. Here is where a particular community will influence the sources, or a specialized institution will add specific goals. While working with varied program chairs and faculty and mulling this source, I heard “risk taking,” “thinking outside of the box,” and “community building” at San Francisco State University.

Tyler called “subject specialists” the next source, and that’s clearly the disciplinary associations that we typically use as sources when designing curriculum. I won’t discuss that source because it’s a kind of standard practice and it involves many disciplinary experts. From there, Tyler urged us to consult the psychology of learning as a source. Without reminding us of basics, such as that when students use new learning in everyday life they are less likely to forget it, I urge us to stay current with the science of learning. Recently, Terry Rhodes (2017) posed a set of “primary suppositions” (p. 104) as foundational to the VALUE rubrics. Some of those suppositions include the kind of wisdom Tyler was suggesting:

Learning is enhanced when faculty articulate for themselves and their students their expectations for learning on all the shared outcomes, such as with rubrics and performance indicators.

Learning is not something that occurs once and is done but rather is an iterative and progressively more complex process that occurs over time and in multiple instances.


The best demonstration of student learning is most likely to occur in response to assignments from faculty in the student’s formal curriculum and cocurriculum. (p. 104)

I also want to direct you to the work of Diane Halpern and Milt Hakel (2003) for a quick review of more basics of learning. Many of their ideas will affirm and expand your strategies to connect with what we know about learning. A discussion of what we know about learning is often a powerful faculty development focus for rich discussions. Our ideas about learning will support the next focus for curriculum from Tyler—that is, criteria for organizing curriculum.

Criteria for Effectively Organizing Curriculum

For me, these criteria make such good sense and are easily understood as guides for our curriculum development work. Tyler (2013) recommended continuity, sequence, and integration. Let me define them first and then we will look at examples.

Continuity refers to the “vertical reiteration of major curricular elements” (p. 84), or simply put, the repetition of specific learning experiences or skill practices or conceptual discussions. This can be effective within one course or across courses in a program. Students themselves appreciate repeated practice of complex skills, or ongoing discussion of some of the “big ideas” of a course. My biggest worry about this criterion is the pace of many of our classes or programs that don’t allow for or encourage much repetition.

The second criterion, sequence, “emphasizes the importance of having each successive experience build on the preceding one” (p. 85)—or scaffolding in our current pedagogical language. We often use scaffolding to increase the depth and breadth of ideas or to increase and expand skills. We provide examples in the next section.

Finally, integration refers to “the horizontal relationship of learning experiences” (p. 85). It is about making connections for learners—making connections between courses in a program or topics within a course or general education with disciplinary knowledge. Those connections can be encouraged by those simple questions we recommended previously, with the Venn diagrams or Graff’s “transfer blog.” With Tyler’s organizational criteria in mind, I want to expand our thinking about scaffolding or sequencing.

Scaffolding/Sequencing of Curriculum

We know that students learn better when their learning is scaffolded—that is, built from the simplest to more elaborate content. And I expect that most faculty do such structuring within their courses.

What may not happen is an explanation to learners that the structure is supporting their learning. Or the scaffolding may not be visible without a deliberate description:

This week we will work on . . . It’s the first step in the process of . . . Once you are clear that you understand this step, then we will move to a more complex process and you will have lots of practice. Notice how this kind of sequence helps you learn very complex or brand new content.

I recently heard of another example of scaffolding within a course assignment. Ben Lazier at Reed College assigns two or three students in each of his classes to write a response to required course readings. All students are expected to do the readings and then to read those responses from their two or three peers. The peers who wrote the responses then lead the class discussion about the reading, using what they wrote. Those responses are intended to guide or frame the class discussion. Ben sequences the discussion from concrete, simple responses to more abstract, complex ones, so that students work through the discussion in a scaffolded process. By the way, midway through the course, students decide the order and design the sequence to scaffold the class process themselves (Lazier, personal communication, November 11, 2019).

Within courses, it’s not so difficult to scaffold, but between courses within a program, scaffolding is often blocked by scheduling difficulties or insufficient course sections. Students move through many programs choosing courses in their major for reasons other than the sequencing of content. I understand those difficulties of scheduling and encourage program faculty to determine where scaffolding is essential and where courses don’t need sequencing. That information would be useful in a curriculum map. From there, faculty who teach the scaffolded courses need to be in frequent communication with each other.

When faculty build prerequisites into their programs, either for entry to the program or within the program, it is essential to review the connections and scaffolding potential of such arrangements. Many undergraduate and graduate programs have a research sequence that is crucial to the success of their students, but many have a sequence that is not effective. In cases with an important prerequisite series, faculty have described the importance of their collaboration for curriculum development and for assessment. That collaboration and ongoing discussion must be maintained once the program is implemented. A student once complained, “It doesn’t seem like you professors talk to each other,” when confronted with the same assignment repeated in multiple courses. It was not intentional; faculty had simply not discussed their courses or assignments. It was an embarrassing moment.

That same wise provost at San Francisco State University, Jennifer Summit, recently brought me to her campus to facilitate a discussion between faculty who taught the prerequisite science courses and faculty who taught in the nursing and kinesiology programs that required those prerequisites. It was clear that the curriculum had been in place for a long time and that the two groups of faculty members had not engaged in such discussions. The session began with the science faculty sharing their syllabi and course outcomes and describing their pedagogy. Then the nursing and kinesiology faculty described their course outcomes and the expectations they had of their entering students. Once each group had finished their presentations, they began problem-solving, describing their challenges and successes, and posing multiple questions. The discussions were energetic, engaging, and satisfying. Most faculty were grateful for the conversations and requested more regular meetings.

That same conversation could take place with faculty teaching general education courses and faculty teaching in a major program of studies. I probably need to say should instead of could because the lack of connections within general education programs and between general education and majors can keep those programs from being an effective part of a student’s degree. The intent of such conversations would be finding ways that students can use the knowledge and skills learned in general education when they pursue their major courses of study. Those assignments we described previously for connections or transfer would be useful here. Making those connections visible and important will add support for student success.

Our assessments can also contribute to those connections. One assessment practice is especially effective as a scaffolded process and one of ongoing integration: ePortfolios. For example, ePortfolios focused on a major disciplinary program are an excellent way to promote the scaffolding of program learning and, at the same time, to check that students are finding the connections between their learning experiences. Directions for the ePortfolios can provide a structure that promotes the scaffolding of knowledge and skills and the connections that we hope students develop. As Kuh (2017) declared, ePortfolios meet all the requirements of a “high-impact practice,” and users describe them as a catalyst for student, faculty, and institutional learning (Eynon & Gambino, 2017). I want to spend some time discussing them and describing how they support multiple groups.

For students, ePortfolios provide a process for demonstrating and reflecting on their learning. At LaGuardia Community College (LGCC), ePortfolios “help students connect their past, their future, their challenges and their growth, their learning, and their lives.” They provide a “place for planning, for collaborating, and for sharing” their learning as they “develop more purposeful identities as learners” (Eynon & Gambino, 2017, p. 11).

For me, they provide real agency for students in their assessment, with lots of choices and a level of engagement that must characterize future assessment in higher education.

For faculty at LGCC, ePortfolios provide a “deeper understanding of their students and a view of how their classes connect with each other” (p. 12). They offer an opportunity for faculty and staff to provide feedback and guidance that is focused and informed, meaningful, deep, and more individualized. In their comments to students, faculty can affirm the connections in student learning or, when needed, highlight them for students to note.

For degree programs and other formats of learning, ePortfolios provide powerful evidence of their impact, or of the need to improve or revise. The collaboration required for effective use of ePortfolios supports a “more integrated and adaptive learning organization” (p. 12) for departments and institutions as a whole.

I realize that I have not provided advice or guidelines for how to use ePortfolios. Others have devoted entire books to that end, and I recommend them. Bret Eynon and Laura M. Gambino (2017) produced a powerful book that describes the “big picture” of using ePortfolios as pedagogy and assessment. It is High-Impact ePortfolio Practice: A Catalyst for Student, Faculty, and Institutional Learning, published by Stylus in association with the AAC&U. I return to it often when I need examples or explanations and even for motivation. Another publication that is practical and very usable for getting started is Documenting Learning With ePortfolios by Light et al. (2012). And if you get involved with ePortfolio practice, there is even an International Journal of ePortfolio.

A simpler approach than ePortfolios, and more appropriate for you as an individual faculty member, is the use of a common assignment that becomes increasingly complex as learners move through a program. This, of course, requires faculty collaboration. The payoff is that students are very aware of their own growth within the program outcomes and the connections with the pedagogy and curriculum they are experiencing. At some point, students can take a more active role by prescribing next steps or describing additions to the assignment. That is a perfect example of what faculty mean when they say that they and their students “cocreate assessment.”

Back to the national level—many of the learning experiences referred to as “high-impact practices” rely on connections for their impact. You may be engaged in one of those practices or serving on a committee to design them. One of them, service-learning, relies on the connections between course content on campus and insights and lessons in the community. If those experiences are facilitated well, students make those connections in reflection sessions, journals, and presentations. Another such practice, capstones, offers a significant opportunity to connect the learning experiences throughout a major program.

This practice also has the potential to connect learning associated with program outcomes with institutional outcomes such as critical thinking, information literacy, or communication. I have also heard of general education capstones—an effort to integrate a range of coursework and help students find the connections between courses. To achieve such connections, faculty and students must have shared understandings of the potential of the capstone and its requirements. You want to hear students describe the rationale for requirements and affirm that what they are doing for the capstone demonstrates specific learning. Capstone work typically includes a presentation, and that is an opportunity for students to discuss connections. If you are involved in designing or teaching a capstone course, be sure to spotlight the connections within a program or between general education courses.

With much of the learning in high-impact practices, there is great potential to weave in the learning associated with student affairs programs. Their leadership-focused programs work especially well with service-learning, and many capstones address community issues, so learning can embrace social justice, diversity, and ethical reasoning. The collaboration skills developed in student affairs programs will transfer well into the practices.

Another approach, one I witnessed within a course at Pratt Institute, could be transferred into a program context. Pratt faculty member Amir Parsa (2015) was teaching a new course on Contemporary Museum Education and described his intent for the course to be “an arena where students gain knowledge and skills, but a space where we can work together to observe, note, discuss practices, to theorize, to continue to interrogate what might constitute best practices in museum education.” The main project assignment of the course was a gallery learning experience (GaLE). He designed the course in such a way that students collaboratively built a rubric for their projects and added detail each time they had a course experience. For example, after visiting a museum, they added some of their insights from the visit to the rubric. After completing a literature review, they returned to the emerging rubric and added items. It is true that they had a powerful and slightly overwhelming rubric at the end of the course, but they also learned so much about museums and their educational functions.

Listening to Students

When the course was completed, the students commented on their experience and especially the impact of the rubric development:

“Knowing and engaging with the rubric beforehand gave me a clear idea of what was expected of me in my final project.”

“Evaluating, commenting, and making additions to the rubric gave us a clear set of criteria.”

“I liked having input in the rubric. It was a new learning experience for me.”

In sum, we can provide connections and scaffolding or sequencing and repetitions of important understandings and skills with our pedagogy and our assessments. Providing explanations to students about why we are using specific teaching approaches or why a particular assessment will support their learning and success makes our practices understandable to them. Basically, I am talking here about making what we do transparent—a quality that helps meet the criteria of curriculum organization and connections. Transparency ultimately promotes student confidence and learning.

Transparency in Our Practices

A very long time ago, I learned about a teaching strategy called “think aloud”—a strategy in which we reveal to students exactly what we are doing and why. In my teaching days, I often paused after introducing a new concept or idea and said to my learners, “This is a very important idea and I will be repeating it again and again. I want to be sure that you remember and understand it so that you can use it for your future work.”

A common teaching technique that I use when working with faculty development coordinators and assessment directors is to ask them to role-play faculty resistance to assessment. They can usually role-play frustration, annoyance, and complaints, as well as the reasons for their resistance, quite well. Their reasons are extensive and, at some point, I stop and think aloud with “I asked you to role-play those resistances so that you are ready to hear them when faculty come to your workshops.” Or I note, “Some of you seemed to try to talk them out of their feelings and resistance. Instead, just affirm their reasons and let them know you heard them.” From there, they get into pairs so that they can practice responding to the resistances. And again, I describe for them why we are doing that. I am making my practices transparent. And frankly, if I can’t make them transparent with credible reasons, I should not be doing them.

Another form of transparency is when we put our learning outcomes in our syllabi and align class sessions, readings, assignments, and so on with learning outcomes in the margin so that students know why we are assigning a particular reading or why we require a specific project. I have heard students express appreciation for such practices.

Speaking of transparency, I’ve been so encouraged by the work of a program called Transparency in Learning and Teaching (TILT) and its pedagogical and assessment implications. Researcher Mary-Ann Winkelmes and her colleagues (2016) searched for, and came up with, a simple approach to student assignments that would make a difference in student success. Transparency was a key to the approach, and both students and teachers gained from the experience. Graff will describe TILT in depth in chapter 5 when he talks about assignment prompts and rubrics.

That’s the end of Curriculum Development 101, integrated with both pedagogy and assessment approaches and examples. At this point, we have taken our attention beyond alignment to the coherence and transparency we want in our curriculum and pedagogy. You may be sitting there thinking, “Wait, this is an assessment book. Why so much attention to curriculum?” And sure enough, Swarup posed the same concern when he reviewed the chapter draft.

I begin a response with my thinking that curriculum and assessment are not separate. For me, all of the guidance I provided about curriculum is easily transferable to guidance for assessment and pedagogy. The questions from Tyler will lead us to more student-centered assessment. The criteria for organizing curriculum will improve and expand the potential for assessment as a learning experience. I will reiterate them here (practicing the first criterion): continuity or reiteration, sequencing or scaffolding, and integration (connections). Think about your students who encounter a major idea or concept a second time in their assessment—it provides a second chance for some of them to be successful. It is a communication about its importance and an affirmation of what they are learning.

Some of the high-impact practices illustrate both sequencing and integration. The beauty of capstones in a well-organized program is that they follow a series of learning and assessment experiences, hopefully scaffolded in the courses which precede them, that then provide an opportunity to integrate that learning into a project, problem-solving, or papers. Additional examples include ePortfolios and community-based learning or service-learning. Institutions that are intensely engaged with community-based learning have even designed pathways for the learning that is experienced, moving from simple “getting to know” community activities all the way to very sophisticated projects that integrate the experiential learning. It’s a scaffolding of very intense learning experiences that enhances student success.

We want the continuity that comes from reiteration in our assessment, for repeated practice, if nothing else. We want scaffolded assessment to develop strong skills and deep understanding. And we definitely want integrated assessments that promote students’ thinking about the connections within and between what they are learning.

There is also the practical side of me defending the curriculum content of this chapter: “I don’t want to go to all the trouble of designing an effective assessment if I’m not even sure that the processes that precede my assessment are aligned or coherent or connected for students’ learning.” Before assessing, I want to know that my pedagogy and curriculum are going to provide effective learning experiences. I also want a framework of course or program information to return to when my assessment results are not what I expected or hoped for. The curriculum criteria give me places to investigate when I need to use student evidence to improve my teaching or curriculum.

Summary

I end this chapter with a reminder of both alignment and coherence as critical criteria for our practices as we work with students to be successful. By thinking of those qualities in our curriculum and pedagogy and assessment, we acknowledge that we are working in a learning system. Focusing on assessment in that broad context will help us improve all components of the assessment cycle and truly ensure student success.

Once again, I want to express gratitude to amazing colleagues who have contributed so much to this work on alignment and coherence. Pat Hutchings shares her pedagogical insights consistently in case studies, articles, and conference sessions. She restored my faith in alignment with her work. Peggy Maki has long been a source of ideas, practices, and motivation. Her books guided our early assessment days at CSUMB and continue to nudge our practices to be more student-centered. Finally, of course, Ralph Tyler will always be there to support our curricular development work with his guidance.

References

Angelo, T., & Cross, P. (1993). Classroom assessment techniques: A handbook for college teachers. Jossey-Bass.

Barkley, E. F., & Major, C. H. (2016). Learning assessment techniques: A handbook for college faculty. Jossey-Bass.

Eynon, B., & Gambino, L. (2017). High-impact ePortfolio practice: A catalyst for student, faculty, and institutional learning. Stylus.

Hackett, C., Yokota, M., & Wahl, K. (2019, April 23). Curriculum maps: Ensuring intentionality and reflection in program planning efforts [Paper presentation]. Academic Resource Conference, Garden Grove, CA.

Halpern, D. F., & Hakel, M. D. (2003). Applying the science of learning to the university and beyond: Teaching for long-term retention and transfer. Change, 35(4), 37–41. https://doi.org/10.1080/00091380309604109

Hutchings, P. (2016). Aligning educational outcomes and practices (Occasional Paper No. 26). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Jankowski, N., & Marshall, D. (2017). Degrees that matter: Moving higher education to a learning systems paradigm. Stylus.

Kuh, G. (2017). Foreword: And now there are 11. In B. Eynon & L. Gambino, High-impact ePortfolio practice: Catalyst for learning (pp. vii–xi). Stylus.

Light, T. P., Chen, H., & Ittelson, J. (2012). Documenting learning with ePortfolios: A guide for college instructors. John Wiley & Sons.

Maki, P. (2017). Real-time student assessment: Meeting the imperative for improved time to degree, closing the opportunity gap, and assuring student competencies for 21st-century needs. Stylus.

Parsa, A. (2015, April 28). Contemporary museum education: Rubricizing Socrates [Workshop presentation]. Assessment X Design Faculty Workshop, Pratt Institute.

Power, M. (1994). The audit explosion. White Dove Press.

Rhodes, T. (2017). Valid assessment of learning in undergraduate education. In B. Eynon & L. M. Gambino, High-impact ePortfolio practice: A catalyst for student, faculty, and institutional learning (p. 104). Stylus.

Sumsion, J., & Goodfellow, J. (2004). Identifying generic skills through curriculum mapping: A critical evaluation. Higher Education Research and Development, 23(3), 329–346. https://doi.org/10.1080/0729436042000235436

Tyler, R. W. (2013). Basic principles of curriculum and instruction. University of Chicago Press.

Winkelmes, M.-A., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K. H. (2016). A teaching intervention that increases college students’ success. Peer Review, 18(1–2), 31–36.

5

UNDERSTANDING AND SUPPORTING ACHIEVEMENT

Improving Assignment Prompts and Rubrics

Nelson Graff

As a writing teacher, I have (more often than I would like to admit) had the experience of reading a set of student essays and thinking, Wow, these are not what I expected. What I’ve discovered when I approached students with the difference between what I expected and what I saw was that, often, students were drawing perfectly logical (if surprising) conclusions about what they should do from the assignment prompts I’d written. I’ve learned from those experiences to be more careful, but I still occasionally get caught off guard by my own failures to imagine how students may respond to what I ask. I’m sure I’m not the only faculty member to have that experience, and a version of that experience informs this chapter.

Reflecting on Our Previous Assessment Experiences

Dan, our coauthor and director of our office of teaching, learning, and assessment during the time I describe in this chapter, is fond of saying, “Assessments are only as good as the conversations they produce.” And the assessment collaborations he’s facilitated have evoked rich—and sometimes richly contentious—conversations. One topic that kept coming up when we looked at student work was how we faculty wrote our assignment prompts—those handouts that tell students what they’re supposed to do when they’re writing, or preparing a presentation, or researching, or doing math or science assignments.

Adelman et al. (2014) observed that “assignments developed by faculty are the key both to students’ development of expected proficiencies and to the gathering of necessary evidence regarding meeting the proficiency standards of the degree” (p. 20). Because our institution-wide assessments have used class assignments, what we “discovered,” perhaps unsurprisingly, is that most students do what the prompt tells them to do rather than what we assume they should do. For instance, we may assume that it’s obvious that an essay based on library sources should include a reference list, but if it’s not written on the assignment, students may not include it.

The importance of assignment prompts came up repeatedly in the reflective conversations we had when we got together to examine student work for institution-wide assessment of the five “intellectual skills”: critical thinking, information literacy, quantitative reasoning, written communication, and oral communication. Collectively we call those five our Undergraduate Learning Outcome (ULO) No. 1, intellectual skills, a learning outcome intended for all undergraduate students to achieve, regardless of major.

When the first group—focused on information literacy and critical thinking—tried to use adaptations of the Association of American Colleges & Universities (AAC&U) VALUE rubrics to assess student products, they were a bit stumped. They realized that they couldn’t tell whether the papers in front of them failed to meet criteria because the students hadn’t achieved at the levels described in the rubrics or because the assignment prompts didn’t ask students to do the work required to meet those criteria. Then the quantitative reasoning group, the written communication group, and the oral communication group did our assessments and realized we had the same problem: We couldn’t tell whether we were not teaching students well enough or we just weren’t asking them to demonstrate what they could do. For instance, the description of the criterion “Position” on the critical thinking rubric suggests that students improve across the levels in their ability to identify assumptions. But few assignments explicitly ask students to identify assumptions at all.

In this chapter, we describe for you our response to that situation. We review the wider interest in the connection between assignment design and student success. Then we describe what we know about writing assignment prompts that support students in producing their best work, illustrating our California State University, Monterey Bay (CSUMB) answers to the question of how to create good assignment prompts. Most of the discussion in this chapter focuses on the work of individual faculty—how we can think about and improve assignments and assignment prompts in our own classes. There are sections in the chapter that describe leading this work with other faculty, but even there, the thinking might help individuals apply what we’ve learned. We close with the challenges of that work and our initial responses to those challenges.

Assessment and Assignment Prompts More Broadly

We’re certainly not the only ones to notice the connection between the tasks students are assigned and students’ achievement. One of the headline conclusions from the VALUE project noted, “Early results point in several ways to the importance of the assignments in students’ abilities to demonstrate higher, second-order quality work. In short, what institutions ask their students to do makes a difference for the quality of the learning” (McConnell & Rhodes, 2017, p. 4). Similarly, the National Institute for Learning Outcomes Assessment (NILOA) has been focusing on the importance of quality assignments for some time. According to Hutchings et al. (2014), the publication of the Degree Qualifications Profile (DQP) “puts assignments, and the faculty work of creating them, at the center of student assessment” (p. 5). Swarup has described this work in more depth in his chapter 3 on outcomes. A NILOA survey of provosts also reports that lots of colleges and universities around the United States are focusing their efforts to improve instruction on assignment design (Jankowski et al., 2018).

There’s even research (beyond our own assessments) to demonstrate that better assignments are connected with better student work (e.g., Hutchings et al., 2014; McConnell & Rhodes, 2017; Winkelmes et al., 2016). Winkelmes and colleagues (2016), for example, studied the Transparency in Learning and Teaching (TILT) framework for assignments, which we’ll describe in greater depth later, trying what they considered a small change to teaching practice. They redesigned just two take-home assignments to help students see more clearly the whys and hows of the assignments and found that just that small intervention improved outcomes. Such simple attention to assignments can have powerful results for individual faculty and their students. What’s more, research shows that with better assignments, students report more deep learning and engagement (Anderson et al., 2016). That research draws on questions added to the National Survey of Student Engagement (NSSE) and describes the elements of writing assignments that lead to such engagement.

The insight that good assignments improve student performance offers promise for faculty and administrators, but it also raises questions. What makes an assignment good? How do we improve the quality of our assignments and assignment prompts? At CSUMB, we have developed materials and workshops to do just that, and we will share them with you, our readers.

What Makes an Effective Assignment?

We’ll ask you first to consider the difference between assignments and assignment prompts. One way to think of this distinction is that assignments are what we ask students to do. Assignment prompts are how we ask students to complete those tasks. For instance, it’s common in writing classes to assign a research-based essay. That’s the assignment. But we each describe those assignments differently on our handouts. Those are the assignment prompts. We have already described assignment prompts as “those handouts that tell students what they’re supposed to do when they’re writing, or preparing a presentation, or researching, or doing math or science.” If we are to design effective assignments, we need to consider both the assignments we are asking students to complete and the prompts that elicit student work.

When Wiggins and McTighe (2005) described effective assignments, they were referring to the tasks that students do to demonstrate achievement. Ewell (2013) similarly described an assignment as “unavoidably elicit[ing] a demonstration of the competency [to be assessed]” (p. 13). Our goal in this book is to help you make connections between assessment and student success. For faculty in our individual classrooms, and for campus-wide assessments that draw on work students produce in classrooms, the assignments we ask students to produce are great places to make those connections. You could (and I often do) come up with a great, creative assignment—or so I think. But if the assignment prompt doesn’t communicate well, students don’t learn what I want from doing the assignment, and I don’t learn what I want from reading their work. So let’s consider first what makes an assignment great; we’ll think about prompts after that.

The field of composition studies has focused a good deal of attention on assignments for writing. The large-scale study mentioned previously by Anderson and colleagues (2016) involved a collaboration between the Council of Writing Program Administrators (CWPA) and the NSSE. That study found three qualities of writing assignments that correlated to positive outcomes on the NSSE: interactive writing processes, meaning-making writing tasks, and clear writing expectations. When those qualities were present, students reported higher engagement in deep learning and improved personal and social development. As I describe these qualities in the following paragraphs, think about how they apply beyond writing assignments to whatever kinds of tasks you ask students to complete.

What Anderson and colleagues mean by interactive writing processes is that there are deliberate opportunities for students to interact with someone (each other, the professor, a tutor) built into the assignment. These opportunities might happen throughout the writing process—talking to generate a topic and to develop ideas, receiving feedback in order to revise, or even visiting a tutoring center.

And meaning-making tasks are those that require students to make new knowledge by critiquing, analyzing, applying, integrating—in other words, somehow transforming—information in their writing. Bean (2011) further explained meaning-constructing tasks by suggesting that such tasks give students “an authentic problem” and situate that problem within a rhetorical situation—giving “students a role or purpose, a targeted audience, and a genre” (p. 98).

I’ll illustrate what an authentic problem looks like with two versions of a physics assignment from my rusty memory of first-semester mechanics. (Any physics instructors out there, please forgive me.) The first, very conventional assignment might ask students to calculate the distance traveled by a block of mass m moving at velocity x with coefficient of friction y and no positive force acting on it. A second, more authentic problem might ask students to evaluate the 4-second guideline for following distance in a car, given certain details about the mass of the car and passengers, the average friction of good-quality tires when braking, and the speed limit. Then it might ask them what conditions would lead them to use a different following distance while driving.
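To make the contrast concrete, here is a minimal sketch of the physics the authentic version rests on, assuming a friction-only braking model (writing $v$ for speed and $\mu$ for the friction coefficient); the specific numbers (a 65 mph speed limit, a dry-road $\mu$ of 0.8) are illustrative assumptions, not values from the original assignment:

$$a = \mu g, \qquad d_{\text{stop}} = \frac{v^2}{2\mu g} = \frac{(29\ \text{m/s})^2}{2(0.8)(9.8\ \text{m/s}^2)} \approx 54\ \text{m}, \qquad d_{\text{4-second gap}} = 4v \approx 116\ \text{m}.$$

Worked this way, the problem surfaces exactly the reasoning the authentic version invites: the mass of the car and passengers cancels out of the stopping distance, the 4-second gap leaves roughly a factor-of-two margin for reaction time, and a lower $\mu$ (wet pavement, worn tires) or a higher speed quickly erodes that margin.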

In their book on community college writers, Tinberg and Nadeau (2010) also generated recommendations for effective assignments. They listed six qualities for good assignments, three of which apply to the assignment itself and its place in the class rather than to the assignment prompt, as shown in the following:

1. Develop incremental stages for complex writing tasks.
2. Allow for formative and substantive feedback.
3. Provide ample opportunities for drafting. (pp. 116–118)

You can easily see that these qualities could work for any assignment—having students develop their responses in stages, allowing for feedback, and providing students opportunities to try out their ideas first. You can also see that all three of these qualities connect to “interactive writing processes” and are thus likely to lead to better student engagement and learning.

Those descriptions and that list align well with some of the criteria that NILOA has published as part of their “Assignment Charrette Toolkit,” particularly feedback for improvement and scaffolding and integration. (The NILOA folks also include engagement and a focus on equity.) Hutchings et al. (2018) wrote, “An effective assignment is not an island. It is a part of a larger trajectory of connected assignments, courses, and experiences that prepare students to succeed” (p. 16).

Such a description mirrors Wiggins and McTighe’s (2005) description of backwards design of instruction: the assignment describes the evidence that students have learned the knowledge and skills intended, and instruction is designed to ensure that students have the knowledge and skills they need to succeed on the assignment. The ideas of scaffolding and backwards design suggest one way of thinking about the place of assignments in the life of the classroom.

Soliday (2011), however, suggested that preparing students effectively to complete writing assignments requires more than activities that develop the appropriate skills. She draws on a rhetorical understanding of genres as social actions or practices. When you think about genres in this way, you see that they develop because social situations repeat, and patterns develop for how we respond to those situations. And those patterns represent how we think and act in those situations. For instance, a lab report is not just a form; it embodies a kind of intellectual work and a series of interactions in a professional community (see Bazerman, 2000; Carter, 2007). Soliday (2011) suggested, therefore, “Because a prompt embodies a social practice, we would not give assignments as much as we would try to enact them in our classes” (p. 3, emphasis in original). Soliday claimed that embedding writing in the social contexts in which the communication makes sense encourages students to internalize the expectations of those assignments. Think about it—teachers who treat their classes as communities of scholars studying a topic together can more reasonably expect students to treat the scholarly essay as a communicative tool than those who read only literature and then write essays about the literature.

Note that such a practice also connects back to Anderson et al.’s (2016) description of interactive writing processes. If we embed writing assignments into the communication practices of our classes, the process of giving and receiving feedback is a natural part of that social environment. It becomes less “a step in the process” and more “just how we do things in this environment.”

A colleague had a related experience in her environmental communication course, in which multiple genres are read and applied, including policy. Her students were not reading (or were having difficulty reading) the statutes and policies that they needed to understand. Once we got into a discussion of why and how professionals in her field read such texts, she realized that she was taking them out of context. She was asking students to read the statutes and policies as they might read articles or textbooks, and they were struggling to engage with them and understand them. Once she started treating those readings as part of the problem-solving process that led to proposals for new policies or environmental activism—the way they are used in the social contexts of her field—she had much better luck.

Even in math classes, calculations and proofs serve purposes in particular social contexts. Calculations often represent the solutions to real-world problems, and proofs engage a scholarly conversation in the discipline. Enacting those practices in math classes can help students understand how such genres work.

As this discussion suggests, an assignment, however good on its own, can only really be seen as effective in the context of teaching. And though Soliday focused on writing assignments, we can see all of the tasks that we ask students to do as opportunities to practice the “ways of doing” of our disciplines (Carter, 2007). And that means we have to pay attention to the contexts in which they engage those tasks. We have to think carefully about how we structure our classrooms and how that structure sets students up to approach our work together. If we want students to be active thinkers and doers, how much practice are we giving them in taking on those roles?

So I hope that helps you think about effective assignments. I hope, too, it helps you connect with one of our book’s themes, that the best assessment is closely tied to curriculum and pedagogy. Let’s consider next the prompts that ask students to complete those assignments.

What Makes an Effective Assignment Prompt?

Despite Soliday’s caveat, effective assignment prompts certainly matter for student success. According to Anderson et al. (2016), one of the qualities of an effective assignment is “clear writing expectations.” We can logically take that beyond assignments for written work to suggest that clear expectations would make most assignments more effective. And that’s where we get into a discussion not of the assignment but of the assignment prompt.

One good place to look for an answer to the question of what makes good prompts comes from the TILT project (Winkelmes et al., 2019). After many years of focusing on transparency in teaching, Winkelmes started the TILT project in 2008 with a TILT Higher Ed survey exploring college students’ self-knowledge and confidence in their academic fields. TILT, by the way, is not just an acronym. It also represents a foundational idea for Winkelmes, that faculty can make a big difference in student outcomes with a small change (a tilt) in their practice. In this case, the tilt is in “offer[ing] students an honest look at the teacher’s rationale for what students are required to do” (p. 1). Drawing on those survey results and her work with faculty, Winkelmes developed the TILT framework for assignments, which suggests that effective assignment prompts provide students with clear information about the purpose, task, and criteria for success for the assignment.

In researching the TILT framework, Winkelmes and colleagues discovered that when faculty changed assignments to make that information explicit, students’ attitudes about themselves as learners as well as their performance in classes improved (Winkelmes et al., 2016; Winkelmes et al., 2019).

Other research in composition studies also highlights the need for transparency in assignments. For instance, the other three of Tinberg and Nadeau’s (2010) six qualities for good assignments pertain specifically to making assignment prompts more transparent:

1. Show students what success looks like.
2. Spell out criteria for success.
3. Suggest processes for succeeding. (pp. 116–118)

And Bean’s (2011) explanation that “meaning-constructing tasks” include “an authentic problem” situated within a rhetorical situation (p. 98) has implications for assignment prompts as well, requiring that instructors represent the task as a problem to be solved and include information about the audience and purpose of the writing. Bean’s suggestions are particularly significant because Melzer (2014) found that the purposes and audiences for writing assignments across the curriculum in college are often unclear or inauthentic. As Melzer wrote, “Most of the assignments with an informative purpose give students an extremely limited view of academic discourse, instead asking them to simply display the ‘right’ answer or the ‘correct’ definition to the instructor through a recall of facts” (p. 22). What is suggested by the assignments is that the professor is the audience, and the purpose for the assignments is evaluation of students. For instance, Melzer quoted an environmental science exam question, noting that the “instructor is looking for ‘phrases and sentences that show me what you know about a topic’” (p. 23). If we want students to learn the kinds of thinking implied by different tasks, we have to do better. This next section describes our efforts to answer how.

So How Do We Help Faculty Write Better Assignment Prompts?

After several assessment projects reminded us that faculty needed help writing better assignments, the assessment coordinators, Dan, and I decided that we needed to create a workshop for that purpose. We scheduled the first assignment-design workshop for January of 2017 and started meeting to plan it. To no one’s surprise, that process was more complicated than we thought.

Once we started talking about assignments, we came to some interesting conclusions. We realized, first, that information literacy and critical thinking really couldn’t (or perhaps shouldn’t) be assessed on their own. That is, students found and used information to produce a written or spoken or mathematical text of some kind. Similarly, they applied critical thinking in the course of creating some kind of text. When we realized that, we decided to integrate the most relevant criteria from the information literacy and critical thinking rubrics into the others, creating three integrated rubrics:

• Written communication (with information literacy and critical thinking)
• Oral communication (with information literacy and critical thinking)
• Quantitative reasoning (with information literacy and critical thinking)

And although that may sound straightforward, it was anything but. It required lots of meetings with groups of faculty in all of the areas, and the integrated rubrics were not finalized before the January 2017 workshop, but they were started.

Once they were complete, we realized that the rubrics themselves were unwieldy. By themselves, they would not really guide faculty in writing better assignments. So, like faculty at Utah State University, we followed the lead of our librarians (Wishkoski et al., 2018). In response to their assessment of information literacy, the information literacy assessment team created an “assignment guide” that provided questions to help faculty consider how explicitly they were asking students to engage in the information literacy practices they recommend. Using their model, we developed “assignment guides” based on the integrated writing, oral communication, and quantitative reasoning rubrics. Those guides provided questions for faculty to consider as they revised or designed their assignments. The criteria on the rubrics became categories of questions, and the questions themselves were translations of the descriptions of excellent performance. All of the assignment guides are available online (https://digitalcommons.csumb.edu/ulos/). Take a look, for instance, at Figure 5.1, the row of the written communication integrated rubric called “Supporting Materials,” which is adapted from the AAC&U VALUE information literacy (IL) rubric, and Figure 5.2, which is our translation of that criterion on the assignment guide.

Figure 5.1.  Supporting materials rubric row.

Supporting materials (IL)

Level 4: Chooses a variety of information sources appropriate to the scope and discipline of the task. Selects sources after considering the importance of multiple criteria, such as relevance to the topic, currency, authority, audience, and bias or point of view.

Level 3: Chooses a variety of information sources appropriate to the scope and discipline of the task. Selects sources using multiple criteria, such as relevance to the topic, currency, and authority.

Level 2: Chooses a variety of information sources. Selects sources using basic criteria, such as relevance to the topic and currency. Sources are mixed with regard to authority.

Level 1: Chooses too few information sources. Selects sources using limited criteria, such as relevance to the topic. Authority of sources is questionable.

Figure 5.2.  Supporting materials assignment guide.

Supporting materials (IL)

• What kind(s) of sources are called for explicitly in the prompt (e.g., peer-reviewed literature only; are newspapers, magazines, blogs, and other forms of popular media acceptable; etc.)?
• What guidance does the prompt offer in terms of quantity and diversity of sources?
• How does the prompt engage students in establishing or questioning the credibility of cited experts and other evidence?

As I hope you see, if faculty ask themselves such questions as they compose or revise their assignments, they become much more conscious of the knowledge and experience they are expecting students to draw upon to write. In an anonymous feedback form after one workshop, a faculty member wrote about the assignment guides:

I will use this handout as a checklist when I am designing assignments. For example, my assignments sometimes lack “one central question” for students to respond to, and this obviously leads to confusion. I see now how articulating that question clearly (rather than having it only exist in my head!) can produce better student work.

We explain on the assignment guides that we don’t expect faculty to be explicit about every detail implied by the questions asked. Indeed, some questions should lead to changes in instruction rather than instructions. But we do encourage faculty to make thoughtful and intentional decisions about what they do and don’t make explicit on their assignments. In Figure 5.3, you’ll see the language that appears in some form on all of the assignment guides.

Sometimes, the translation from rubric to assignment guide is not quite as direct as it is for “Supporting Materials.” For instance, Figure 5.4 is the first row of the written communication rubric, “Issue/problem,” which is adapted from the AAC&U VALUE critical thinking rubric description of “Explanation of the Issues.” We translated this on the assignment guide to the questions shown in Figure 5.5. In this case, the questions try to get at what is meant by the descriptors on the rubric. When we use these assignment guides in workshops, we are discovering that some of the questions may need revision, taking too much from the discipline of composition studies to be readily useful for faculty in other disciplines.

Figure 5.3.  Using the assignment guides.

This assignment guide is a tool educators can use to critically examine and improve their assignment guidelines for the purpose of helping students produce better work. There is no expectation that assignments explicitly address all questions posed in the guide. Rather, each guide poses questions to help educators make decisions about what kinds of prompts to include—or not to include—in their assignment guidelines.

As you review the assignment guidelines, respond to the questions below (generated from the written communication integrated rubric) and revise your assignment prompts if appropriate and helpful to students. Depending on course level and prerequisites, it may be reasonable to expect students to know how to respond without explicit prompting. Consider providing students with a work sample that illustrates excellent performance.

Figure 5.4.  Issue/problem rubric row.

Issue/problem (CT)

Level 4: Clearly states, comprehensively describes, and fully clarifies the issue/problem to be considered: defines key terms, explores ambiguities, determines boundaries. Delivers all relevant information needed for full understanding.

Level 3: Clearly states, describes, and clarifies the issue/problem to be considered: defines key terms, explores ambiguities, determines boundaries. Delivers relevant information necessary for understanding (understanding is not seriously impeded by omissions).

Level 2: States and describes issue/problem to be considered, leaving some terms undefined, ambiguities unexplored, boundaries undetermined, and/or backgrounds unknown. Delivers information needed for basic understanding (more information needed for full understanding).

Level 1: States issue/problem to be considered generally. Delivers information needed for minimal understanding (more information needed for basic understanding).

Figure 5.5.  Issue/problem assignment guide.

Issue/problem (CT)

• Does the prompt define for students what is at issue, or should students define their own issue? If the latter, how explicitly does the prompt define for students the limitations on the appropriate range of issues?
• How explicitly does the prompt define the urgency/need for response?
• How explicitly and narrowly does the prompt ask students to define what is at issue in the task?
• What information does the prompt offer in terms of the audience’s background knowledge?

For me as a writing instructor, working to bring information literacy and critical thinking into a writing rubric helped me see aspects of teaching writing that had been somewhat tacit before. For instance, teaching writing rhetorically, I had encouraged students to begin their essays by establishing an exigence—what Bitzer (1968) described as the urgency that prompts a response. But the critical thinking rubric language—“Clearly states, comprehensively describes, and fully clarifies the issue/problem to be considered: defines key terms, explores ambiguities, determines boundaries. Delivers all relevant information needed for full understanding”—helps me explain more thoroughly to students what that looks like in an essay. And though I’ve long taught students how to find and select sources, the information literacy criteria of “Supporting Materials” and “Use of Support” help me think in a more well-defined way about the skills I need to teach students (and the requirements I must explain on my assignment prompts).

The rubrics have also proved to be useful to faculty. Faculty get ideas from the criteria and descriptors on the rubrics that help them develop their own assignment-specific rubrics. But based on practice with the integrated rubrics, the assessment coordinators realized that faculty needed more information to use them effectively. We therefore developed what we called “rubric guides” based on the “Thinking Behind the Rubrics” documents created to improve norming for the Performance Assessment for California Teachers project out of Stanford University (https://scale.stanford.edu/teaching/pact). The rubric guides provide detailed explanations of each of the criteria on the rubric and provide examples to illustrate the difference between Levels 2 and 3, the line at which we determined proficiency for graduation. For faculty learning how to use the rubrics, the guides help them orient themselves. In scoring sessions, faculty often look back at the guides when they’re wrestling with whether a piece of work meets proficiency on a certain criterion. For example, see Figure 5.6 for “Issue/Problem.”

Separately, one of the librarians at CSUMB began promoting the transparent assignment design framework from TILT. The three of us—the librarian, Dan, and I—adapted that framework to embody insights from composition studies and teaching for transfer that suggested that providing students an authentic (or semiauthentic) communicative context also supported students’ learning. Whereas the TILT framework includes purpose, task, and criteria, we added a fourth section, “Communicative Context,” which asks instructors to define the purpose and audience for students to communicate their learning. This hearkens back to Bean’s (2011) suggestion that assignments should give “students a role or purpose, a targeted audience, and a genre” (p. 98) and to Soliday’s (2011) recognition of the importance of context for the genres that students create.


Figure 5.6.  Issue/problem rubric guide.

Issue/Problem (CT)

1. Big ideas and their progression

Definition of issue—Writers both respond to and create an urgency for the response in writing. They do so by defining a problem or situation and highlighting what is at issue about that situation. In the physical sciences, issues are widely agreed upon, and the definition of the issue may be abbreviated or elliptical. In the humanities and social sciences, a great deal of rhetorical work may go into defining a situation and establishing that some situation is problematic and requires resolution. In applied physical scientific research, considerable rhetorical work similar to that required in social sciences may be needed. Across the rubric, this idea progresses from a vague or general definition of the issue to a precise, narrowly bounded definition.

Key terms—In defining an issue, writers must often negotiate ambiguities of terms used to describe the issue. Often, the terms have various meanings (for instance, in popular versus academic contexts or among disciplinary contexts or even within a single discipline). Levels of performance vary according to the proportion of ambiguous terms clarified and the quality of that clarification.

Background information—In order for readers to understand both the situation that is problematic and what is problematic about that situation, writers must provide some background information. In physical sciences, often very little background information is needed, whereas in applied sciences and disciplines in the humanities, often a great deal of background information is needed to ensure clear communication of the issue. The levels vary in terms of this idea by the sufficiency of the information provided in order for readers to have a full and rich understanding of the issue addressed.

2. Level differences

At level 2, a paper can score a 2 on this criterion by framing the issue too broadly or by addressing too many different possible focuses. Or a paper may score a 2 because the author may not define the issue and associated terms clearly for the reader or explain the background enough for the reader. A level 2 paper may simply expect the reader to fill in a lot of information regarding the issue, leaving the reader confused as to the problem or issue being presented.

At level 3, the narrow and focused definition of the issue makes a paper a 3 on this criterion. Terms such as complexity or difficulty need to be unpacked both in terms of their meaning and the criteria used to determine them in order for a paper to score as proficient in light of this criterion. At level 3, a writer also contextualizes the issue, providing enough background information for readers to understand the issue and why it matters.

Such contexts will ideally help students imagine themselves truly publishing the texts (in the broadest possible sense) they compose. For instance, in Figure 5.7, I include the communicative context section of an assignment for my 1st-year writing class. Our new framework for assignment prompts, then, includes purpose, communicative context, task, and criteria for success.


Figure 5.7.  Communicative context for research-based essay.

You will write this article as a piece for an undergraduate community of scholars. While you will focus on analyzing an issue in your local community—CSUMB—because undergraduate research journals like the ones you have been reading are often online, you should compose for multiple audiences—from your colleagues in this class to the wider CSUMB community to others interested in understanding issues that affect college undergraduates. A faculty member in HCOM, Dr. Medina-Lopez, is working with her students to launch an undergraduate writing research journal—Writing Waves. You should consider submitting your writing to future editions of the journal. Submissions will be due in early spring semester.

How Do We Help Faculty Use These Tools?

In order to support faculty in using these newly designed assignment guides, and to take advantage of insights from the TILT projects, we developed two assignment-design workshops for faculty. Although faculty assignments can benefit from both of these frameworks, we found that introducing both at the same time made the workshops unwieldy and overwhelming.

In one workshop, faculty are introduced to the assignment guides, see examples of assignment prompts that make different choices with respect to the questions on the guides, and work in small groups to interrogate their assignments (or the assignments they are hoping to develop) using the questions on the guides. Assessment coordinators for all of the intellectual skills sit in on those groups to provide support. To market the workshops, we called the first one "Designing Assignments That Lead to Better Student Work." In the other workshop, faculty learn about the transparent assignment design framework, see examples of assignments modified using the framework, and practice modifying their own assignments.

Feedback about the effectiveness of both workshops has been overwhelmingly positive. Faculty appreciated seeing assignments before and after the application of the framework and working with one another to make their assignments more transparent. One faculty member, explaining that they would encourage others to use the transparent assignment design framework, wrote, "1—It forces you to think critically, pedagogically, about the point of the assignment. 2—It provides clear and explicit instructions to the students that make the assignment more relevant in their lives." Exactly!

Both workshops share some commonalities with NILOA's (n.d.) assignment design charrettes, which guide faculty in a collaborative process of reviewing each other's assignments. Some versions of the charrettes, in fact, use the TILT framework in their discussions. Our workshops, though, are


more narrowly focused on assignment prompts as communicative tools, and the assignment guides we’ve developed give faculty a more fine-grained tool for analyzing the connections between their prompts and the skills faculty hope students will demonstrate in their responses to the assignments.

How Do We Bring Peer and Student Feedback to the Assignment Design Process?

Bean (2011) suggested faculty peer review for assignment prompts, asking other faculty for feedback on the handouts before giving them to students. And Hutchings et al. (2014) encouraged faculty "to invite students to engage in 'user testing' of draft assignments" (p. 18). At CSUMB, we combine both recommendations with a service we learned about from the University of Washington Bothell, an "assignment review service."

I've teamed up with the Cooperative Learning Center (our multidisciplinary peer tutoring center) to offer faculty feedback on their assignment prompts. A faculty member sends the prompt to me, as director of communication across the disciplines (CAD). I anonymize the prompt and forward it to the writing tutor coordinator, who asks a tutor or two to review the prompt, responding to two questions: What do you know to do based on this assignment prompt? and What questions or confusions do you have about this assignment? At the same time, I review the prompt with the appropriate assignment guide and develop my own feedback. Once the tutors provide their feedback, I synthesize the two sources of feedback and send the synthesis back to the instructor.

The tutor comments are especially helpful to faculty. As students themselves, tutors notice things that might confuse other students but that I might gloss over, asking questions about the real purpose of the assignment, section headings in the paper, how sources are to be cited, and how closely a student essay should match a provided model essay. Yet these tutors also read assignments more carefully than the students in classrooms may read them. For a faculty member, seeing that a tutor had to look carefully at the rubric in order to figure out what to write is itself helpful information.

In casual conversation, the writing tutor coordinator has mentioned that the tutors really enjoy looking at faculty assignments and providing feedback. Because students who come in for tutoring usually bring assignments with them and because the tutors are students themselves, the tutors can usually zero in pretty quickly on elements that are likely to confuse the readers of the assignments. For instance, tutors sometimes point to terms or phrases that instructors use in their assignments that may be less clear than instructors


think, such as critical or appropriate paragraphing. Sometimes, tutors ask the same questions I do based on the assignment guides (tutors don’t use the assignment guides for their feedback).

What Do Other Institutions Do?

As I noted, I got the idea for the assignment review service from the Writing and Communication Center at the University of Washington Bothell (https://www.uwb.edu/wacc/what-we-do/assignment-review), though their approach is solely to ask tutors (whom they call peer consultants) to respond to the assignments. But we are hardly the only two institutions trying to help faculty improve their writing assignments. Other campuses in the California State University system have been excited by the assignment design work we have done with faculty and have adapted our work to their own contexts. San José State University, for instance, invited us to present to their faculty about the integrated rubrics and assignment guides. And with the support of Julie Stein (ALA), the educational effectiveness project manager at California State University, East Bay (CSUEB), a multidisciplinary team of faculty developed their own assignment guides based on ours (CSUEB, n.d.). Our two institutions presented together about that work at the Academic Resources Conference of our accrediting agency (WSCUC ARC). In Figure 5.8, for instance, is CSUEB's adaptation of an assignment guide for the rubric description we previously examined: "Issue/Problem."

Figure 5.8.  CSUEB assignment guide—Statement of purpose, thesis, or controlling idea(s).

Descriptions & Examples for Each Rubric Category

Category 1: Statement of purpose, thesis, or controlling idea(s).
• How explicitly does the prompt ask students to define what the central idea is?
• What information does the prompt offer in terms of the purpose for which students are writing/composing?

Example 1: Take a position on the issue of globalization in response to the following questions: Would you agree that corporations such as Starbucks, Walmart, and McDonald's have had a generally positive or negative impact on global culture? In your opinion, when these corporations build their facilities overseas, do they generally respect the local cultures in which they operate or not?

Example 2: Describe your topic, and write up your paper thesis (In this paper I am . . . and I will . . .). What particular dimension of medical anthropology are you studying? (e.g., norms, values, ideals, myths, popular misconceptions, social problems, etc.). Tell me why you selected this paper topic and why it is important and relevant to the topic. What is the clear and specific question you plan to answer in this paper?


You can see that the CSUEB folks adapted some of the questions that we include on our assignment guide, but they've also added examples to illustrate what that category looks like in practice.

Other institutions focus more on professional development than resources. Significantly, Condon et al. (2016) described evidence that professional development with faculty can lead to improved student achievement. Some of the professional development activities they studied at Carleton College and Washington State University focused on assignment design, particularly writing across the curriculum (WAC) workshops that led faculty to think carefully about how they write their assignment prompts, how they can use miniassignments in their classes, and how they scaffold the assignments. And NILOA has published guidelines on their website for "assignment charrettes"—their term for "collaborative peer review" of assignments. They also list several institutions that have adapted the process to improve faculty work.

What’s Tough About This Work? Our work shared one of the largest challenges for any professional development programs: reaching faculty. When we first offered an assignment design workshop, over 20 faculty attended. But over time, attendance has dwindled to the same faculty who attend all of the professional learning opportunities offered. Although it’s exciting to see familiar faces, that limits the impact of our work to those who notice and welcome such general calls. We are working to convince departments to use department meetings for professional learning opportunities, bringing these workshops to faculty where they “live” and perhaps tailoring them to disciplinary groups. One strategy we’ve adopted that may help to address that challenge comes from the University of Minnesota. Pamela Flash (2016) developed a writing-enriched curriculum program (WEC) that involves majors investigating writing (defined broadly) in their disciplines, setting goals for student achievement in their majors, and developing plans for reaching those goals. At CSUMB, we’re beginning a reading/writing-enriched curriculum project. Part of the plans that departments will develop will include descriptions of the support they need to improve their reading/writing instruction. Among the resources we will offer departments are the assignment guides, rubrics, and rubric guides we have developed and workshops on assignment design. As departments take on these resources, they will move into the daily work of the faculty.


Another strategy we’ve started to adopt and hope to expand in the future is the use of digital resources. In the spring of 2016, we responded to a need that faculty had identified for help in teaching students analysis skills by creating a series of workshops. We video recorded those workshops and edited them, making them available on our university’s digital commons (https:// digitalcommons.csumb.edu/ta_workshops/). And both communication across the disciplines (https://csumb.edu/cad/) and teaching learning and assessment (https://csumb.edu/tla) include resources for faculty on our websites. Such digital resources allow faculty to access them on their own time, so we hope to create more such resources in the future. In the meantime, we experimented with using Zoom to conduct an assignment design workshop, and that went well. We also found it challenging to keep the language across the rubrics, assignment guides, and rubric guides consistent. The VALUE rubrics from which our rubrics were developed overlap in sometimes inconsistent ways. Because we were combining rubrics, those inconsistencies sometimes caused us some vexation. For instance, when we were working to include a criterion on the writing rubric that related to evidence, we looked to the written communication, critical thinking, and information literacy VALUE rubrics. Figure 5.9 shows what we saw as we compared. The first thing that probably jumps out at you about that matrix is that there are relevant criteria from information literacy. The obvious answer then is to use those two criteria because they clearly elaborate the components of using information in writing. But then, whereas the information literacy criterion about the use of information suggests “purpose,” the critical thinking criterion describes “a comprehensive analysis or synthesis.” And whereas the information literacy criterion about selecting evidence focuses on criteria for selection of sources, the critical thinking criterion describes questioning the viewpoints of experts. We ended up using the information literacy criteria almost unchanged, but it was a long discussion. Although such discussions take time and energy, they help us develop a shared understanding that supports collaboration and our ongoing work. What’s more, the process of developing the integrated rubrics and combining such language happened in specialized groups—one focusing on writing, one focusing on oral communication, and one focusing on quantitative reasoning, with consultants for critical thinking and information ­literacy participating in all three groups. As a result, there were differences in language and framing on the documents—rubrics, assignment guides, and rubric guides—that had to be resolved later by the assessment coordinators; the director of teaching, learning, and assessment; and me. Moving forward, we also know we need to collect more data on the impact of our work. We know how many faculty attended our workshops.


Figure 5.9.  Criteria related to evidence from the written communication, critical thinking, and information literacy VALUE rubrics.

Rubric: Written Communication
Criterion/Criteria: Sources and Evidence
Description for Capstone Level: Demonstrates skillful use of high-quality, credible, relevant sources to develop ideas that are appropriate for the discipline and genre of the writing.

Rubric: Critical Thinking
Criterion/Criteria: Evidence
Description for Capstone Level: Information is taken from source(s) with enough interpretation/evaluation to develop a comprehensive analysis or synthesis. Viewpoints of experts are questioned thoroughly.

Rubric: Information Literacy
Criterion/Criteria: Evaluate Information and Its Sources Critically
Description for Capstone Level: Chooses a variety of information sources appropriate to the scope and discipline of the research question. Selects sources after considering the importance (to the researched topic) of the multiple criteria used (such as relevance to the research question, currency, authority, audience, and bias or point of view).

Rubric: Information Literacy
Criterion/Criteria: Use Information Effectively to Accomplish a Specific Purpose
Description for Capstone Level: Communicates, organizes, and synthesizes information from sources to fully achieve a specific purpose, with clarity and depth.

Note. Adapted from Rhodes (2010).


We know, too, how faculty have experienced those workshops from their feedback and reflections. Yet we don't know whether they truly revised the assignments they brought with them, how effective they found the revisions, and whether they have applied that learning to other assignments (or encouraged others to do so). We have not yet made the connection between the faculty development work we have done and the improved student achievement described by Condon et al. (2016). So that's an opportunity for us to gather some data.

Finally, despite the involvement of tutors in assignment review, we did little to involve students in the work of developing the integrated rubrics, assignment guides, and rubric guides. We now realize that was a mistake, and we hope to involve students more in future efforts. We have begun to bring peer tutors into our assignment design workshops. Just as their perspectives on assignment prompts provide helpful feedback for faculty, their in-the-moment sharing of the student view of assignment prompts helps to underline key ideas of the workshops. Future efforts might involve engaging those tutors in revising the assignment guides or rubrics to embed those ideas into the tools that faculty use.

What Have We Learned?

Despite those challenges, we've learned a lot from this work about assignments and assignment prompts, about assessment, and about collaboration. A few highlights follow:

• Our understanding of student achievement is limited both by what we ask students to do and by how we ask them to do it.
• In order to have productive conversations about assignment design, it is worth distinguishing assignments (what we ask students to do) from assignment prompts (how we ask them to do it).
• Discussing assignment design can help faculty make connections among assignments, the place of those assignments in our classroom ecologies, and teaching.
• Because the assignment prompts we create are acts of communication, they benefit from the same processes as other acts of communication—reflection, collaboration, and revision.
• Similarly, professional learning activities are themselves acts of communication that benefit from reflection, collaboration, and revision.
• Our understanding of domains of skill such as critical thinking, information literacy, writing, oral communication, and quantitative reasoning benefits from cross-talk.


All of the insights that led to the development of the tools and workshops we describe in this chapter came up in conversation. Bringing faculty from across the campus together to look carefully at student work helped us see how assignments across the campus could improve. And collaborating across the different intellectual skills has been revealing in many ways. I can honestly write that none of this work could have happened through the work of any single person (or even any small group of people). It took dozens of people, each with their own perspectives and backgrounds, to talk through what we were seeing and how to address it (see Dan's chapter 7 on reflection, which describes the infrastructure of assessment at CSUMB). And that may be the greatest lesson from our experience.

References

Adelman, C., Ewell, P., Gaston, P., & Schneider, C. G. (2014). The Degree Qualifications Profile. Lumina Foundation. www.DegreeProfile.org
Anderson, P., Anson, C., Gonyea, B., & Paine, C. (2016). How to create high-impact writing assignments that enhance learning and development and reinvigorate WAC/WID programs: What almost 72,000 undergraduates taught us. Across the Disciplines, 13. https://wac.colostate.edu/docs/atd/hip/andersonetal2016.pdf
Bazerman, C. (2000). Shaping written knowledge: The genre and activity of the experimental article in science. WAC Clearinghouse. https://wac.colostate.edu/books/landmarks/bazerman-shaping/
Bean, J. C. (2011). Engaging ideas: The professor's guide to integrating writing, critical thinking, and active learning in the classroom (2nd ed.). Jossey-Bass.
Bitzer, L. F. (1968). The rhetorical situation. Philosophy and Rhetoric, 1(1), 1–14. https://www.jstor.org/stable/40237697
California State University, East Bay (CSUEB). (n.d.). Assessment resources: Faculty resources for 2019–20 Critical Thinking and Quantitative Reasoning Assessment. Academic Programs and Services, Office of Educational Effectiveness, California State University, East Bay. https://www.csueastbay.edu/aps/assessment/assessmentresources.html
Carter, M. (2007). Ways of knowing, doing, and writing in the disciplines. College Composition and Communication, 58(3), 385–418. https://www.jstor.org/stable/20456952
Condon, W., Huber, M., & Haswell, R. (2016). Faculty development and student learning: Assessing the connections. Indiana University Press.
Ewell, P. (2013). The Lumina Degree Qualifications Profile (DQP): Implications for assessment (Occasional Paper No. 16). Lumina Foundation. learningoutcomesassessment.org
Flash, P. (2016). From apprised to revised: Faculty in the disciplines change what they never knew they knew. In K. Yancey (Ed.), A rhetoric of reflection (pp. 228–249). Utah State University Press.


Hutchings, P., Jankowski, N. A., & Baker, G. (2018). Fertile ground: The movement to build more effective assignments. Change: The Magazine of Higher Learning, 50(6), 13–19. https://doi.org/10.1080/00091383.2018.1540816
Hutchings, P., Jankowski, N. A., & Ewell, P. T. (2014). Catalyzing assignment design activity on your campus: Lessons from NILOA's assignment library initiative. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
Jankowski, N. A., Timmer, J. D., Kinzie, J., & Kuh, G. D. (2018, January). Assessment that matters: Trending toward practices that document authentic student learning. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
McConnell, K. D., & Rhodes, T. L. (2017). On solid ground. Association of American Colleges & Universities.
Melzer, D. (2014). Assignments across the curriculum: A national study of college writing. Utah State University Press.
NILOA. (n.d.). Assignment charrettes. https://www.learningoutcomesassessment.org/ourwork/assignment-charrette/#1549481918909-e3d089cb-5051
Rhodes, T. (2010). Assessing outcomes and improving achievement: Tips and tools for using rubrics. Association of American Colleges and Universities.
Soliday, M. (2011). Everyday genres: Writing assignments across the disciplines. Southern Illinois University Press.
Tinberg, H., & Nadeau, J. P. (2010). The community college writer: Exceeding expectations. Southern Illinois University Press.
Wiggins, G., & McTighe, J. (2005). Understanding by design (expanded 2nd ed.). Association for Supervision and Curriculum Development.
Winkelmes, M., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K. (2016). A teaching intervention that increases underserved college students' success. Peer Review, 18(1–2), 31–36. https://www.aacu.org/peerreview/2016/winter-spring/Winkelmes
Winkelmes, M., Boye, A., & Tapp, S. (Eds.). (2019). Transparent design in higher education teaching and leadership. Stylus.
Wishkoski, R., Lundstrom, K., & Davis, E. (2018). Librarians in the lead: A case for interdisciplinary faculty collaboration on assignment design. Communications in Information Literacy, 12(2), 166–192. https://doi.org/10.15760/comminfolit.2018.12.2.7


APPENDIX 5A

Written Communication, Critical Thinking, and Information Literacy Assignment Guide (California State University, Monterey Bay)

This assignment guide is a tool educators can use to critically examine and improve their assignment guidelines for the purpose of helping students produce better work. There is no expectation that assignments explicitly address all questions posed in the guide. Rather, each guide poses questions to help educators make decisions about what kinds of prompts to include—or not to include—in their assignment guidelines.

As you review the assignment guidelines, respond to the following questions (generated from the written communication integrated rubric) and revise your assignment prompts if appropriate and helpful to students. Depending on course level and prerequisites, it may be reasonable to expect students to know how to respond without explicit prompting. Consider providing students with a work sample that illustrates excellent performance.

Abbreviations: WC = written communication, CT = critical thinking, IL = information literacy.

Issue/Problem (CT)
• Does the prompt define for students what is at issue, or should students define their own issue? If the latter, how explicitly does the prompt define for students the limitations on the appropriate range of issues?
• How explicitly does the prompt define the urgency/need for response?
• How explicitly and narrowly does the prompt ask students to define what is at issue in the task?



• What information does the prompt offer in terms of the audience's background knowledge?

Supporting Materials (IL)
• What kind(s) of sources are called for explicitly in the prompt (e.g., peer-reviewed literature only; are newspapers, magazines, blogs, and other forms of popular media acceptable; etc.)?
• What guidance does the prompt offer in terms of quantity and diversity of sources?
• How does the prompt engage students in establishing or questioning the credibility of cited experts and other evidence?

Use of Support (IL)
• What does the prompt explicitly ask students to do with that information (e.g., analyze, synthesize)?
• What does the prompt explicitly define as the purpose of the information (e.g., provide background information, support multiple perspectives, etc.)?

Position (CT)
• What role does the prompt suggest students should take in composing their responses (are they to be experts, mediators, friends, students, parents)?
• What does the prompt suggest in terms of the range of perspectives students should consider?
• How does the assignment ask students to identify their own and others' assumptions? What different categories of assumptions are students asked to consider (e.g., empirical, value, normative)?
• What information are students given about the context in which they are composing? How does the prompt engage students in examining the assumptions relevant to that context?

Genre and Disciplinary Conventions (WC)
• What information does the prompt offer in terms of the audience for whom students are writing?
• What relationship does the prompt suggest writers establish with their audience (peer to peer, expert to novice, supportive, confrontational)?
• What information does the prompt offer in terms of the purpose for which students are composing?


• What guidance does the prompt offer in terms of level of formality and specialized vocabulary for the writing?
• What guidance does the prompt offer in terms of disciplinary and/or genre conventions and expectations?

Conclusions and Outcomes (CT)
• How are students prompted to consider potential implications or consequences (intended or unintended) of their conclusions?
• What audiences are students prompted to consider in assessing the implications of their conclusions?

Academic Integrity (IL)
• What guidance does the prompt offer in terms of ethical access and use of information (e.g., personal data, clinical trials, animal trials)?
• What guidance does the prompt offer in terms of reference and citation style?
• What guidance does the prompt offer regarding the balance of paraphrase, quotation, and summary?

Grammar and Mechanics (WC)
• What guidance does the prompt offer in terms of expectations regarding grammatical correctness?


6

USING EVIDENCE OF STUDENT ACHIEVEMENT

Advancing Student Success

Amy Driscoll

In the midst of so much high-quality assessment work across the country, it is disappointing to learn that all the development is not leading to improvements of a scale to match the effort. It's disappointing, and it is also puzzling. Even for those individuals and institutions who are committed to the improvement of student learning, there are few examples to support their pledge. I once heard Peter Ewell say that we are so determined to do assessment well—that is, write ideal outcomes, design quality assignments, carefully analyze the student evidence data, and, of late, communicate our results with transparency and clarity—that we stop before those important last steps, using the information to make changes and assessing again to determine if the changes work (Ewell, personal communication, June 4, 2018). I find myself in agreement, as I have observed groups of faculty struggling for over a year toward perfect and unanimously agreed upon program learning outcomes (PLOs). We have NILOA's growing collection of collaboratively designed assignments that emerge from ambitious groups of faculty. And now we have entire chapters (ours included) devoted to guidelines and examples of how to communicate our assessment results to a wide range of audiences.

In contrast, I have searched the prominent assessment literature, interviewed assessment directors, and skimmed assessment reports and found a paucity of examples of "closing the loop" and even less mention that assessment has been conducted to determine if the changes made have achieved improvement. Occasionally there is a small, individual classroom study documenting such processes in the scholarship of teaching and research, and there are a few promising cases in assessment research. That last step,


assessing to determine if changes really improved student achievement of outcomes, is rare.

I have some ideas about why this situation is common. Much like my thinking in chapter 1, there are some dominant trends that don't support or encourage the use of our assessment data. Accreditation continues to be the driver, and that can encourage a compliance mentality for the process; however, it is clear that institutions are using varied data collection approaches with potential for actionable information (Jankowski et al., 2018). What we know, then, is that we have developed expertise in the processes that lead to a collection of high-quality student achievement data. What we may lack are the skills it takes to determine what needs to be fixed, changed, or abandoned to improve students' learning. It is not a "right answer" process. The answers may not be easily visible, or clear, or simple, or comfortable. And clearly, we may not have the expertise to locate or determine what is behind our disappointing results unless a situation is obvious. We also do not typically seek answers from our most appropriate experts—students.

There is also the issue of time, which we cannot ignore. Most campuses have a tight timetable, as suggested by accreditation, for assessing one or two outcomes per year within programs and then moving on to the next outcomes the following year. Lost in this annual shuffle is the time and energy needed to probe the data for answers about how to improve. Once faculty identify those answers, then the changes must be made, and once again student achievement data must be gathered. We are realistically talking about 3 years for each outcome if we are going to authentically learn from the assessment processes, improve student learning, and even reflect on our processes. I realize that this process may take place more regularly in your individual courses, in which you conduct informal analysis and make your course changes.

The delayed use of assessment results has an even more critical consequence: it keeps those results from supporting students who are struggling and who need additional support or tutoring. Maki (2017) urged us to close that assessment time gap if we are serious about addressing underperformance of students while they are still with us and addressing in "real time" the obstacles they face in their learning. For that reason, we must work harder to "close the loop" if we want assessment to remain a consistent and valued practice. The reflections I've shared in this section have led to the development of this chapter.

Introduction to This Chapter and Its Intentions

Writing this chapter is challenging but essential. It is clearly time to shift the attention of our assessment efforts to promoting improvement, to suggesting


change, and to providing countless impressive examples of using data for increasing success in student learning. We must motivate faculty and student collaborators to become expert sources of improvement possibilities and processes. Professional development efforts, somewhat dedicated to this end, need to shift from reviewing outcomes, assignments, alignment, and so on to the use of results for purposes of change and improvement. Fortunately, there is a growing awareness of the importance of that shift.

I'll get off my soapbox now and do my best to dedicate this chapter to three major efforts to turn the current situation around:

1. The first effort is a description of institutional culture and practices that engage faculty and staff more intentionally toward "closing the loop."
2. The second effort is an author-designed taxonomy of categories of possibilities for improvement—developed with much help from Assessment Leadership Academy (ALA) colleagues, workshop attendees, and friends.
3. The third effort is advice for collaborations among faculty and between faculty and students to connect and use insights that yield "closing the loop" results.

To complete this chapter, I will summarize with descriptions of institutions that are committed to and moving toward the systematic use of assessment data and a set of steps they recommend for increasing the practice on our campuses. I will also provide some real and motivating examples of using student learning data to make changes and assessing whether those changes are improving student learning. I am communicating with you, individual faculty, for consideration in assessing your courses, and with you, assessment coordinators, for consideration in assessing programs and institutional learning outcomes and in planning professional development in assessment.

It felt like "closing the loop" suddenly appeared one day in lieu of saying "using data to make changes for improving learning," and soon everyone was using the expression. The word "on the streets" is that the term grew out of Ralph Wolf's accreditation developments in 2002. Before we knew it, it was used across the nation in accreditation talk, in assessment publishing, and in campus reports. There were lots of visuals associated with assessment, primarily circles and loops, and so "closing the loop" made sense. That last step is noted and even described in those visuals, but the efforts fall short of finishing the last link of the loop. The other parts of the loop are abundant, but the closure doesn't happen.


Although more assessment evidence is available on campuses than ever before, information is not as widely shared as it should be and using it to guide institutional actions toward improving student outcomes is not nearly as pervasive as it should be. (Kinzie & Jankowski, 2015, pp. 73–74)

We learn that even the phrase "closing the loop" bothers some faculty. Recently I have read or heard that faculty occasionally shudder at it because it sounds like an administrative demand. For the rest of this chapter, I will refer to the process as "using student achievement evidence to improve student learning." It is more authentic about what we are doing on an ongoing basis, and the language reminds us of the importance of student learning as the motivation.

I want to follow up on the "gloom and doom" theme of my reflection and the occasional discouragement of publications to report some encouraging news from the 2018 provost survey (Jankowski et al., 2018). The results hint at some of the earliest and perhaps easiest approaches to using student achievement evidence, and that's a good place to begin. Provosts and other administrators report that the three most valuable sources of assessment data are embedded classroom-based performance assignments and assessments, use of rubrics, and national student surveys. The focus on classroom-based assessment sets a supportive stage for using data to improve student learning in the kind of "real time" Maki encouraged. Provosts also identified the most common institutional focus needed to support student learning assessment: promoting faculty use of the results of assessment, along with professional development for faculty and staff dedicated to that use (Jankowski et al., 2018). Along with the acknowledgment of those needs, provosts are intent on faculty development to not only "develop the attitude and tools to produce actionable results" but also the "skillset to use results to improve student learning" (p. 13). I intend for this chapter to contribute significantly to those needs.

As I noted in chapter 1, we need to have both empathy and admiration for faculty who have completed graduate work without preparation for teaching and curriculum development. With graduate preparation that included curriculum, pedagogy, and assessment, faculty could dig into their knowledge base of pedagogy to analyze the student evidence data and to create new approaches or practices to improve learning. This chapter will begin to remedy that challenge. First, we begin with a description of an institutional culture and related practices that will support faculty engagement and development of expertise while encouraging innovation and risk taking, inquiry, and substantive collaboration in assessment.


An Institutional Culture That Supports the Use of Student Achievement Evidence to Improve Student Learning

The importance of such a culture is obvious when one considers some of the obstacles to faculty engagement with assessment. From the faculty standpoint, assessment is often put upon them by institutional leadership and accreditation requirements and carried out without their own voice in the planning. A less expressed obstacle is the worry many faculty have that assessment data will be used against them. Added to that is the "full plate" of faculty responsibilities and the sense that assessment is one more job on that plate. And finally, the institutional reward system seldom values assessment efforts, especially when the evidence is not as good as expected. My title for the culture is a bit long—I like Linda Suskie's (2015) "culture of betterment" (p. 197), which hints at the potential of the culture.

Let me address the institutional culture with respect to developing faculty ownership of assessment and the need for assurance that assessment data will not be used against faculty. I recommend gathering prominent faculty (not necessarily those with titles, but faculty who are listened to, who might be called influencers in our online culture) and tasking them with outlining the purpose(s) of assessment for their institution. They will be cautious in their choice of language, eliminate purposes that are not wanted by faculty, and maintain an emphasis on student learning. They might want to review purposes from other institutions for ideas. Once they agree on a purpose specifically designed for their university or college, it needs to be made very public and/or approved by the faculty, or used as a reminder that all assessments should represent the culture for assessment. If work begins to veer from the purpose, there will be plenty of reminders from colleagues to stay true to it.

I am sure that most of the faculty at CSUMB do not remember the purpose statement we developed very early in the institution's history, when faculty were hesitant about what could happen with assessment data. A group of unofficial leaders gathered and came up with a statement: "Assessment is intended to enrich, extend, enhance, and expand student learning." I don't know why E's dominated the statement, but I share it because those verbs demanded that assessment be a learning experience for students. It definitely kept out assessments that didn't promote learning, and it was a good screen.

Here is a simple example of an assessment purpose statement from the Georgia Institute of Technology (Georgia Tech): "The primary purpose for assessment at Georgia Tech is improving student learning and development. This is accomplished through improvements in program structures,


course content, and pedagogy" (Georgia Tech, 2021, para. 2). The University of California, Merced chose to develop a set of principles, which you may prefer to a statement. They are clearly representative of the university.

University of California, Merced Principles of Assessment

1. The UC Merced community engages in assessment as a means of inquiry that reflects and draws on the values and practices of a research university.
2. We engage in systematic, ongoing assessment in order to continuously improve undergraduate and graduate student learning, student success, and the achievement of our research mission.
3. We apply scholarly practices to ensure that we assess what we value and not value only that which we can easily assess.
4. We share our assessment activities in ways that preserve our focus on candid engagement in improvement-oriented inquiry, facilitate the exchange of practices and meaningful insight, and address external expectations for accountability.
5. We engage our undergraduate and graduate students in the extension of our disciplinary research cultures to the exploration and evaluation of learning, teaching, and the services that support our educational and research enterprises.
6. We recognize that to have meaningful impact and to be sustainable, assessment must—
   a. Originate in the work we already do;
   b. Support institutional priorities;
   c. Be integrated into core institutional planning and decision-making processes, including the allocation of resources (budgeting);
   d. Be included in expectations for professional practice; and
   e. Be advocated by leadership. (UC Merced, 2013)

The collective processes to develop such principles or a purpose statement will engage faculty and students in the kind of discussions that are found in inquiry cultures. Those discussions also encourage faculty and student ownership of assessment and prepare them to develop and use student achievement evidence. Think about being part of the faculty at UC Merced and how you would consider your assessment decisions with those principles guiding your work.

Linda Suskie (2015) has reminded us that promotion and tenure guidelines that honor assessment efforts are probably the most powerful way to encourage faculty participation in and ownership of assessment. It's a challenging hill to climb, but with enough pressure from faculty and some administrative support, it is a possibility. In a recent case study of Washington State University, it was reported that "assessment is now being incorporated


into the faculty recognition and reward system" (Hutchings, 2019, p. 4). That decision is based on the belief that the "intellectual work that goes into meaningful assessment and improvement should be recognized in more formal, institutional ways, not simply as service, but as a dimension of effective teaching" (Green & Swindell, 2018).

Linda Suskie (2015) has also suggested exciting ideas for honoring authentic assessment efforts—a set of grants or awards for faculty whose data are disappointing or indicative of learning gaps. The grants are designed to support the work of "using evidence of student achievement to make changes or to improve" (pp. 198–199). There is also a place in our annual faculty presentations or poster days to honor those efforts to use data to improve student learning. I urge those of you in leadership roles to explore ways to communicate that those efforts are significant and important for student success.

Before We Even Begin . . .

Returning to early recommendations for evaluation and assessment, we find sage advice. Before designing outcomes and assignments, we need to lay the "groundwork" for our efforts. That groundwork consists of posing important questions, starting with the one about purpose and moving from there: What is the purpose of this assessment work? Relevant purposes may include the following:

• Becoming acquainted with our students and understanding how they learn
• Determining the effectiveness of a new program
• Demonstrating the need for increased resources
• Providing data for marketing
• Providing information for future employers
• Determining focus for professional development

More importantly, the purpose will be stated in the language of faculty and with clear intentions for the institution. The next questions: How will the data be used? Who will use it? (determining the intended uses and users):

• Boards of directors may use it to promote programs or institutions
• Administrators may use it to respond to criticism
• Faculty may use it to focus improvement efforts
• Program administrators may use it to inform employers


Again, the process of naming the users will remind faculty of the purpose and intentions of their assessment work. Sounds so simple! Yet those questions are not easily answered, because we have little practice in coming up with responses. We are used to that accreditation driver and haven't engaged in the kind of discussion those questions require. Again, there is probably no right answer to those questions, and that is probably a good thing. A brainstorm of ideas would get the thinking started and prompt an openness to the multitude of possibilities for the intentions of assessment. This is also an opportunity to weave the uniqueness of the institution into the answers.

Another starting point for questions is to explore what terms such as assessment, program assessment and review, program assessment reports, and even learning outcomes and rubrics mean. Even though we are all using those terms, they can have different meanings at different institutions, in different programs, and for different individuals. The definitions, purposes, structures, and integration of those processes and practices are not universally understood (Eggleston, 2020, p. 4). Eggleston urges us to begin our planning sessions with an exploration of terms. The term assessment probably has more definitions and examples than most of our language and is probably the best starting question. From there we may move to assessment at different levels, for different purposes, and in different forms. The discussion will be lively, stimulating, and surprising, and it will hopefully yield a shared definition.

From there, faculty and sometimes students need to pose questions to be answered with the assessment data. I regularly conduct question-raising activities in my workshops and consistently encounter hesitance and caution about raising questions. I note puzzled expressions and participants asking for more information about what I want them to do. Occasionally, I resort to sharing examples of the kind of questions I think they may have. Ultimately, with enough perseverance, we can begin creating a culture of inquiry. In such a culture, it will be typical to raise questions in meetings, in budget decisions, in planning sessions, and hopefully in the process of analyzing student evidence data.

It is important to note that there is much more use of student achievement data in an institution with such a culture. When faculty are accustomed to answering questions with student achievement data, they are more apt to raise questions again when viewing their data. It's more likely that they will be curious about the reasons behind their student achievement data. It's a practice that needs to be repeated and repeated so that it becomes how faculty work with assessment. And hopefully, their answers will be checked with a follow-up assessment once changes have been made. Again, a curiosity has developed that will urge faculty to check on whether their changes or improvements have made a difference.


When I find faculty reluctant to pose questions, I've experimented with an alternative approach—asking them to predict what they will find or learn from the assessment data. It helps to ask them to predict what learning will be strong in the data and what learning might be a struggle. Again, we have data to show that when you pose important questions before you begin assessment, you are more likely to use the assessment data. Nelson Graff reminds us that when "students predict what they will see in a demonstration experiment, they remember the experiment better" (Graff, personal communication, February 12, 2020). I've often observed faculty approach the prediction task more easily than the inquiry task. Then there's the curiosity about whether their predictions are accurate, which may prompt the next step—using the information to improve learning and, again, hopefully checking that improvement has happened.

Kinzie et al. (2017) have provided institutional examples that are motivating and convincing of this approach to involve faculty in assessment data use. Carnegie Mellon University is one of those institutional examples, in which "assessment is driven by faculty questions about student learning and effective teaching" (pp. 68–69), and much of the effort is supported by the Center for Teaching Excellence in coordination with deans, department heads, and faculty leaders. In that one-sentence description are significant lessons: faculty questions about student learning drive the work, a center supports the work, and leadership is involved in the support.

Bresciani Ludvik (2018) has described what I have been leading up to with her description of "a learning organization that is engaged in continuous collaborative and reflective inquiry and dialogue, finding ways to improve its inquiry and dialogue processes, as well as committed to the professional development of all of its people" (p. 81). She takes it further with her recommendation for "authentic generative questions posited to investigate where improvements can be made or to explore what other questions need to be asked" (p. 81). If you know Bresciani Ludvik, her additional description fits her spirit for assessment, as she has recommended that faculty engage with "a passion to discover how to improve and a playful curiosity in discovering how to improve it" (p. 81). She was speaking to individual faculty with that recommendation, but it also provides wisdom for faculty groups or professional development sessions. If you are struggling with assessment and want to improve your courses using your student evidence, I urge you to find a group or seek out one other colleague so you can use the recommendations of this chapter.

A response to Bresciani Ludvik can be found in "communities of practice"—groups of faculty committed to learning together, addressing common issues, and dedicated to a project. They are usually together for an extended time period and consequently are able to develop the kind of


inquiry that she has described. They have the time and the camaraderie to accomplish significant work. Faculty learning communities are similarly organized and effective as contexts for professional development. Kezar (2018) called them professional networks capable of "stimulating organizational learning, creating change, embodying commitments, and channeling further learning" (pp. 231–233). She affirmed their potential for increasing attention to teaching, much like our experiences at CSUMB. In those learning communities, important relationships are the basis for collaboration, for experimentation, and for affirmation. CSUMB has organized and facilitated "faculty co-ops" since the opening of the campus (over 20 years). Another community at the campus is the University Learning Outcomes Scholar Program, which is focused on assessment at the institutional level. Again, long-term collaborations yield the kind of conversations, consistent inquiry, and experimentation that support capacity for using student achievement evidence for improvement.

Earlier, when I was describing the characteristics of Carnegie Mellon's work, I noted that there was support for their assessment work. It is worth noting that there are effective ways to garner such support from your administrative leadership. You might share the vision for assessment you have developed with those faculty communities and the questions you intend to answer. You might also describe the projects and experiments created in those communities. Most importantly, be sure that your leadership understands how important it is to be focused on improvement and on assessing its success. The recent study of leadership we reported earlier tells us that higher education leaders are increasingly aware of the need to do this work, so you will be affirming its importance.

Another way of thinking about the characteristics of this culture is the "economy of sharing," which Hutchings (2019) pointed out in her description of the processes at Washington State University in their progress toward institutionalized assessment. One aspect of the sharing is simply the process of gathering groups such as deans, assessment directors, and subcommittees to share their assessment work and findings, with leadership from the director of their teaching center (p. 3). Dan Shapiro will elaborate further in chapter 8 when he writes about communication.

Ultimately, we all flourish in a culture like the one we describe: innovations in pedagogy and assessment may emerge, collaboration on exciting projects or studies may increase, and faculty ownership of assessment may become prominent. It may be the answer to our issue of needing to use student achievement assessment for its primary purpose—to improve student success. Surrounded by that supportive culture, faculty may be ready for a set of ideas on what needs to be changed or revised and the determination to

Driscoll et al_Advancing Assessment for Student Success.indb 143

03-06-2021 07:18:39 PM

144  

advancing assessment for student success

reassess to determine their success. Those possibilities are motivation for the taxonomy that follows.

A Taxonomy of Possibilities for Using Student Achievement Evidence for Improvement

I have been in awe of and have appreciated Benjamin Bloom’s work for a long time. His taxonomy of learning (Anderson & Krathwohl, 2001) provides an easily understood basic representation of the range of cognitive operations involved in learning that our students can achieve in our courses and programs. He and/or his followers then defined those levels and provided endless verbs for each level. He paved the way for faculty, who may not have much background in learning, to design outcomes that describe the learning they hope their students will achieve at an appropriate level. The verb collection promotes specificity in those outcomes, which enables both faculty and students to focus their learning efforts. Years ago, in our initial attempts at writing student learning outcomes, I am certain that Bloom saved us hours and probably days as we first wrote outcomes for our courses and later for our programs. I believe that his taxonomy also boosted faculty confidence in their early efforts. You encountered information about the taxonomy in chapter 3, with lots of examples. If by some chance you have not been introduced to Bloom’s taxonomy, please look for it while reading this chapter.

Before I continue, I return to one of the situations I described in chapter 1. I’ll remind you that the pace of assessment is a dominant force. It’s especially fast paced when studying a program with focused assessment of one or two PLOs each year, followed immediately by another two outcomes. That pace comes with fatigue and little enthusiasm for determining how or what to improve, followed by another round of assessment of the change. The kind of reflection and collaboration needed demands real focused time. I am hoping that a taxonomy may help with the time issue, much as Bloom’s did for our early efforts to design outcomes.

Another reason for the lag in using our student achievement evidence goes back to our acknowledgment that faculty often have not been prepared to teach, to assess, or to develop curriculum. My stance is that using student evidence to determine how to improve their learning requires a strong base of pedagogy, of curriculum, and of assessment. Thus, I propose the “taxonomy of possibilities” in Figure 6.1 as a way to engage in the first step of using student achievement evidence. My hope is that this taxonomy can inform both individual faculty and groups of faculty of the possibilities for improvement while saving time in the process. It provides a basis for analysis and collaboration and, ultimately, decision-making. My intent for the taxonomy is that it will serve to motivate faculty and students to discuss the possibilities for improvement.

I have also used the development of the taxonomy as a faculty development process and find that when faculty work in small groups, they can create their own taxonomy very well. I recommend having your colleagues create their own taxonomy, as the process is engaging and rewarding. Colleagues have described employing the approach with success in generating interest in using student achievement evidence to improve learning. It can be a simple process that begins with the categories of the taxonomy in Figure 6.1 on large sheets and groups of faculty brainstorming possibilities. For individual faculty, the taxonomy in this chapter can serve well for analyzing your assessment evidence and can suggest possibilities for improving your courses.

Taxonomy of Possibilities: Focusing on Improvement Potential

For me, the taxonomy began as I studied the paucity of examples from varied programs—examples of strategies or changes that have been identified as possibilities for improving student learning. They were all over the map in terms of categories—curricular changes, pedagogical changes, assessment changes, along with changes to admission standards, changes in the number of required credit hours, changes in program sequences, and changes in requirements. It was initially exciting to see the possibilities, but then it felt overwhelming. I began to think about how to make it manageable for faculty. During this time of information gathering, I also had the good fortune to listen to student ideas about why the evidence showed that they had not learned well or at all. I watched as faculty listened and nodded their agreement with the student responses. Using both faculty and student ideas, I identified categories for the taxonomy in Figure 6.1. The categories and the items within each are not comprehensive, but they give faculty starting points and are open to expansion if our readers come up with other ideas and even other categories. The basic categories with brief explanations include the following:

1. Students (Support, Experiences, Readiness)
2. Outcomes (Clarity, Intentions, Level)
3. Curriculum (Content, Understandings, Skills)
4. Pedagogy (Teaching and Learning Approaches)
5. Assessment (Measuring Achievement of Outcomes)
6. External Alignment (Fit with Discipline, Careers, Workforce)

Figure 6.1. Taxonomy of possibilities for improvement.

Student Match
•• Lack of life experiences connected to course
•• Lack of confidence in course learning
•• No opportunity to question or probe course content
•• Reason for taking the course is unclear
•• Previous courses did not prepare students for new material
•• Lack of motivation
•• Lack of connection of course material with other learning experiences
•• Sense of belonging is missing in the course or program
•• Previous academic experiences were negative
•• Few relationships or family support
•• Challenges and responsibilities interfere with studying
•• Poor or lack of advising for course selection
•• Student information is not informing curriculum/pedagogy/assessment
•• Student data not disaggregated for identification of student groups (first generation, race/ethnicity, full- or part-time attendance, employment, socioeconomic group)
•• Lack of support for students (financial, food/housing needs, disability resources, wellness support, cocurricular support)
•• Lack of student involvement in assessment decisions

Outcomes
•• Level of outcomes is too complex or too simple
•• Outcomes need modification (loosening or tightening)
•• No alignment of curriculum/pedagogy/assessment
•• No review or attention to outcomes as a foundation for learning
•• No clear definition of the outcome(s)
•• Students unaware of outcomes of course or class session
•• Outcomes do not reflect current disciplinary changes

Curriculum
•• No repetition of important concepts, skills, or other material
•• No representation of outcomes
•• No clarity of focus or importance
•• No connection to students’ lives
•• Lacks examples or nonexamples
•• Unintentional redundancies
•• Lacks intern/externships or practical experience
•• Lack of real-world application
•• Lack of connections between content (coherence)
•• No representation of diversity in examples, anecdotes, etc.

Pedagogy
•• No repetition of important concepts, skills, or other material
•• No scaffolding of complex content
•• No connections within course material
•• No connections to other courses
•• Ineffective course materials (handouts, visuals, etc.)
•• No interactive processing of ideas, concepts, etc.
•• Little or no practice opportunities
•• Few or no real-world examples
•• Lacks system for checking on students’ understanding
•• No attention to the diversity of students
•• No opportunity for student agency in learning approaches
•• Class size interferes with active learning
•• Need supplemental instruction
•• Lacks active learning
•• Technology interferes or distracts
•• Group work is not organized or supported
•• Pedagogy not culturally relevant
•• Mode of instruction (hybrid, online, etc.) not matched to student need

Assessment
•• Poor directions for assignments or exams
•• Lack of motivation for completing assignments
•• Lack of alignment with outcomes, pedagogy, etc.
•• Little or no practice prior to the assessment
•• Questions in exams and/or assignments need revision
•• Not enough data for determining problems
•• Rubric not helpful for students
•• Rubric not well aligned with outcomes, assignment
•• Rubrics lack specific descriptions of desired qualities, skills, etc.
•• No attention to equity in assignments
•• No student agency in choice of assignment
•• Unreasonable deadlines

Organization and Logistics
•• Scheduling doesn’t accommodate students
•• Student evidence data not used
•• Sequence of courses doesn’t support learning
•• Classes too full
•• Classroom doesn’t support interaction
•• Classroom lacks support equipment
•• Little or no resources
•• Faculty lacks disciplinary preparation for course
•• Faculty lacks pedagogical/assessment preparation
•• Program/course outdated, needs revision

Within each of the categories, I’ve listed possible reasons for assessment results that do not demonstrate student achievement and that suggest possibilities for improvement. You will be able to review all those possibilities in Figure 6.1, and I encourage you to do so with some colleagues. I feel certain that you will add to those lists and hopefully have some “aha” moments with respect to your own students’ evidence.

I want to extend the possibilities of the taxonomy with a few stories. A disappointed science professor brought his student work to a faculty group who were analyzing student evidence. He was frustrated with the lack of depth in student responses, especially because he had “really worked with the class on environmental issues.” One of our faculty group looked at the exam question and said, “Did you mean for your students to list the issues in the environmental debate?” Stunned, he responded, “No, I wanted them to describe and begin to analyze the issues.” Immediately he looked back with embarrassment at his directions that asked students to “list.” “No wonder they just listed the issues.” In this case, the possibility is in the taxonomy category of assessment with its “Poor directions for assignments or exams.”

Another of my favorite stories is one about a group of faculty meeting to review student work—not norming or calibrating, just searching for ideas to improve their teaching. One of their common outcomes had the verb synthesize in it. We finished the morning, and as we stopped for lunch, we agreed that we had not seen synthesis in any of the student papers. One of the faculty members said sheepishly, “I’m not sure that I know how to teach students to create synthesis.” Everyone chimed in, agreeing with the same concerns. One member volunteered that we would find synthesis in her students’ papers, which would be reviewed in the afternoon, and we did. After reviewing several examples, the group asked the faculty member if she would share ideas for their teaching. A session was quickly scheduled for the entire campus, as we assumed that there would be high interest among many faculty. It was probably one of the most expansive uses of student evidence. We would find this possibility in the taxonomy category of “Pedagogy” as “Lack of alignment with outcomes, pedagogy, etc.” This story also provides a good example of collaboration, which we will urge in the section that follows.

Faculty/Faculty Collaboration

As with most assessment decisions and practices, we strongly recommend collaboration. Assessment is best when several perspectives are considered. It is consistently recommended that PLOs and their meanings be agreed upon by all department faculty. Otherwise, it is quite shocking how differently individuals interpret an outcome. And we cannot ignore the potential for confusion among students if faculty define or describe an outcome differently. The conversations about outcome meanings are powerful, and many faculty come away stunned by the variation in definitions. Such conversations must precede any collaboration to use assessment information to improve student learning.

I have to stop briefly and discuss the issues that influence collaboration among faculty. The situation is somewhat similar to what we do with students: put them in groups and expect them to collaborate. Instead, we must prepare both students and faculty for the complex process of collaboration. For faculty, this process is fraught with politics, jealousies, tenure reviews, institutional and departmental history, and faculty status (tenured, adjunct, etc.). Hopefully for students, the process may be simpler without some of that surrounding tension. So, without writing a book on collaboration, I’m going to offer a few suggestions to those who lead or end up on assessment committees. We want such participation to be productive, efficient, and informative as well as a satisfying professional experience. Here are some starting points:

1. In your first meeting, ask members to introduce themselves by describing the contributions they will bring to the committee’s work. The responses are often surprising and reassuring to the group. I’ve often learned promising information for the group’s success.
2. In that same first meeting, ask members to describe their experiences in a very productive committee and what made the group work well (a kind of appreciative inquiry approach).
3. Follow that discussion (with ideas listed on a board for the group) with the question “How can this committee achieve the qualities of past positive experiences of the group?”
4. Then, of course, ask about opposite experiences—collaboration that did not work—committee meetings that they dreaded attending, a committee from which they resigned due to ineffectiveness, and so on. They will have no problem or hesitation answering this question.
5. Follow this, of course, by asking how this group can avoid such an experience with the new assessment committee or other collaborative group.
6. Spend enough time establishing an agenda for the committee, setting goals and intentions, determining a final product or report, making a schedule, and maybe even creating a rubric for the work.
7. Develop an agreement about what to do if the committee is stuck, or at odds over a decision, or losing momentum, or worn by the meetings, or . . . ? Be sure to involve the group in this process so that the agreement reflects their thinking.
8. Consider expectations of the leader(s) of the committee—when they should make decisions, when they should stop the group, when to move the agenda along, and when to check in with the climate (positive or negative).

In addition to those suggestions, it is important to determine the composition of such committees. Be sure that there are one or more faculty members with assessment expertise, one or more faculty members who are advocates of assessment, someone well respected across the institution, and someone known for “getting things done.” Those are basic membership criteria, and depending on your institutional culture, there are others to consider. All of my recommendations are quite generic and relevant for all kinds of committees and collaborations.

In the Diversity Project at Chapman University, the planning process often involved enormous committees with over 200 total participants, and so they designed a Task Force and Advisory Group Dialogue Guide as part of the campus “Roadmap of Best Practices for Diversity, Equity, and Inclusion” (see chapter 2) (Slowensky, 2019). Their Guide (too long to include in this chapter) includes respectful foundations of behaviors for all members, recommendations for facilitators, and expectations for group meetings. It is an amazing set of strategies, and I urge you to study the guide and use the ideas. Joe Slowensky (ALA) is listed as a resource person in chapter 2 about equity and can be contacted for these ideas. The big idea of this section is to attend to processes of collaboration and discussion so that when faculty do engage in those processes, they experience satisfying and valuable conversations and time well spent.

The importance of collaboration extends even to decisions about space and furnishings. Jody Cormack, vice president for academic programs and dean of graduate studies, and Sharlene Sayegh, director of program review and assessment, both at CSU Long Beach (both ALA), have proposed collaborative spaces in their plans for the new Office for Effectiveness, which sounds so significant for the work that is intended for that office (Cormack & Sayegh, personal communication, November 2, 2019). I recently browsed a brochure for the Global Furniture Group, and the “big idea” in furnishings for higher education is using spaces for collaboration between students, between faculty, between administrators, and for combinations of those groups. The brochure and “sales talk” encourage a “sense of community, creativity and connection” with both learning spaces and meeting spaces (P. Conley, territory manager, personal communication, February 7, 2020).

From there, let’s return to the focus of the work, collaboration for assessment, in which some other processing must occur. Much like I recommend with most faculty assessment conversations, check for resistance or concerns about assessment. If nothing more, let participants speak candidly about their responses to assessment. Do not disagree. Listen and affirm their responses:

It sounds like you share concerns about faculty’s “full plates” as well as your own packed schedule, and this committee adds one more responsibility for you. I think that is legitimate, and we will try very hard to be as efficient as possible. Will you track that intention and let us know if we are using time well or if we need to trim our discussions?

The beauty of involving faculty in decisions about using data to improve student learning is that they have the opportunity to “make a difference.” Be sure to have some very motivating examples of what can happen with the decisions this group makes. In many institutions, the faculty senate reviews and approves assessment plans and approaches. Connections and engagement with the senate are important for an assessment director and/or committee. Omar Safie, director of evaluation and assessment at the University of California, Riverside (ALA), engages with various groups such as the Committee on Educational Policy, the Committee on Courses, and the Graduate Council as a kind of collaboration to be sure that messages about assessment are consistent campus-wide. He sees his role on committees and councils as that of a partner, collaborator, and “encourager” (his word). Safie is determined to promote the importance of wide collaboration and communication for assessment to engage the institution (O. Safie, personal communication, November 8, 2019).

In all kinds of communication with faculty, administration, and students, it will be wise to include examples of very minor changes that have exciting results or big changes that significantly improve student success. Include some hard decisions like those that faculty made to improve the success of their graduates in the following story.

At Victor Valley College, there is a very strong and popular program for preparing future emergency medical technicians (EMTs) and paramedics for an abundance of available jobs. The community college program was well designed with clear PLOs and careful assessment of students’ achievement of those outcomes. Most students were successful in demonstrating outcomes related to medical information and procedures, but one outcome created a problem for many learners: “Apply leadership and communication strategies to effectively manage an emergency situation.” The faculty experimented with a series of learning experiences to prepare students to achieve that outcome, but many students still did not demonstrate the strategies needed to manage emergencies. After several years of changes, innovations, and other efforts, the department altered the admission process for the program. They used a unique evaluation of candidates for the program that assessed potential for leadership, personal responsibility, and social awareness. It became clear that the age of applicants was often related to their evaluation and that applicants with limited work experience struggled with emergencies. The department administrator typically urged unsuccessful applicants to work a few years and then return to reapply, and they were often successful upon return. Their maturity influenced their capacity to succeed (D. Oleson, personal communication, November 17, 2018).

It was obvious in meeting with the program faculty that they were genuinely concerned about the achievement of their students and were determined to support their success. It was also obvious that they had held many discussions, made plans, and tried out experiences to promote that success. There was a lot of pride in being able to make changes that ultimately supported their prospective students. It would have been interesting to watch that program if they had included students in their sessions or faculty from other related programs. If you involve students in your assessment committees, and we hope you do, they will learn great life lessons from participating in an effective faculty collaboration. I am certain that they will also contribute worthwhile ideas or effective alternatives.

One of the ongoing blocks to collaboration is the kind of silo situation that exists on some campuses. There’s a feeling that those outside our discipline don’t have the same understandings, practices, or intentions and may not be able to contribute to the planning. There may be some truth in those feelings, but it shouldn’t stop us from collaborating. Jennifer Hirashiki’s (2019) pairing of faculty can inspire any effort to break down that obstacle of differences. In chapter 3, Swarup Wood described the project conducted by Jennifer Hirashiki (ALA), director of learning, innovation, and teaching excellence at Westcliff University, in which she paired faculty from an MBA (master of business administration) program and an MA TESOL (master of arts in teaching English to speakers of other languages) program to share their assessment evidence (student work) and analyze the results. Faculty from both departments were urged to bring their “strengths, open mindedness, creativity and knowledge” to the process. They found commonalities and differences in their student work, but those differences led to intense conversations about pedagogy and curriculum. As hoped, the cross-disciplinary work brought multiple voices, collegiality, and collective wisdom to the relationships between the departments. They gained insights about assessment, about the other department’s curriculum, and about their own students. Swarat (ALA) and Wrynn (2018) recommended such processes—“they enable faculty to see student learning from multiple perspectives” (p. 4). Hirashiki’s project also encouraged the kind of “constructive conversations” that Hutchings (2010) urged to explore the “meaning and implications of assessment” (p. 6). Without varied kinds of collaborations, I don’t think that we can achieve those kinds of conversations. And vice versa.

Faculty/Student Collaboration

For me, the process of analyzing student achievement evidence is a critically important area for collaborating with students. Student ideas are insightful, practical, and helpful. My commitment to their collaboration began when I observed a group of students who were invited to review data from assessment of an institutional learning outcome in multiple departments. Each department had designed its own approaches to teaching and assessing the associated learning. The first department’s faculty were disappointed with their data and, as you heard earlier, were confused by the fact that the content was taught in their introductory course and never addressed again. Another department had mixed results, and an individual student responded, “Speaking for myself, that question in the exam didn’t ask me for the right information. It didn’t ask me what you were looking for. I actually know a lot about the topic and could have provided a better answer.” That was a tough one to hear, but as the faculty began discussing the exam question, it didn’t take long before they agreed with the student. Together they designed a new question for future exams, and both students and faculty appeared very satisfied with their new inquiry.

What we know about students is that most high achievers know enough to put extra information in their responses to exam questions even when the question doesn’t seem to ask for it. Remember the science professor who asked his students to “list the arguments for” a concept and was disappointed when most students followed his directions. He didn’t mean list; he meant to ask for definitions, for descriptions, for analysis even. Students with experience in academic culture did not stop at listing but went on to describe and give examples. They have the confidence and a kind of exam savvy to add extra information. They do the same with assignments, and their work is so affirming for faculty to review, while students who carefully follow our directions may produce work that is disappointing. Nelson’s chapter 5 on assignment prompts will help you immensely in designing assignment questions and directions that support student success.

What Keeps Us From Moving Forward?

With all of the informative and motivating examples and stories I’ve provided, there is a serious missing link. Even when faculty and students come up with appropriate and significant changes or improvements for their students’ learning, they have a tendency to move on. After all, there are five more PLOs to be studied. There is much enthusiasm for coming up with possibilities for improvement, but little interest or energy for conducting assessment again. We must focus our efforts to assess one more time to determine whether our possibilities did result in improvement. One would think that the group that came up with an improvement strategy would be highly motivated to learn whether the change made a difference. Otherwise we may not be improving; we may be simply changing.

A number of studies have focused on why this happens. The Wabash study (Blaich & Wise, 2011) focused on that dilemma as the researchers realized that participating institutions struggled when trying to identify and implement changes in response to study data. Much as Ewell’s comments noted earlier in this chapter, it became clear to the study associates that the most challenging aspect of the assessment process occurred when faculty, staff, administrators, and students tried to use the evidence to improve learning. The study focus was changed to learn more about that challenge. The researchers translated their experiences into lessons that will help institutions use the evidence collected to benefit student learning. They recommended several practical steps to achieve use of student achievement data:

1. Develop careful communication plans so that a wide range of campus representatives have an opportunity to engage in discussions about the student achievement evidence,
2. Use these conversations to identify one, or at most two, outcomes on which to focus improvement efforts, and
3. Be sure to engage students in helping you make sense of and form responses to assessment evidence. (Blaich & Wise, 2011, p. 3)

Hopefully, Step 3 is at the top of your list at this point in the book. The Wabash study also provided examples of good practices and conditions that have the potential to improve student achievement in the first place, so that students achieve better at the outset. They listed classroom practices for individual faculty and staff:

1. Checking to see if students learned current material before moving on to new material,
2. Designing and communicating clear explanations of their course or program goals and requirements,
3. Providing timely feedback,
4. Challenging students to analyze and synthesize information and make judgments about ideas, experiences, and theories, and
5. Engaging in high-quality nonclassroom interactions that influence students’ growth, values, career aspirations, and interest in ideas. (p. 10)

The Wabash study list goes on, but I intentionally selected those practices and conditions that could serve as possibilities to improve a course or program. I also believe that students provide helpful feedback regarding those possibilities—feedback that could inform change and potential improvement. The kind of changes connected to those possibilities would not be difficult to address. In the next section, I describe how and why to use student work to guide our assessment practices.

Motivations for Using Student Evidence to Improve Learning

At the beginning of this chapter I promised to provide examples and scenarios that would encourage and motivate you to engage in using student achievement evidence to improve student learning. Before you hear those stories, I want to share with you the work of a faculty member at CSUMB, Corin Slown (ALA), assistant professor. She spontaneously designed a framework for faculty as they engage in using student work to improve learning. She has really attended to how we treat students in the assessment process and provided principles to protect students. I have not included the entire framework, but her unique set of Guiding Principles for Using Evidence to Improve Student Learning (Slown, 2019) will be good reminders for your assessment collaborations.

Guiding Principles

1. No harm can be done to faculty or staff or students during the assessment process. As the Latin root assidere suggests, we believe that assessment is a process that is done with and not to our faculty and students. Using assessment effectively means we work alongside programs to support student success by improving learning;
2. Improvements or changes in our practices are evidence-based;
3. Improvements or changes are implemented at course, program, and/or institutional levels;
4. Improvements or changes are intended to enhance student learning;
5. Improvements or changes reach all students, ensuring equitable access to improvements and success; and
6. Improvements or changes are assessed to determine effectiveness. (Slown, 2019)

With Corin’s guidelines in mind, I share stories and examples of using student evidence to improve our practices toward student success.

Listening to Faculty

After a year of intense assessment efforts with much support and resources from the dean’s office and a specially created form for School of Art programs, faculty at Pratt Institute convened to share their findings and insights about the process. I heard the following:

“Students’ achievement of PLO 3 wasn’t strong so we are going to use more reflection processes to develop that learning.”

“The data from the internship supervisors was mixed and confusing—we realized that the supervisors had not been given prior information or an orientation about the new assessment process. They were simply asked to fill out evaluations and use the rubrics. We felt horrible and will meet with them next week and work on the new process.”

“The student evidence on thesis writing needs improvement. We are considering changing the course from one credit to three credits. And maybe move it to a writing course or a collaborative course with faculty from the Writing Program.”

“Student scores on PLO 5 are low so we have initiated a meeting in which we will reevaluate the related course content.” (Pratt Institute, 2018)

I could not help but think that the use of an annual reporting form with questions about how faculty used their data led their thinking toward improvement and toward planning ways to achieve it. I was also enthused as I listened further to the way they talked about their processes. Among their reports were descriptions of student engagement and student feedback in the processes:

“Students expressed that they would like to learn the theoretical concepts earlier in the curriculum.”

“We cocreate and refine our processes with our students.”

“We consistently share our program expectations with our students for their discussion and recommendations.”

“We are considering drafting a rubric with our students, especially for their capstones.”

I wish you could have been sitting with me to hear the reports. You would have heard pride in their efforts and authentic plans for improvement, and witnessed the kind of community that had formed during their efforts. There were no “one person” reports—collaboration was evident even in their presentations. Those reports are assembled in Pratt Institute (2018).

I recently met Jenny Xiang and Kerry Clifford, with Graduate Assessment at the University of California, Merced, at an assessment workshop. We talked about the importance of using student evidence to make improvements, and I described a lack of examples at the graduate level. They enthusiastically described and offered to share several scenarios from their campus. The University of California, Merced is a fairly new campus and has been dedicated to student success through careful assessment processes. From their graduate programs, we offer examples of using student evidence to make changes or improvements:

The Mechanical Engineering Program used direct evidence from students’ self-assessment, part of their annual progress report, to identify a potential need for guidance to help beginning students set realistic expectations for themselves. The program now offers a first-year student mentor program to partially address that need.

The Sociology Program found that the rubrics used for annual progress reports didn’t really provide nuanced information to students about their progress. New rubrics were developed with student support and are now implemented. (Clifford & Xiang, personal communication, November 12, 2019)

I chose one more exciting case of using student learning assessment data to improve student learning to close this chapter. I found it inspiring and a very basic example of what is possible when faculty decide to inquire. In 2019, Sarah Dahlen and Ryne Leuzinger, faculty at CSUMB, received an innovation grant to conduct an assessment project designed to improve student performance on two information literacy outcomes. After studying results from the 2017 university learning outcomes assessment, faculty identified two outcomes where there was room for improvement. Dahlen and Leuzinger (2019) worked with a program to implement targeted instruction toward the following outcomes:

•• Students will organize, interpret, analyze, and synthesize information from sources to achieve intended purpose; and
•• Students will consistently and accurately attribute information to sources.

Librarians designed and provided instruction on these topics to all course sections, and course instructors reinforced the messages. To measure the impact, faculty compared student work from 2017 with work from 2018/19. The early work served as a comparison group that did not receive targeted instruction. Student work was assessed by six faculty using CSUMB’s information literacy rubric. On the first criterion, which measured one target—synthesizing information from sources—faculty found a “small but positive effect” from the targeted instruction. Using the criterion of academic integrity (citing sources), faculty found a statistically significant difference between the two sets of student work, indicating a positive relationship between the instructional intervention and the rubric scores for the target group.

Faculty found this project very satisfying and, from the results, were able to recommend instruction to improve student learning related to two of the university learning outcomes. This example is characterized by collaboration, inquiry into how to improve student performance, resources and other forms of support, and information to be used in future courses dedicated to specific outcomes. Dahlen and Leuzinger were very pleased to report their findings to colleagues. With support, they were able to influence several course offerings and test their instructional change. They could have easily done this project on a smaller scale, but the grant funds supported the larger scale involving more faculty, which added to the credibility of the project. Their experience echoes Linda Suskie’s recommendation to provide grants to faculty whose learning assessment results are not as good as hoped for and in need of improvement.

Had this project involved students in collaboration with faculty, it would have been interesting and quite valuable to hear students’ recommendations for improving the learning. They may have suggested a different instructional intervention. We encourage faculty like Dahlen and Leuzinger to consider having students on their team as they continue studying student evidence and working to improve student learning.

Summary

In brief, I want to remind you of the importance of creating a culture that supports the work of using student achievement evidence for improvement. It is a culture that values inquiry and engages groups of faculty in posing and answering questions. It is a collaborative culture that engages both faculty and students in the process of analyzing student achievement evidence to determine where improvement is needed. Both groups work well together to determine how to improve that achievement. You were provided a taxonomy of possibilities for using student achievement evidence to determine where and what improvement was needed. Or, as I suggested, you may want to use the idea of developing an institutional or programmatic taxonomy as a professional development activity.

Finally, and of utmost importance, I urged you to collaborate with each other and with students to do this work. Trust the insights of both groups for advice on how to improve. The discussions will be powerful, with great potential for real improvement. I have observed obvious pleasure in those collaborations, so the work didn’t look like drudgery or mere compliance. Thanks to the thinking and efforts of so many faculty and students across the country for their guidance and examples in this chapter. I must also acknowledge the wisdom of Linda Suskie for ideas and practical advice. As we have throughout this book, we relied on many of the outstanding NILOA publications for helpful studies, for descriptions of practices, and for ever-expanding guidance. NILOA efforts remind us that we have a growing and powerful community working to improve our pedagogy, curriculum, and assessment.

References

Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy of learning, teaching, and assessing. Addison Wesley Longman.

Blaich, C. F., & Wise, K. S. (2011). From gathering to using assessment results: Lessons from the National Wabash Study (Occasional Paper No. 8). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Bresciani Ludvik, M. (2018). Outcomes-based program review: Closing achievement gaps in- and outside the classroom with alignment to predictive analytics and performance metrics (2nd ed.). Stylus.

California State University, Monterey Bay. (n.d.). Using assessment results. https://digitalcommons.csumb.edu/ulos/

Dahlen, S., & Leuzinger, R. (2019). Impact of library instruction on the development of student skills in synthesis and in source attribution. Center for Teaching, Learning and Assessment.

Eggleston, T. (2020, July). Program review and assessment for continuous improvement: Asking the right questions (Occasional Paper No. 48). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Georgia Tech. (2021). Principles of the Office of Academic Effectiveness. https://academiceffectiveness.gatech.edu/mission-statement/

Green, K., & Swindell, S. (2018, June). Recognition and rewards: Explicitly valuing faculty roles in assessment. Poster presented at the Association for the Assessment of Learning in Higher Education (AALHE), Salt Lake City, UT.

Hirashiki, J. (2019). Creativity as a result of cross-disciplinary assessment. Assessment Leadership Academy.

Hutchings, P. (2010). Opening doors to faculty involvement in assessment (Occasional Paper No. 4). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Hutchings, P. (2019, February). Washington State University: Building institutional capacity for ongoing improvement. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Jankowski, N. A., Timmer, J. D., Kinzie, J., & Kuh, G. (2018, January). Assessment that matters: Trending toward practices that document authentic student learning. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Kezar, A. (2018). How colleges change: Understanding, leading, and enacting change. Routledge.

Kinzie, J., Hutchings, P., & Jankowski, N. A. (2017). Fostering greater use of assessment results: Principles for effective practice. In G. Kuh, S. Ikenberry, N. Jankowski, T. Cain, P. Ewell, P. Hutchings, & J. Kinzie (Eds.), Using evidence of student learning to improve higher education (pp. 51–72). Jossey-Bass.

Kinzie, J., & Jankowski, N. A. (2015). Making assessment consequential: Organizing to yield results. In G. Kuh, S. Ikenberry, N. Jankowski, T. Cain, P. Ewell, P. Hutchings, & J. Kinzie (Eds.), Using evidence of student learning to improve higher education (pp. 73–94). Jossey-Bass.

Maki, P. (2017). Real-time assessment: Meeting the imperative for improved time to degree, closing the opportunity gap, and assuring student competencies for 21st-century needs. Stylus.

Pratt Institute. (2018, February 3). Pratt School of Art assessment plan and reports. Author.

Slowensky, J. (2019, January). The diverse campus: Improving access, equity and the student experience. Workshop presentation for the Western Senior College and University Commission (WSCUC), Pomona, CA.

Slown, C. (2019). Using assessment results: Guiding principles. CSU Monterey Bay. https://digitalcommons.csumb.edu/ulos_assessment-results/1/

Suskie, L. (2015). Five dimensions of quality: A common sense guide to accreditation and accountability. Jossey-Bass.

Swarat, S., & Wrynn, A. (2018, July). Assessment with benefits: Faculty engagement and community building through GE assessment. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

University of California at Merced. (2013). Principles of assessment. https://assessment.ucmerced.edu/campus-principles

7

ADVANCING REFLECTION
Fostering Conversations That Improve Student Success

Dan Shapiro

Maximizing time for reflection throughout the assessment cycle is essential if we want assessment to improve student learning. I learned this lesson in 2010 with the help of Becky Reed Rosenberg, then serving as director of the Center for Teaching, Learning, and Assessment at California State University, Monterey Bay (CSUMB); she held the position after Amy and before I did. I was teaching a course meeting our writing and critical thinking general education outcomes. Becky invited faculty teaching courses meeting those general education (GE) outcomes to participate in an institution-level assessment. The assessment was held in early summer, just days after I had submitted my final grades. All student work gathered for the assessment was in hard copy, and I sat in a windowless room (albeit with a lovely mural of a Monterey Bay landscape) with the other faculty participating in the assessment. Placed along one wall were boxes filled with student work nicely organized in brown campus mail envelopes. Everything about the process was well thought out and organized for efficient scoring of student work. After a typical norming session, we were turned loose on the papers. The only problem: my brain was fatigued from my end-of-semester grading of student work that had ended just days before. It was like starting a second marathon long before my legs had recovered from the previous one.

At the end of the day the group briefly discussed our experiences scoring student work. What did we see? What did we learn? It was like a shot of adrenaline. I welcomed the opportunity to share our experiences reading and scoring student work. Being an interdisciplinary group of faculty, we experienced student work through different perspectives, yet were able to identify common criteria and standards we could use to score student work with satisfying reliability. The next day, after a brief check-in, we went back to work, and the fatigue quickly returned. Like the good assessment practitioner she was, Becky finished the multiday assessment with a survey asking us about our experience, about what we learned, and for suggestions on how to improve the process. I bluntly shared that the brief discussions were energizing, but I found the hours and hours of scoring student work—so soon after finishing the grading of student work from my own courses—exhausting.

One year later. I am doing the assessment again, but there is a major difference. Before we began scoring work, Becky noted that some faculty members had complained about the amount and pace of the work, and so this year was going to be a little different. We would assess less work and take additional discussion breaks to reflect together on what we were learning about student performance, about the rubric, about our own teaching, and about improving student learning. The time flew by. I finished the assessment energized, with a deeper understanding of the GE outcomes we were helping students achieve and a list of ideas for how to improve my course assignments and learning activities:

•• Break assignments into smaller parts
•• Be more explicit about outcomes and expectations
•• Provide students with a rubric
•• Define key terms for students
•• Create new active learning activities that help students better understand and respond to assignment prompts

I soon learned others in the assessment community were making the case for reflection. Pat Hutchings (2010) listed as one of six strategies for engaging faculty in assessment “create campus spaces and occasions for constructive assessment conversation and action” (p. 15). Several years later, David Roscoe (2017) similarly urged us to create time to talk about assessment and student learning. Roscoe advocated, “Assessment should be about changing what happens in the classroom—what students actually experience as they progress through their courses—so that learning is deeper and more consequential” (p. 14). He continued:

[Assessment] data are useful mainly in that they provide the opportunity for faculty to have conversations about improvement. It is the requirement to discuss the data that provides an opening for fruitful dialogue about what is happening in our classes, what our students struggle with, what we are doing that works, and how we might change programs and courses to better steer our students toward the outcomes we are aiming them toward. It is remarkable how much improvement happens when faculty can carve out some time in their busy schedules just to talk about student success. (p. 18)

He even went so far as to claim that “the assessment paradigm has been successful in demanding that [collective conversations about curricula and instruction] take place on a regular basis” and that this is “the most important achievement of the assessment movement” (p. 18). I wholeheartedly agree! But Roscoe also made a controversial claim, one that questioned the usefulness of the kind of assessment work Becky led and that is becoming increasingly common in higher education. Please bear with me while I address this claim, as it points to the critical role of collaboration and shared understanding highlighted throughout this book.

Roscoe claimed that “assessment data can be either cheap or good, but are rarely both” (p. 16). He questioned whether the kind of “cheap” assessment Becky used to engage me and other faculty can produce data good enough to inform decisions. He argued that rather than having faculty generate local assessment results, faculty should instead rely on “the enormous literature in the scholarship of teaching and learning” produced by “people [who] devote their careers to careful measurement and to the testing of pedagogical research questions using sophisticated multivariate methodologies” (p. 18).

Ewell et al. (2017) challenged Roscoe’s concerns about the questionable value of assessment results generated by faculty lacking disciplinary expertise in measuring student achievement and about the time and effort required to produce them. They noted that Roscoe advocated for his position “without reference to a shared framework for assessing learning outcomes” (p. 2). Ewell et al. argued that having the kinds of conversations for which Roscoe advocated—which they agreed are needed—will be more effective if those conversations are informed by assessment of student work from faculty’s own institutions, guided by shared expectations and criteria for student achievement. Further, they recommended using national models like AAC&U’s VALUE framework (McConnell et al., 2019), which can make “cheap” assessment good. This is an approach more and more institutions across the nation are now taking, including my own, as Nelson describes in chapter 5. That is to say, given the right tools, frameworks, and shared expectations for student performance, assessment can be both “cheap” (or at least affordable) and good.

Hutchings (2010), Roscoe (2017), Ewell et al. (2017), and I are in full agreement that no matter how valid and reliable data are, we will not improve student learning if we don’t have conversations that connect what we learn from assessment to how we facilitate student learning. However, making time for those conversations is just the beginning. If we want to maximize the likelihood that those conversations will lead to improved student learning, we need to be strategic about how and when those conversations happen. That is the focus of this chapter.

Before we go there, I need to point out three intentional language choices I use throughout this chapter. First, when I use the phrase “improve student learning” I am implying some or all of the following goals: (a) increasing the number of students meeting learning outcomes at the expected level of proficiency (a goal that equity and inclusion imperatives require we prioritize); (b) increasing the number of students meeting learning outcomes at a higher level of proficiency than expected; (c) increasing how quickly students achieve learning outcomes; and (d) increasing student engagement in learning activities. I do not include achieving learning outcomes more easily, because true learning is usually challenging, and that challenge should be celebrated rather than avoided.

Second, when I refer to those who facilitate learning, I do not limit that group to faculty. Whenever appropriate, rather than faculty I use we or our to be inclusive of faculty, student affairs professionals, staff, administrators, and students employed at our institutions who facilitate learning for other students (e.g., peer tutors). As Tia Brown McNair and her colleagues (2016) emphasized throughout their inspirational book Becoming a Student-Ready College, everybody who serves at an institution of higher learning is a potential facilitator of student learning. Hopefully you are noticing the similarity of this view to Jankowski and Marshall’s (2017) learning systems paradigm introduced in the preface and discussed further in chapter 8.

Third, when I refer to what assessment measures, I use student achievement rather than student learning. Measuring student learning is difficult because it requires that we assess what students know before we start working with them and then measure how much they learn as a direct result of our work with them. I’ve heard many faculty members resist assessment because of that framing and use it to claim that assessment should be left only to those with disciplinary expertise in measuring student learning. (At a plenary session I once attended, the speaker said he responded to this argument by providing “statistical therapy” in which he helped faculty understand that, first, the goal of assessment is not to produce generalizable research for publication in a disciplinary journal; second, assessment is not about proving, it’s about improving; and third, you only need enough data to start a conversation.)

The phrase “student achievement” is less problematic, being limited to determining whether students demonstrate the knowledge, skills, and dispositions we expect of them at the times we expect. For helping me stay attentive to this distinction, I thank Nelson, who, in turn, thanks Deborah Brandt, currently professor emerita of English at the University of Wisconsin–Madison, for helping him do the same. That said, I will use “student learning” when referring generally to the goals of higher education.

The remainder of this chapter has four sections. First, I explore what constitutes reflection, and how reflection—when the right qualities and attitudes are present—improves learning (for ourselves and for our students). Second, I share results from a study we conducted at CSUMB on how participating in collaborative reflection and assessment impacts faculty thinking and action. Third, I share three institutional examples that illustrate different ways reflection can be woven into the assessment cycle. Fourth, I share ideas for bringing student voices into the reflective process, describe some institutional examples, and discuss how reflection benefits from attention to diversity and inclusion.

Although the ideas and strategies described in this chapter will be of most direct help to those of you who are assessment leaders and faculty developers, they will also help faculty and staff understand the broader purposes and benefits of assessment, as well as foster a student-centered assessment mindset that can make assessment more engaging and meaningful. Further, ideas at the end of this chapter should help all faculty and staff—not just assessment leaders and faculty developers—generate ideas for engaging students in assessment, something that all of the authors of this book are eager to see happen more often.

Unpacking Reflection

I wrote this section last because it became clear after Amy, Nelson, and Swarup read an early draft of this chapter that reflection is more complex than I initially described. My coauthors' critique pushed me to explore more deeply not only what constitutes reflection but also how and when reflection can (and should) be built into the assessment cycle. Of much help was Carol Rodgers's (2002) insightful paper "Defining Reflection: Another Look at John Dewey and Reflective Thinking." As you probably guessed from the reference to Dewey, I am going to get abstract here, so please stay with me because these ideas are critical for connecting assessment to improved learning. Further, these ideas will help you understand why assessment works when it does, and why it sometimes does not.


As you read, think about assessments you have experienced that have felt more productive, assessments that have felt less productive, and whether the presence or absence of Rodgers's criteria and attitudes I am about to share might have been influencing factors. Also consider how these ideas relate to our own learning, not just our students' learning. Pulling from John Dewey's (1910/1933) How We Think, Rodgers (2002) described four qualities that characterize reflection:

1. Reflection is a meaning-making process that moves a learner from one experience into the next with deeper understanding of its relationships with and connections to other experiences and ideas.
2. Reflection is a systematic, rigorous, disciplined way of thinking, with its roots in scientific inquiry.
3. Reflection happens in community, in interaction with others.
4. Reflection requires attitudes that value the personal and intellectual growth of oneself and of others. (p. 845)

Listening to Faculty

How might these criteria be reflected in faculty talk?

Making meaning by connecting experiences: "After reading just three or four student papers, I realized that the same problems I was seeing in different papers probably stemmed from the same misinterpretation of the instructor's assignment prompt. Then I suddenly realized my own assignment prompts—and my students' responses—had the same problems. I also realized that the way the prompts were worded unintentionally made the task more difficult for some groups of students than others and was likely contributing to achievement gaps among different groups of students."

Thinking systematically and rigorously, with roots in scientific inquiry: "We brainstormed several explanations for why students may be misinterpreting the graph. But before we start making changes, we plan to hold student focus groups to determine which explanations are likely most valid and important, or if there are additional explanations we didn't consider. We also plan to talk to the learning center student tutors and get their opinions."

Interacting with others: "When you suggested to me that some students performed poorly on the assignment because they weren't doing the reading, and that they might not be doing the reading because none of their previous instructors had ever taught them how to read a math textbook, that opened my eyes to new teaching strategies I had never considered before. I can hardly wait to share this with my department."

Valuing growth: "Even though I have been teaching for 25 years and been recognized as a gifted and effective teacher, what I learned today made me want to go back and change every course I've ever taught. I can't wait to practice these new strategies and share with our group what happens."

The first criterion—reflection as a meaning-making process that helps us make connections—is core and something that should be apparent in examples throughout this book. It is illustrated in the first scenario when the instructor makes meaning of an observed pattern in student performance and connects it to the design of another instructor's assignment prompts, the instructor's own assignment prompts, and broader patterns of student achievement.

The second criterion—reflection as a systematic, rigorous, disciplined activity—is typically not associated with reflection, particularly by those who tend to characterize reflection negatively as "soft" or "fluffy." This criterion is illustrated in the second scenario when the instructor refers to explanations (i.e., hypotheses) for low student achievement, and then proposes methods for collecting additional information to test those hypotheses and inform future action.

Right away I got excited about the third criterion—reflection happens in community, in interaction with others—given that collaboration with our colleagues and our students is one of this book's two central themes. Benefits of collaboration that Rodgers (2002) highlighted include affirming the value of one's experience ("in isolation what matters can be too easily dismissed as unimportant"), being able to see things in new ways ("others offer alternative meanings, broadening the field of understanding"), and support from and with others to engage in inquiry ("when one is accountable to a group, one feels a responsibility toward others that is more compelling than the responsibility we feel to only ourselves") (p. 857). This criterion is illustrated, in part, in the third scenario when the instructor—with the help of a colleague—sees a new approach to improving student performance.

For the fourth criterion—reflection requires attitudes that value the personal and intellectual growth of oneself and of others—Rodgers identified six attitudes:

•• Whole-heartedness: enthusiasm about the topic
•• Directness: confidence and trust in one's experience
•• Open-mindedness: willingness to explore alternative ways of thinking
•• Responsibility: believing reflection should culminate in action
•• Curiosity: the desire to understand things that trouble or baffle us
•• Desire for growth: openness to changing what we think and do

This criterion is illustrated in the fourth scenario by the veteran instructor who demonstrates openness to continued growth and enthusiasm for change. Institutional culture is particularly important for fostering these attitudes. For more discussion and many suggestions for what can be done at your institution to foster such a culture, review Amy’s sections on culture and collaboration in chapter 6.

Phases of Reflection

The preceding criteria and attitudes are just the foundation for reflection. Rodgers (2002) also pulled from Dewey's work six phases of reflection that help us understand how to successfully integrate reflection into assessment. In the following, I list each phase and refer us back to the lessons I learned from Becky that I described at the start of this chapter.

1. Experiencing. Reading and scoring multiple samples of student work was the experience about which my colleagues and I reflected.

2. Interpreting the experience. This refers to initial and "spontaneous" (p. 852) reactions to what we experience. Rodgers and Dewey cautioned that during this stage of reflection, it is important to voice and value all of our initial reactions, but also that reflection should not stop there. The later phases of reflection described in the following should be used to critically explore those initial responses and interpretations. Returning to the assessment Becky facilitated for us, the initial norming and later debriefing sessions created spaces for my colleagues and me to engage in this second phase of reflection. During these sessions we would share our raw experiences, which included things like excitement about the quality of student work; general feelings that something was missing or not quite right, but not being able to name what that was; and surprise or dismay about what students were not doing and feeling sure about the reason why.

3. Naming the problems or questions. This stage is an extension of the previous, which together Rodgers characterized as "observation and description of the experience" (p. 853). Both are about identifying problems, but this phase builds on the previous by explicitly naming specific problems or questions. For example, during our debriefing sessions my colleagues and I named as a problem that students were generally not identifying assumptions at all, and if they did, often not identifying obvious or significant assumptions.


4. Generating possible explanations. In this stage, multiple causes for the problems or answers to the questions are generated. For example, during our debriefing sessions, my colleagues and I would share possible explanations for poor performance (e.g., confusing assignment prompts, insufficient or ineffective feedback on prior work, lack of relevant classroom activities needed to foster understanding, lack of foundational knowledge and skills from prerequisite courses). The periodic debriefing sessions while scoring student work were too brief and "in the moment" to get to this level of analysis in a meaningful way. This phase typically occurred during faculty learning communities that both Amy and Becky would arrange following the summer assessments, as well as during other meetings in which assessment results were disseminated and discussed.

5. Creating hypotheses. Rodgers described this phase as "a more intense and focused version of phase four" (p. 854) that involves

spending enough time with the data of an experience, with the texture and density and grain of it, so that it can emerge in all its complexity. What might have been a reaction based on a simple-minded analysis (phase two) is thus transformed into a possible reflective response based on full knowledge of its ramifications. (p. 854)

As with the previous, this phase of reflection typically occurred during subsequent faculty learning communities and other meetings.

6. Testing/acting. Rodgers noted that educators often stop reflection before this stage, but that for Dewey, it is essential that reflection include action. The creation of CSUMB's assignment guides (described by Nelson in chapter 5) was an action taken in response to the suggestion that assignments that are more transparent and better aligned to intended learning outcomes improve student achievement. This then prompted more action: the design of additional assignment design tools and workshops.

These six phases can also be illustrated by one of CSUMB's assessments of oral communication. When creating the oral communication rubric, faculty struggled with how (and sometimes even whether) information literacy was integral to oral communication. These questions became more prominent when faculty assessed student presentations and reflected on how frequently students did not demonstrate even basic information literacy knowledge and skills (Phase 1: experiencing). It was soon apparent that this was because even when oral communication assignment guidelines were provided to students (and they often were not), students were rarely prompted to demonstrate information literacy skills (Phases 2–4: interpreting, naming the problem, generating explanations). Faculty learning communities on oral communication continued to reflect on these issues and possible solutions (Phase 5: hypothesizing). This prompted a productive collaboration between the oral communication and information literacy coordinators that resulted in resources, workshops, and classroom presentations on synthesizing oral communication and information literacy skills (Phase 6: acting). Further, when the institution developed new oral communication GE outcomes and courses, information literacy was given a prominent role. Preliminary assessment data suggest significant improvements in student achievement of these outcomes as a result.

With this more nuanced understanding of reflection, let's explore in more detail what it looks like in practice and examine its impacts.

Impacts of Reflection on Faculty Thinking and Action

Not long after my experience with Becky, and while still a faculty member, I was asked to design and coordinate our first institution-level assessment of critical thinking. Our academic senate had just created a standing assessment committee charged with, among other things, responding to our accreditor's requirement that we assess these five core competencies: critical thinking, information literacy, quantitative reasoning, written communication, and oral communication. I jumped at the opportunity. At this point, the experiences Becky had created for me and my colleagues, fostered by her ability to listen and respond, became an invaluable model: When I set up the assessment schedule, I made sure there was lots of time for reflection. My own experiences doing assessment, and my later experiences facilitating assessment, made clear to me how important reflection is for connecting assessment to improvement. Discussions about undefined terms, the need for scaffolded assignments, hidden expectations, rubrics, criteria, the clarity of assignment prompts, course sequencing, and numerous other topics naturally generated concrete strategies for improving student learning. Subsequently, this approach—of maximizing time for reflection—became a standard assessment practice that our institution still uses today, although with tweaks here and there to make it even better. Our goal has always been to provide faculty positive experiences with assessment and foster an improvement-focused assessment culture. I am deeply grateful to my predecessors, Amy and Becky, for helping me understand this and demonstrating how to put it into practice.

In planning our initial assessments of the core competencies, I was certain our first round of institution-level assessment would not produce data meeting professional standards for validity and reliability, as traditionally conceived. At the same time, based on my own experiences doing assessment, I was confident faculty would learn things that would help them improve student achievement, an assumption I wanted to examine. That is, I believed our assessment would have consequential validity, which "posits that assessment must be valid for the purposes for which it is used, and"—this is a key point—"that the impacts or consequences of its use should be factors in determining validity" (Kuh et al., 2015, p. 41). Thus, the task became documenting whether (or not) just participating in assessment helps faculty improve student achievement. Although we do not yet have direct evidence of this impact on student achievement, we did generate promising indirect evidence (Canner et al., 2020). I next describe that evidence and how we generated it.

To start, we needed an easy and efficient approach to gathering data on how participating in assessment impacts faculty thinking about facilitating student learning, one that could be integrated into the assessment process without creating significantly more work for faculty. Stephen Brookfield's work provided an answer. If you ever need inspiration and new ideas, read anything by Stephen Brookfield. His ability to clearly explain complex teaching and learning issues and generate practical and effective strategies for supporting student achievement is extraordinary. One of my favorite tools for improving student engagement and learning is Brookfield's (2017) Critical Incident Questionnaire (CIQ), which consists of the following five reflection prompts:

1. At what moment did you feel most engaged with what was happening?
2. At what moment were you most distanced from what was happening?
3. What action that anyone (teacher or student) took did you find most affirming or helpful?
4. What action that anyone took did you find most puzzling or confusing?
5. What surprised you the most? (This could be about your own reactions to what went on, something that someone did, or anything else that occurred.)

These reflection questions can be applied to nearly any learning experience, including faculty doing assessment. I had successfully used Brookfield’s CIQ to inform my own work with undergraduates and quickly realized it could be used to generate indirect evidence of what faculty learned from conducting assessments, both to evaluate the effectiveness of our assessment process and to improve it. Each day of assessment concluded with a half-hour discussion followed by 5–10 minutes for faculty to enter their response to the CIQ into an online form. I reviewed their responses after each day, identified themes and important issues, and then shared those themes and issues with faculty the next morning to prompt further reflection.


Over a 5-year period, roughly 40 different faculty members generated over a hundred pages of reflections prompted by their experiences in seven different institution-level assessments. A group of us coded the responses and saw evidence that collaboration is important and highly valued and indirect evidence that participation in assessment positively influences faculty work with students (Canner et al., 2020; Table 7.1, this volume). Their responses also provided evidence of the attitudes listed previously that Rodgers claimed are essential for productive reflection. For example, in response to the question, "At what moment(s) during today's work did you feel most engaged with what was happening," 80% referred to social interactions. One faculty member responded, "I enjoy collaborating with faculty. It allows for sharing teaching styles, insights, and strategies. It allows me to think more deeply about course and assignment design to benefit student learning." Further, in a survey that was sent several months to a year or more after the completion of the assessments, the majority of faculty noted that participating in the assessment positively influenced their course syllabus design, assignment design, teaching, and conversations with other faculty about teaching and learning. That is, reflecting on assessment led to action.

As you read through the quotes in Tables 7.1 and 7.2, think back to the work of Jonson et al. (2014) that Amy introduced in chapter 1. Notice how those quotes reflect the instrumental influences (cognitive, affective, and affirmative effects) of assessment on faculty practice. This approach to assessment reflects what Metzler and Kurz (2018) have coined "Assessment 2.0," in that it emphasizes Ewell's (2009) assessment for improvement paradigm by focusing on "assessment methods and questions that are meaningful for the faculty who will be conducting the assessment" (p. 17, italics in original). In defining Assessment 2.0, Metzler and Kurz (2018) added, "Another theme of Assessment 2.0 emphasized by other scholars is the key role of discussion, reflection, and collective meaning-making among departmental faculty as they analyze assessment data" (p. 17, italics in original).

Table 7.1 shows the results of the coding analysis of daily and final reflections submitted by the 40 faculty members who participated in the seven institution-level assessments from 2014 to 2018. Daily reflections were collected at the end of each day of assessment, and a final reflection was collected at the end of the last day of assessment. Each of the assessments lasted 3–5 days. Table 7.2 shows the results of the coding analysis of responses to a postassessment survey sent to the 40 faculty members who participated in the seven institution-level assessments from 2014 to 2018. Twenty-five (62.5%) of the faculty members responded to the survey.

TABLE 7.1
Daily and Final Reflections

Question: At what moment(s) during today's work did you feel most engaged with what was happening?
Theme: Social interaction (80% affirmative responses for theme)
Representative quote: "I enjoy collaborating with faculty. It allows for sharing teaching styles, insights, and strategies. It allows me to think more deeply about course and assignment design to benefit student learning."

Question: What is something you experienced during today's work that you found most affirming or helpful?
Theme: Norming (74% affirmative responses for theme)
Representative quote: "While we all operate within a certain set of standards, the norming process reminds us that we don't always share identical values. Deliberately applying a rubric can help to diminish these personal distinctions."
Theme: Rubric (63% affirmative responses for theme)
Representative quote: "Writing assignments and corresponding rubrics can be utilized as powerful teaching tools. The assignments and feedback given to students across this university should be rigorous, clear, and consistent."

Question: What is something you experienced during today's work that you find most puzzling or confusing?
Theme: Rubric (61% affirmative responses for theme)
Representative quote: "We're still fleshing out the rubric, so there is a [little bit of] confusion—but that's kind of the point of this whole process."

Note. n = 40 faculty members.

TABLE 7.2
Postassessment Surveys

Question: How did participation in the ILO 1 assessment work affect your course/syllabus design?
Affirmative responses: 72%
Representative quote: "I think more about the overall course rather than the assignments first—that is, what do I want my students to get out of the course."

Question: How did participation in the ILO 1 assessment work affect your assignment design?
Affirmative responses: 88%
Representative quote: "Not only did I change wording in assignments to more directly target the learning outcomes, I also dropped an assignment that did not directly address learning outcomes and added two other assignments that allow students to think more critically and use information literacy skills."

Question: How did participation in the ILO 1 assessment work affect your teaching and other interactions with students?
Affirmative responses: 92%
Representative quote: "I was able to communicate oral and written communication goals in an informed, confident manner. I particularly appreciated that I was speaking from a platform that included my peers rather than being based only on my personal training and perception."

Question: How did participation in the ILO 1 assessment work affect your work or other conversations with faculty/staff?
Affirmative responses: 80%
Representative quote: "I've become more bold in pushing for greater clarity from my colleagues in their work with students—and for more opportunities for collaboration to achieve more parity in the work we assign."

Note. n = 25 faculty members.


There are other approaches to generating reflection and collective meaning-making in addition to Stephen Brookfield's CIQ that you might find helpful for work at your own institution. For example, look at any book on service-learning pedagogy, and you'll find at least one chapter focused on reflection, and that reflection is threaded throughout all the chapters (e.g., Cress et al., 2013; Eyler et al., 1996; Jacoby, 2014). Popular frameworks include "What? So what? Now what?" (Rolfe et al., 2001) and DEAL (adapted from Ash & Clayton, 2009; Kleinhesselink et al., 2015):

•• Describe. Describe what you see in student work. What skills, knowledge, and attitudes are students demonstrating through their work and how are they demonstrating them? What confusions and misconceptions are they demonstrating?
•• Examine. Examine your feelings and thoughts. What do your thoughts and feelings imply about how students learn? What do they imply about how you and other educators facilitate learning? What do they suggest about what you, other educators, and students are doing well and could be doing better?
•• Articulate learning. What did you/we learn? How did you/we learn it? Why is it important? What will you/we do because of it?

What you should aim for are questions that are broad enough to surface multiple perceptions and perspectives while keeping the conversation focused on the evidence of student learning most relevant to the learning outcome being assessed. It is also important to maintain focus on what educators can do differently, individually and collectively, to improve student learning while minimizing, or avoiding altogether, lamenting about things that lie outside our direct influence (e.g., the quality of high school courses).

Using Reflection to Advance Teaching and Assessment in the Curriculum and Cocurriculum

To illustrate other ways assessment leaders and faculty developers can use reflection to enhance assessment in curricular and cocurricular contexts, I next describe three examples of how reflection was incorporated into institution-level assessment of student achievement: (a) "metareflecting" to promote culturally responsive teaching at Honolulu Community College, (b) building a foundation for assessing integrative knowledge at CSUMB, and (c) developing outcomes and rubrics for assessing cocurricular learning outcomes at the University of San Diego.


Metareflecting at Honolulu Community College

Chiara Logli (ALA) is the institutional assessment specialist at Honolulu Community College (HonCC). Logli (2020) published a gem of an article that describes how HonCC makes use of "'organic' initiatives that 'supplement' the assessment work already being done" (p. 28). As a result of doing the pilot project described in the paper, Logli learned that "assessment data emerge spontaneously across numerous spaces on campus, can be collected in flexible manners, and can be analyzed through an equity lens in order to support a diverse student population" (p. 28). That is, rather than assessment being an add-on, Logli and her colleagues used what was already happening and explored ways of triangulating different kinds of assessment data to prompt reflection about improving student learning, with explicit attention to diversity and culturally responsive teaching. The assessment data she triangulated were (a) mandated course-level assessment reports that included what kinds of teaching approaches faculty use in their classes; (b) an institution-wide activity on student engagement during which participants reflected on how they can promote student engagement and then provided written feedback at the end of the activity; and (c) multiple discussions about assessment that occurred across campus, including assessment taskforce meetings, an assessment town hall, an assessment showcase, and assessment workshops. Logli noted,

These dialogues were not preconceived assessment activities per se; instead I saw them as glimpses into the many faculty conversations that informally take shape across campus about improving student learning and institutional performance. Capturing them was a challenge that was worth facing. (p. 21)

She described all of these activities and their results as previously sitting in "compartmentalized spaces" (p. 20). What Logli reported can be thought of as a metareflective activity that involved monitoring spaces where faculty were reflecting individually and collectively about their courses, student engagement, and what they were learning from assessment for the purpose of identifying and promoting culturally responsive classroom practices. The course assessment data and faculty reflections from the different meetings were coded using criteria for universal design for learning (Rose & Meyer, 2006), which Logli equated with culturally responsive teaching and assessment. Logli then used the coded results to gauge the extent to which faculty are open to and engage in the kinds of culturally responsive practices Amy described in chapter 2, such as providing students with different options for demonstrating achievement of learning outcomes. Logli found that "all three data sources revealed that faculty members diversify their assessment methods within a course, but students are rarely given a choice on how to be assessed" (p. 24). Logli then shared her results with faculty for further reflection, during which faculty

shared specific examples of current practices and upcoming plans around culture-based assessment in their classroom, based on what they learned about different assessment methods. They expressed interest in diversifying their assessment methods further and providing choice to students on how to be assessed, as they realized that our diverse population benefits from it. Deeper considerations about student learning and matters of equity lie underneath this combination of assessment methods. (p. 24)

There are several things to note here. The first is the inquiry-driven, evidence-based process Logli used to assess the extent to which faculty employed practices associated with universal design for learning. This was not simply talking generally about teaching. Rather, it involved making meaning out of a theory-based analysis of the work faculty were doing with students in their classrooms and faculty reflections about that work. Second, note that this is not assessment of student achievement, but a valuable assessment of faculty actions that have important impacts on students and their learning. Third, this example highlights that there are likely many places where reflection about student learning is already happening at our institutions and can be woven into assessment, if we look for them and listen. Similarly, Logli's work highlights that there are many activities happening at our institutions that might not be perceived as assessment. But if there is a deliberate and thoughtful effort to make those activities visible as assessments, connect them, and reflect on what they reveal, those activities can change institutions.

Reflecting on Integrative Knowledge at CSUMB

Joanna "Jo" Morrissey was the integrative knowledge undergraduate learning outcome coordinator at CSUMB. Morrissey facilitated a 3-day exploratory assessment session with faculty that included, but also went far beyond, just scoring student work (J. Morrissey, personal communication, August 6, 2019). To start the assessment, she asked faculty to reflect on three questions:

1. What does integrative knowledge mean to you?
2. What does integrative knowledge look like in your teaching?
3. What does integrative knowledge look like in your department?


The assessment team then reviewed the faculty assignments and the institutional integrative knowledge rubric (which was adapted from the AAC&U integrative learning VALUE rubric) and were asked to reflect on three more questions:

1. What components of the rubric do the assignments address?
2. What components of the rubric do the assignments not address?
3. Do assignment instructions explicitly state what faculty want students to do?

These questions prompted participants to ask further questions, including the following:

1. Questions about the difference between rubric components: What is the difference between "connection to experience" and "transfer"?
2. Questions about how different levels of performance are distinguished: What's the difference between "Beginning" and "Developing"?
3. Questions about alignment within the rubric: Advanced-level language does not build on language used in "Developing" and "Proficient."

They also asked bigger picture questions, such as "How will the university ensure that students are getting taught the things the assessment rubrics imply they should be learning?" and "Does the rubric appeal to all disciplines/programs?" Informed by these prior reflection activities, the 3-day assessment concluded with faculty self-selecting into subgroups that completed one of the following actions:

•• Draft a glossary for the integrative knowledge rubric;
•• Create a visual to help others understand what constitutes integrative knowledge; or
•• Revise the integrative knowledge assignment guide for faculty, and draft an assessment report to share with the broader campus.

In designing this assessment, Morrissey demonstrated how reflection can be integrated into assessment and how assessment can be an engaging, multifaceted activity that goes far beyond just scoring student work with a rubric.

Altogether, my institution's first years of institution-level assessment made clear the value of building time for reflection before, during, and after assessing student work. In my consultations with colleagues in academic affairs and student affairs, I found myself repeatedly saying, "If your assessment will not prompt discussions that change the way you facilitate learning, the assessment is not worth doing." Viewing assessment through this lens was often revelatory for those who thought of assessment solely as the scoring of student work for accountability purposes and not in terms of identifying and implementing strategies for improving student learning. Of course, hand in hand with identifying opportunities for improvement, assessment will also affirm our pedagogy.

Reflecting on Cocurricular Learning at the University of San Diego

Margaret Leary (ALA), assistant vice president for strategic initiatives and programs in the Division of Student Affairs at the University of San Diego (USD), led a multiyear, highly collaborative effort that produced an exemplary set of cocurricular learning outcomes and rubrics (M. Leary, personal communication, January 9, 2020). USD's five cocurricular learning outcomes—Authentic Engagement, Courageous Living, Identities and Communities, Purpose, and Well-Being—and rubrics were developed by teams of faculty, staff, and students in fall 2014 and then revised in fall 2016. In the introduction to their cocurricular learning outcomes and rubrics, they describe their process as follows:

Drawing on this research and the 2014 Student Affairs strategic planning process, the Strategic Oversight Committee on Student Success charged a group of faculty, staff, and students from divisions across the institution to (1) articulate learning outcomes that complement the Undergraduate Learning Goals and Outcomes, and integrate the entire student experience; (2) coordinate existing and/or design new, intentional, seamless opportunities for students to achieve those outcomes; (3) incentivize student engagement in the outcomes; and (4) assess the outcomes. (University of San Diego, n.d., p. 1)

USD used a highly collaborative process—coupled with reflection questions to guide the work—to develop the cocurricular learning outcomes and rubrics. In 2014, over 60 faculty, staff, and students together reviewed USD's vision and mission and then were asked to reflect on two questions:

1. What do we want our students to learn outside the classroom?
2. How is that learning distinct to USD's mission and values?

The group then engaged in collective brainstorming and organized their responses to the questions into a limited number of categories. Through a subsequent survey to the participants, the categories were ranked and narrowed down to the five outcomes identified previously. Next, after participating in a professional development workshop on creating rubrics and using the AAC&U VALUE rubrics as a model, participants created a rubric for measuring student achievement of each of the cocurricular learning outcomes.

Two years later, in 2016, the campus engaged in a second reflective activity designed to make the rubrics more effective and promote their use. "Knowledge communities" were created, again consisting of faculty, staff, and students. This time they were asked to reflect on the following questions regarding each cocurricular outcome:

1. What are the challenges to assessing student learning using the rubric?
2. How can the rubric be improved to address those challenges?

This reflective work was coupled with the gathering and sharing of published research on each of the learning goals. The rubrics were revised accordingly and made available on USD's website. USD's current efforts involve developing resources and professional development on how to effectively facilitate student learning activities designed to help students achieve the cocurricular learning outcomes. The highly collaborative, cross-divisional nature of these efforts exemplifies both themes of this book: creating connections across assessment, teaching, curriculum, and cocurriculum in collaboration with our colleagues and our students.

Reflecting With Students

In reviewing the literature on involving students in assessment, I've found that much of it focuses on helping students better self-assess their own and their peers' achievement or having students work with faculty to improve how those faculty facilitate learning in their own classes (e.g., Cook-Sather et al., 2014; Signorini, 2014; Werder & Otis, 2009). This is very important work with a rich body of literature, including Alverno College's foundational work on student self-assessment (Alverno College Faculty, 1994, 2000). However, my focus here is on the value of involving students in assessment at the program and institutional levels. That is, how can students help us foster more cohesive learning systems? Here are some ideas:

•• Invite students to review a signature assignment used in multiple courses for clarity of purpose, context, task, criteria, and alignment to intended learning outcomes.
•• Invite students to assess student work from multiple sections or courses with faculty (if you do this, do make sure to both anonymize work and discuss with the students the importance of maintaining confidentiality).
•• Invite students to department meetings where assessment data are discussed and action plans developed.
•• Invite students to participate in program-level assessment projects.
•• Work with student groups to translate assessment results into student-friendly documents they can discuss and use to brainstorm approaches to make learning activities more effective and cohesive across programs and divisions.
•• Invite students to sit on departmental and institutional assessment committees.
•• Invite students to town halls to reflect on annual assessment results.

University of the Pacific offers a creative approach to this last item.

Engaging Students in Assessment of Core Competencies at University of the Pacific

University of the Pacific (Pacific) is evolving a creative and promising approach through their Annual Core Competencies Forum, designed "to bring faculty, staff, and students together to discuss how to improve student learning in the core competencies" (University of the Pacific, 2019, para. 1). The forums, held annually since 2016, have been dynamic and evolving events coordinated by different people over the years (J. Low & J. Grady [both ALA], personal communication, December 5, 2019). In some years, to encourage student attendance, each member of the assessment committee has been asked to bring at least two faculty members and at least one student. Staff are also invited to the forums, and seating is arranged so tables contain a mix of students, faculty, and staff.

The 2018 forum focused on quantitative reasoning and information literacy. Participants were given sample questions from a locally developed quantitative reasoning test that had been administered to 100 Pacific seniors and sample questions from the Standardized Assessment of Information Literacy Skills (SAILS), a national exam on information literacy administered to over 350 Pacific seniors. Participants in the forum were asked to reflect on the following:

1. How would you answer the question?
2. What percentage of Pacific seniors should be able to answer this question correctly? (This was followed by data on the number of Pacific seniors who had actually answered the questions correctly.)
3. Why might Pacific seniors answer this question incorrectly?


I particularly like that last question because of its implications for pedagogy. Group polling was used during the session so responses could be shared and discussed.

The 2019 forum focused on oral communication. In that forum, participants were asked to reflect on the following:

1. How well do students deliver classroom presentations?
2. How do employers describe seniors' representation of themselves in interviews?
3. How are we teaching and assessing oral communication? Does it go beyond the podium and poster presentation?

During this forum, participants, including students, viewed recordings of student presentations and then scored them using the institutional oral communication rubric. Scores were recorded with an online polling application so forum participants could see how they scored relative to others in the room. This helped build a shared understanding of oral communication and how to assess it. Participants were then asked to reflect on what types of oral communication skills they thought were most important for students (e.g., presentations, communicating in teams, interviewing, or other) and what student actions they assess for oral communication (e.g., formal presentations, class discussions, etc.). To help participants transition to the action phase of reflection, these exercises were followed by a panel discussion with faculty who had been identified through review of syllabi as taking creative approaches to teaching and assessing oral communication skills. Those faculty shared their course-specific oral communication rubrics and how they facilitated learning for oral communication skills.

One of Pacific's web pages contains a comprehensive list of action items for all the core competencies (University of the Pacific, 2017), such as the following for quantitative reasoning:

•• The subcommittee changed the learning outcome in response to forum feedback to incorporate the theme of "authentic context."
•• The quantitative reasoning subcommittee will oversee the creation of a quantitative reasoning measure that can be administered through Canvas.
•• The subcommittee plans to use an AAC&U rubric to score quantitative reasoning work samples. (para. 5)

What's exciting and important to notice about Pacific's approach is their explicit inclusion of students; involvement of the broader campus, including staff; creative reflection questions that allow for real-time sharing and discussion of responses (which also helps build shared understanding); connecting assessment results to action; and keeping the forums dynamic, interactive, and responsive to participants' interests and needs.

Engaging Students in Assessing and Revising the Capstone Requirement at CSUMB

Moving back to CSUMB, I have a final example to share, one particularly strong in the extent to which students were involved in a comprehensive assessment and revision of a degree program's capstone model. Faculty and students in the human communication (HCOM) major worked together to study and improve the program's capstone requirement (D. Reichard, personal communication, December 17, 2019). With faculty member David Reichard facilitating, students were recruited to develop, participate in, and evaluate a pilot capstone model. The call for student participants invited them to "help design and 'test run' the new capstone course as an experimental pilot" and "help generate a report about the pros and cons of the new model for presentation to the HCOM faculty." In collaboration with Reichard, students were actively involved. They conducted a literature review; interviewed students and faculty; developed and distributed a survey; reviewed other capstone models; and codeveloped, tested, and evaluated a pilot capstone model. The students also enrolled in credit-bearing courses so they could fulfill their own capstone requirement through their participation. Together, the students and Reichard addressed the following research questions:

•• What do students think about the capstone experience in HCOM?
•• What do faculty members who teach capstone think about the capstone experience in HCOM?
•• What do students/faculty want and expect out of a capstone experience?
•• How does concentration (e.g., creative writing, journalism) relate to a student/faculty expectation of the capstone?
•• How does the capstone have an impact on or relate to the post-HCOM experience, including career and graduate school?

Examples of responses and recommendations include the following:

•• Ensure that students receive proper mentoring.
•• Make sure that HCOM students receive proper scaffolding in terms of what is expected in capstone and in documenting the capstone experience and/or process.
•• Topically organized seminars around a common theme are a promising model we should explore.
•• Account for the role of concentrations in shaping a new model because it is important to many students and faculty.
•• Be aware of issues around expertise of faculty in terms of advising and clarity of expectations for students about what concentration means for the capstone.
•• Understand that HCOM students would like some choices—in terms of writing intensive research projects or alternatives, including a community-based connection.

In a presentation to the faculty, Reichard and the students shared their research findings, described the idea of themes, and recommended a new capstone structure. With full faculty approval, Reichard and the students piloted the new design the following semester. The pilot was a success, and the basic format is still in place today. The department also continues to assess and improve the model, with student input.

This is a far different process—and very likely far more effective—than had a faculty committee met a few times over a semester and developed a model with no or minimal student input. This process exemplifies what was in essence a 2-semester-long, carefully designed process of assessment and reflection during which student voices were given significant attention and influence. Embedded in a capstone course and guided by a well-crafted line of inquiry, the process created opportunities for students and faculty to reflect extensively and in depth over the 2 semesters during which the pilot program was developed and assessed. Another powerful aspect of this approach was explicit attention to equity and inclusion, which I describe and discuss next.

Equity and Inclusion

Every opportunity for reflection in the assessment cycle is also an opportunity to promote equity and inclusion in assessment. The greater the diversity of voices in any stage of assessment—from writing outcomes to evaluating the impacts of changes in practice on student achievement—the more likely assessment will benefit all learners. In fact, the School of Humanities and Communication was explicit about this, noting in their call to students that they were "looking for a diverse group of students to pilot this new model" who "will reflect the diversity of the HCOM program." As Amy makes so clear in chapter 2, when groups gather to reflect during any stage of the assessment cycle, we need to make sure those groups are as diverse and inclusive as possible. Montenegro and Jankowski (2020) made the case for critical assessment that bridges culturally responsive assessment and socially just assessment in calling for, among other things, "including the voices of students, especially those who belong to minoritized populations or those whose voices can often be left unheard, throughout the assessment process" (p. 9).

If you (or others you know) need further convincing, consider reading Scott Page's (2019) book, The Diversity Bonus: How Great Teams Pay Off in the Knowledge Economy. He pointed out that "on the complex tasks we now carry out in laboratories, courtrooms, board rooms, and classrooms, we need people who think in different ways" (p. 14). Page argued for a linkage between cognitive diversity—which he defined as "differences in information, knowledge, representations, mental models, and heuristics"—and "better outcomes on specific tasks such as problem solving, predicting, and innovating" (p. 14).

Before I conclude this chapter, I want to share a quote from Logli (2020, p. 22) that I find powerful in how it frames the utility of diversity. Please keep this in mind as you reflect on your own assessment work:

Educating for diversity is focused on "discerning how most effectively and sustainably to enable the differences of each to make a difference for all" (Hershock, 2010, p. 38). It means "shifting the locus of concern from how much we differ from each other to how we might best differ for one another" (p. 38).

Conclusion

There are three things about reflection that I hope this chapter makes clear. First, if we want assessment to improve student learning, reflection must culminate in action. Second, reflection is more than just asking, "What do you think?" The more care and consideration we give to developing good reflection questions and integrating them throughout the assessment cycle, the better. Third, the greater the diversity and number of people at our institutions—including students!—we can engage in reflective practices throughout the assessment cycle, the more likely assessment will motivate collective action that improves student learning. This is challenging work. As Carol Rodgers (2002) so clearly pointed out,

the process of reflection, and the steps of observation and description in particular, require the teacher to confront the complexity of students and their learning, of themselves and their teaching, their subject matter, and the contexts in which all these operate. (p. 864)


Scaling up efforts from improving courses to improving curricula and the institution as a whole only compounds those complexities. These are messy challenges that can be best and most enjoyably addressed by engaging as many stakeholders as possible in collective reflection and action. There is clearly no "one size fits all" approach to this work. If you think back to previous chapters of this book, you should see multiple opportunities for students, faculty, and staff to reflect throughout the assessment cycle:

•• Equity in assessment (chapter 2): reflecting while designing assessments to ensure they are equitable and inclusive
•• Learning outcomes (chapter 3): reflecting while writing and revising learning outcomes
•• Alignment and coherence (chapter 4): reflecting while developing, analyzing, and/or improving curriculum maps
•• Assignment prompts and rubrics (chapter 5): reflecting while creating assignment prompts and rubrics, and reviewing and refining assignments
•• Using evidence (chapter 6): reflecting while developing and implementing action plans based on evidence of student achievement

I hope this chapter has sparked ideas about how you can intentionally and effectively weave reflection throughout all assessments at your institution.

References Alverno College Faculty. (1994). Student assessment-as-learning at Alverno College. Alverno College Institute. Alverno College Faculty. (2000). Self-assessment at Alverno College. Alverno College Institute. Ash, S. L., & Clayton, P. H. (2009). Learning through critical reflection: A tutorial for service-learning students. Ash, Clayton, and Moses. Brookfield, S. D. (2017). Becoming a critically reflective teacher. John Wiley & Sons. Canner, J., Dahlen, S., Gage, O., Graff, N., Shapiro, D. F., Waldrup-Patterson, V., & Wood, S. (2020). Engaging faculty in assessment of institutional learning outcomes. Assessment Update, 32(3), 1–16. https://doi.org/10.1002/au.30210 Cook-Sather, A., Bovill, C., & Felten, P. (2014). Engaging students as partners in learning and teaching: A guide for faculty. John Wiley & Sons. Cress, C. M., Collier, P. J., & Reitenauer, V. L. (2013). Learning through serving: A student guidebook for service-learning and civic engagement across academic ­disciplines and cultural communities. Stylus.

Driscoll et al_Advancing Assessment for Student Success.indb 186

03-06-2021 07:18:44 PM

advancing reflection  

187

Dewey, J. (1933). How we think. Prometheus Books. (Original work published 1910) Ewell, P. T. (2009). Assessment, accountability, and improvement (Occasional Paper No. 1). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment .org/wp-content/uploads/2019/02/OccasionalPaper1.pdf Ewell, P., Hutchings, P., Kinzie, J., Kuh, G., & Lingenfelter, P. (2017, April). Taking stock of the assessment movement—“Liberal Education,” winter, 2017. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/wp-content/ uploads/2019/08/Viewpoint-Ewelletal.pdf Eyler, J., Giles, D. E., Jr., & Schmiede, A. (1996). A practitioner’s guide to reflection in service-learning: Student voices and reflections. Vanderbilt University. Hershock, P. (2010). Higher education, globalization and the critical emergence of diversity. Philosophical Inquiry in Education, 19(1), 29–42. https://journals.sfu.ca/ pie/index.php/pie/article/view/244 Hutchings, P. (2010). Opening doors to faculty involvement in assessment (Occasional Paper No. 4). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). Jacoby, B. (2014). Service-learning essentials: Questions, answers, and lessons learned. John Wiley & Sons. Jankowski, N. A., & Marshall, D. W. (2017). Degrees that matter: Moving higher education to a learning systems paradigm. Stylus. Jonson, J. I., Guetterman, T., & Thompson, R. J. (2014, Summer). An integrated model of influence: Use of assessment data in higher education. Research & Practice in Assessment, 9, 18–30. https://www.rpajournal.com/dev/wp-content/ uploads/2014/06/A1.pdf Kleinhesselink, K., Schooley, S., Cashman, S., Richmond, A., Ikeda, E., & McGinley, P. (Eds.). (2015). Engaged faculty institute curriculum. Community-Campus Partnerships for Health. Kuh, G. D., Ikenberry, S. O., Jankowski, N. A., Cain, T. R., Hutchings, P., & ­Kinzie, J. (2015). Using evidence of student learning to improve higher education. John Wiley & Sons. Logli, C. (2020, Winter). Culturally Responsive Assessment 2.0: Revisiting the quest for equity and quality in student learning. Research & Practice in Assessment, 14, 19–31. https://www.rpajournal.com/dev/wp-content/uploads/2020/02/A2.pdf Mc Connell, K. D., Horan, E. M., Zimmerman, B., & Rhodes, T. L. (2019). We have a rubric for that: The VALUE approach to assessment. Association of American Colleges & Universities. McNair, T. B., Bensimon, E., Cooper, M. A., McDonald, N., & Major, T., Jr. (2016). Becoming a student-ready college: A new culture of leadership for student success. John Wiley & Sons. Metzler, E. T., & Kurz, L. (2018). Assessment 2.0: An organic supplement to standard assessment procedure. University of Illinois and Indiana University, National

Montenegro, E., & Jankowski, N. A. (2020). A new decade for assessment: Embedding equity into assessment praxis (Occasional Paper No. 42). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/wp-content/uploads/2020/01/A-New-Decade-for-Assessment.pdf

Page, S. E. (2019). The diversity bonus: How great teams pay off in the knowledge economy. Princeton University Press.

Rodgers, C. (2002). Defining reflection: Another look at John Dewey and reflective thinking. Teachers College Record, 104(4), 842–866. https://doi.org/10.1111/1467-9620.00181

Rolfe, G., Freshwater, D., & Jasper, M. (2001). Critical reflection in nursing and the helping professions: A user's guide. Palgrave Macmillan.

Roscoe, D. D. (2017). Toward an improvement paradigm for academic quality. Liberal Education, 103(1). https://www.aacu.org/liberaleducation/2017/winter/roscoe

Rose, D. H., & Meyer, A. (2006). A practical reader in universal design for learning. Harvard Education Press.

Signorini, A. (2014). Involving undergraduates in assessment: Assisting peers to provide constructive feedback. Assessment Update, 26(6), 3–13. https://doi.org/10.1002/au.30002

University of San Diego. (n.d.). University of San Diego Co-Curricular Outcomes. https://www.sandiego.edu/student-affairs/documents/assessment/Full_CCLO_Document_12.12.16.pdf

University of the Pacific. (2017). University Assessment Committee responds to feedback from Core Competency Forum. https://students.pulse.pacific.edu/x84794.html

University of the Pacific. (2019). 4th Annual Core Competencies Forum. https://calendar.pacific.edu/event/4th_annual_core_competencies_forum#.Xeen0JNKjow

Werder, C., & Otis, M. M. (Eds.). (2009). Engaging student voices in the study of teaching and learning. Stylus.

8

ADVANCING COMMUNICATION

Sharing Stories That Improve Student Success

Dan Shapiro

Using Evidence of Student Learning to Improve Higher Education (Kuh et al., 2015) has a chapter on communication that begins with a quote by the renowned journalist Sydney Harris that I highlighted immediately: "The two words 'information' and 'communication' are often used interchangeably, but they signify quite different things. Information is giving out, communication is getting through" (p. 211, italics added).

This chapter is about getting through. I use this phrase throughout this chapter, so let me be clear on what I intend it to mean. Getting through indicates that our intended audience understands what we are communicating, and why it is important. If we want to increase our chances of getting through, we must communicate with a clear purpose in mind, and then intentionally and appropriately tailor our communications for our audience. For communicating assessment, the ultimate purpose must always be supporting student learning and success, even when the pathways to those ends are long and winding.

Additionally, throughout this chapter I use the phrase communicating assessment as a general term that encompasses the myriad different ways communication occurs throughout the assessment cycle, such as articulating learning outcomes, sharing insights that emerge while scoring student work, writing assessment reports, advocating for recommendations stemming from assessment, and informal conversations informed by experience with any aspect of assessment.

I doubt anybody in the assessment community questions the importance of effectively communicating assessment results and the inferences drawn from them—reflected in ubiquitous calls to "close the loop."

But not enough of us give sufficient critical thought to how and why we communicate throughout all stages of assessment. That includes contextualizing assessment by describing the unique institutional structures, processes, and cultures in which we conduct assessment and how they influence what we assess and why, an idea that recurs throughout this chapter. At the same time, I am happy to report that many in the assessment community are now calling for advancing communication—and providing much good advice.

This chapter has six sections. First, I begin with a vignette to help us all—assessment leaders, faculty developers, faculty, and staff—think holistically about communicating assessment in a context that you might not initially perceive as connected to assessment: our day-to-day interactions with students and each other. Second, I share ideas on how we can increase the likelihood our communications will get through by paying attention to our unique institutional and programmatic contexts and tailoring our communications for specific audiences, purposes, and occasions. Third, I present recommendations from leaders in the assessment community for writing assessment reports. Fourth, I describe two examples of institutions that ensure their assessment reports are read and used. Fifth, I explore the following strategy for addressing communication challenges created by the sometimes conflicting assessment purposes of improvement and accountability: rather than just the documentation of student achievement, define accountability as also communicating how we use evidence of student achievement to improve our institutions. The sixth and final section addresses two important questions: How can students help us better communicate assessment and, more importantly, what about assessment should we communicate to our students?

Throughout this chapter I rely heavily on the innovative and influential work coming out of the National Institute for Learning Outcomes Assessment (NILOA), for which I am deeply grateful. Like the previous chapter, the ideas and strategies described in this chapter will be of most direct help to those of you who are assessment leaders and faculty developers. But faculty and staff—and even students—will find the ideas and strategies in this chapter essential for keeping assessment focused on student learning and success.

Lessons From Vivian

No one has better demonstrated for me the sound of assessment communication getting through than Vivian Waldrup-Patterson, the former assessment, inclusion, and administrative analyst and current director of the Center for Teaching, Learning, and Assessment (TLA) at California State University, Monterey Bay (CSUMB).

Vivian is CSUMB's assessment engine. She supports all institution-level assessments of student learning outcomes. That support includes organizing sessions during which faculty refine rubrics, gathering and preparing student work for assessment, monitoring norming and assessment sessions, listening to and participating in faculty conversations while they assess student work, supporting analysis of assessment data, reading assessment reports, brainstorming strategies for improving student learning, and many intangibles, the full extent of which I am sure I am not aware.

Over the many years I have worked with Vivian, I have observed scenarios like the following more times than I can remember. A student sticks their head in the office asking for the location of a room. Vivian replies, "Oh, that's down the hall, I'll walk you there." As they walk, she immediately strikes up a conversation, asking the student how their day is going, what they are majoring in, how long they've been at the institution, and so on. Based on what the student shares, she might tell them about campus resources they may be interested in, student activities and groups they could join, faculty members they should contact, or upcoming campus and community events. She also frequently helps organize and attends student events, interacting with students in the same way, motivated by her genuine interest in their lives and experience at CSUMB and her desire for all students to feel welcome and supported. As a result, students often stop by just to check in, say hi, ask for advice, or share a recent success.

Vivian is exactly the kind of caring educator Amy described in chapter 2. She exemplifies how someone without traditional teaching responsibilities can be part of a broader network of people working together to improve student learning and success.

Interactions like these may not appear connected to assessment, but in important ways, they are. Because of her position and responsibilities, Vivian reads assignment guidelines and student work from programs across the institution, participates in evidence-based faculty and staff discussions about student achievement, supports the analysis of assessment data and preparation of assessment reports, and generates strategies for improving student learning. As a result, she maintains a deep understanding of the institution's vision, mission, values, goals, and learning outcomes, and how to advance them. She is able to use this understanding to tailor her conversations for any purpose, audience, and occasion. When her audience is students, she helps them experience the institution as an integrated learning system within which students, faculty, student affairs professionals, staff, and administrators facilitate student learning and share responsibility for making our institutions more "student-ready" (McNair et al., 2016). Further, she regularly has these same kinds of interactions with faculty and staff.

Of course, Vivian is an exceptional person and I think few can match her talents (she is also an active community volunteer and organizer, holds an MBA, and has completed a PhD in organizational leadership). An institution can help all its members cultivate their "Vivian-ness" and ability to get through by clearly communicating what we want our students to learn, why we want them to learn those things, the extent to which they are (or are not) learning those things, and evidence-based strategies for how we can better facilitate that learning, together. Effectively communicating assessment is central to fostering institutional Vivian-ness.

Jankowski and Marshall's Degrees That Matter (2017) became a beacon for me from the moment I understood what they meant by a learning system paradigm in which all members of the institution are equally responsible for and contribute to student learning, ideally with a shared understanding of what they want their students to learn and how they collectively facilitate that learning. McNair et al. (2016) made a similar case in their equally inspiring book Becoming a Student-Ready College: A New Culture of Leadership for Student Success. My enthusiasm for the learning system paradigm stems from my unease with the hierarchies that too often exist in institutions of higher education, often manifesting from perceptions that some groups (e.g., faculty) are more responsible for facilitating student learning than others and some types of learning outcomes are more important than others. This often results in some groups being treated as having higher status than others, something most prevalent in common tensions between academic affairs and student affairs (e.g., Keeling, 2004, 2006; Kuh & Banta, 2000; LePeau, 2015; Schuh & Gansemer, 2010).

Yet if we want our institution's learning systems to be better aligned and more coherent (as Amy advocates in chapter 4), we need to think of every employee as an educator. And if we want to collectively better support student learning, we must think about how and why we communicate throughout the assessment process. Whenever we document assessment results, we must think about the broader learning system we want to influence. We must think about the production, dissemination, and use of assessment results as a unifying component of an evolving learning system we are increasingly making more transparent, aligned, coherent, and relevant to an ever-changing world. We need to get through.

Communication is key here because shared understanding can only be achieved through effective and ongoing communication among different stakeholders with different backgrounds, interests, priorities, and communication styles. That is no simple task. Communication is highly complex because it is not just one thing, at one level, and at one point in time. Rather, if we want assessment to improve student learning, we must get through, internally and externally, at multiple levels, and throughout the assessment cycle.

We must always be asking ourselves, with whom do we need to communicate and for what purpose? Linking back to the two themes of this book, creating connections across assessment, teaching, curriculum, and cocurriculum in collaboration with our colleagues and our students, it is not hard to see the importance of effective communication for fostering both. The remainder of this chapter presents practical suggestions you can use to help everybody at your institution cultivate their inner Vivian-ness. Let's begin by considering our institutional contexts.

Context Matters

No two institutions are the same, nor are any two programs within an institution. The better we understand the unique contexts and cultures of our institutions and their programs—and this is equally important for assessment leaders and all faculty and staff—the more likely our assessment communications will get through and improve student learning.

Using Evidence of Student Learning to Improve Higher Education (Kuh et al., 2015) provided insightful, big-picture thinking about the important role communication plays in bridging assessment and student learning, drawing attention to the importance of context. In particular, in the chapter "From Compliance to Reporting to Effective Communication: Assessment and Transparency," Jankowski and Cain argued for a more expansive view of transparency, one focused on "sharing and using assessment results in consequential ways" (p. 201). They pointed out that colleges and universities "need to effectively communicate and share information with both external and internal audiences in ways that provide context, answer questions, inform decisions, and respond to audience needs" (p. 202, italics added).

Jankowski and Marshall (2017) built on these ideas, advocating for a particular approach to communication, which they referred to as augmentation: "a means by which implicit ideas are made explicit and prior conceptions and misconceptions are revealed" (p. 152). They noted the importance of paying attention to how assessment within programs is a highly contextualized activity. Prompting faculty and staff to acknowledge, appreciate, and communicate how assessment is uniquely constructed and implemented within any department should foster a collaborative culture of assessment. Focusing on local context and needs avoids the kind of "McDonaldization" of assessment in which "the standardized process, and the frequency of the demand to provide assessment data, [leads] faculty to become frustrated with, or disengaged from, the process and their own data" (Metzler & Kurz, 2018, p. 11).

Jankowski and Marshall suggested departments consider two questions when determining how and what to communicate about assessment:

1. Why do we do what we do to foster and evaluate student learning?
2. How does what we do benefit our students? (p. 155)

Asking these questions, they say, helps faculty unpack their assumptions about their courses and curricula and "puts student learning back at the center of our work" (p. 155). It should also help them understand what is unique about their program and tailor assessment and communications for that unique context.

Jankowski and Marshall's chapter is full of great examples, such as Salt Lake City Community College (SLCC), which posts short videos of faculty explaining how they have used assessment to change courses and programs. For example, in the video Classroom Assessment Becomes Program Assessment, an accounting professor explains how she "turned course assessment results into program assessment information by looking at the same outcomes across courses" (Salt Lake City Community College, n.d.). This video, like others on the website, is clearly tailored to SLCC's unique institutional context. For example, the speaker first refers to how assessment at SLCC had been done over the past 10 years. She then refers to program-level insights she gained from looking at a particular sequence of SLCC accounting courses, not by how they are sequenced in the curriculum, but by how they align with academic student learning outcomes (ASLOs). The video is short and appropriately does not provide a lot of detail, and so achieves several goals in addition to making it more likely it will be viewed in its entirety: It quickly and effectively provides an example of how to do assessment for improvement, communicates faculty enthusiasm for assessment, and stimulates interest, hopefully prompting listeners at the institution to seek more information about the assessment.

Later in this chapter I'll share additional examples to illustrate these ideas. But before we go there, we need to consider the variety of audiences, purposes, and occasions that should inform how we design and implement our communications.

Audience, Purpose, and Occasion

In From Gathering to Using Assessment Results: Lessons From the Wabash National Study, Blaich and Wise (2011) profiled institutions that have demonstrated successful use of assessment results. Not surprisingly, they found that those exemplars had developed "careful communication plans so that a wide range of campus representatives have an opportunity to engage in discussions about the data" (p. 3).

This is not something that happens easily and without careful thought and planning—but it is essential if we want assessment to improve student learning. What should be obvious is that using assessment results and communication are two sides of the same coin: Successful implementation of strategies that improve student learning requires effective communication. The more institutions can foster a culture that integrates assessment, communications that get through, and action, the better they will be able to create effective learning experiences for their students. Further, it is important that faculty and staff understand and expect that the primary purpose of any assessment communication should be to help them improve their own students' learning.

What is the best way to communicate assessment? Nelson often wears a tie covered in fonts of varying sizes and orientations with the phrase "it depends." This is also Nelson's stock response that I have heard him repeat many times to any question that begins with "What should I do when . . . ?" or "What is the best way to . . . ?" It strikes me also as the best response to the question of how best to communicate assessment: It depends, on the context. In helping people discover what the answer to their question depends on, Nelson, who was trained as a rhetorician, invariably starts asking questions about audience, purpose, and occasion. That effective communication requires we understand our audience, purpose, and occasion is not new thinking—far from it. Yet it is surprising, when designing and implementing communications—or, for that matter, when designing and implementing assessments—how few of us think deeply about audience, purpose, and occasion beyond "we have to do this because our chair, dean, provost, and/or accreditor demands it." The better we are able to articulate the audience, purpose, and occasion for any assessment—including during the initial design phase as well as when deciding how to disseminate results—the better we will be able to communicate assessment.

Thinking back to the opening vignette about Vivian, one of the reasons she communicates so effectively with students is her clear purpose: supporting students, their learning, and their success. If we want our institutions to be more effective learning systems, improving learning must be the ultimate purpose of any assessment communication effort. That said, if the distance between a specific communication effort and direct interactions between educators and students is large (e.g., when communicating assessment to accreditors), learning-centered communication strategies may not be obvious. My hope is that the ideas I am sharing in this chapter will help you develop learning-centered strategies for communicating assessment for any audience, purpose, and occasion.

What is the variety of audiences, purposes, and occasions that might influence how we communicate assessment? Let's start by revisiting Amy's list of possible audiences and purposes from chapter 6, to which I've added a few more:

• Faculty may use assessment to determine what—and which—students are and are not learning to focus improvement efforts.
• Faculty may use assessment to determine what they and their programs are and are not doing well to focus improvement efforts.
• Programs may use assessment to make learning outcomes more relevant to student, society, civic, and employer needs.
• Community members may use assessment to decide whether to start or continue collaborating or to improve a collaboration.
• Students and/or their parents may use assessment to determine whether to start or continue attending an institution.
• Program administrators may use assessment to inform employers.
• Funders may use assessment to decide whether to start or continue support of programs or institutions.
• Administrators may use assessment to respond to criticism.
• Boards of directors may use assessment to promote programs or institutions.

Deciding what we want to happen as a result of our communication should shape not just what we communicate but also how we communicate. Of course, there are many different occasions during which assessment communication happens, such as:

• academic senate meetings,
• institutional town halls,
• faculty and staff forums,
• faculty learning communities,
• workshops,
• college meetings,
• department meetings,
• committee meetings,
• advisory board meetings,
• chair meetings,
• dean meetings,
• cabinet meetings,
• "water cooler" and "break room" conversations, and
• advising meetings with students.

Exigence: Motivating Action

Nelson also helpfully points out that an important aspect of occasion is exigence: the urgency that motivates a response. For assessment, that should include the critical question about student learning that initiated and shaped the assessment. Making the critical question explicit and explaining how that question emerged from conversations about student achievement is an important component not only of designing assessment but also of determining how inferences based on the review of student work, survey results, and other kinds of data will be communicated, reflected upon, and, ultimately, acted on. Further—and particularly if there is more than one audience, purpose, and occasion—multiple channels of communication may be needed, as well as multiple kinds of communication materials.

The central piece of advice here is to pause before starting any assessment. Take time to determine your primary assessment goals—your purpose for communicating—and your strategies for meeting them. Also think about exigence: What is motivating the response? Reflect on what you are assessing, who needs to know, what they need to know, why they need to know—and, most importantly, what you hope will happen because they know. Then tailor your communication for your audience and occasion. At the same time, don't let a lack of clarity prevent you from starting. Your goals and strategies may evolve as you engage in assessment, as the following example illustrates.

Developing and Disseminating an Information Literacy Infographic

An interdisciplinary group of CSUMB faculty met to determine the focus of the current year's information literacy assessment. The conversations quickly turned to experiences suggesting that native students had stronger information literacy skills than transfer students, and whether transfer students would benefit from additional or different support than native students (exigence!). In addition to wanting to communicate what was learned from the assessment to other librarians and the library dean, they also wanted to communicate results to other faculty and staff for the purpose of increasing their ability to support student achievement of the institution's information literacy outcomes. As the assessment unfolded, they also realized their colleagues at the local community colleges were an additional audience.

Because it was too often the case that assessment reports written in the form of a standard research paper were not widely read, the group decided to develop an infographic for broader distribution. Once the scoring of student work and review of various surveys were finished, they consulted with the institution's data visualization expert, who asked them two key questions: "Who is your audience?" and "What are the key findings you want to highlight?"

Answering the latter question was particularly interesting because the answer evolved as they developed the infographic—even though the assessment had already been completed. (I will note that both questions prompted productive reflection, bringing us back to chapter 7 and highlighting yet another opportunity to weave reflection into the assessment cycle.) Here is the final list of key findings from the infographic, which you can download from CSUMB's Digital Commons (Dahlen, 2020):

• Additional development is needed for both native and transfer students to reach proficiency with information literacy skills at CSUMB.
• Transfer students are more likely to have attended four or more library sessions across all institutions attended.
• The skills transfer students acquired at previously attended institutions are not always transferable to using the library and conducting research at CSUMB.
• Information literacy skills taught in library information sessions at local community colleges are similar to those taught at CSUMB, but differences in tools, databases, and resources necessitate additional instruction.

The infographic ends by answering the question "How can you support students in developing information literacy skills?" with the following:

• Contact the CSUMB librarian for your program to request information literacy instruction for your proseminar course or other required course at the beginning of upper-division coursework.
• Intentionally scaffold information literacy instruction into your curriculum so that it appears in a few relevant and required classes throughout the program. Avoid a haphazard approach in which library instruction is requested for only certain sections or for nonrequired courses, as this allows some students to fall through the cracks.
• Be specific in your assignment prompts about what kinds of sources students can use and what they are expected to do with them.
• Let your students know that they can get assistance at the Library Research Help Desk.
• Encourage transfer students to attend CSUMB's Transfer Student Orientation, which now includes a segment on the library.

After the infographic was finalized, it was shared with the Assessment Committee, the Academic Senate, all librarians, faculty who contributed student work to the project, the deans and the provost, and local community college librarians. The infographic has proven to be an effective strategy for quickly communicating assessment and motivating action.

In addition to communicating with infographics, many in the assessment community recommend that we think about communicating assessment as an opportunity for storytelling, an idea I turn to next.

Writing Assessment Reports That Get Through

This section presents advice from three nationally recognized assessment experts on how to write assessment reports that are more effective, more likely to be read, and more likely to prompt actions that improve student learning.

Telling Our Stories

Natasha Jankowski and Gianina Baker, the former director and current acting assistant director, respectively, of NILOA, and their colleagues at NILOA urge us to take a storytelling approach to communicating assessment (Baker et al., 2019; Jankowski, 2019; Jankowski & Baker, 2019). In their presentation to the Association of American Colleges & Universities, Baker et al. (2019, slide 48) shared the following:

Assessment as part of our story: Makela and Rooney (2012) write of telling a story—that assessment "is essentially a process of telling a story about our people, programs, and services" that are told to many different people, in many different ways, with many different foci. They argue that the "storyline surrounding an assessment ultimately aims to include enough evidence to make well-reasoned assertions . . ." (p. 2).

How do we do this? To help us answer this question, the NILOA website has a user-friendly (and free and downloadable) resource, Building a Narrative via Evidence-Based Storytelling (Jankowski & Baker, 2019). You should download and read the entire document as it is full of great ideas and recommendations that will help you get through. I share some highlights to whet your appetite in the following list.

The document contains prompts and suggestions for addressing the following elements of assessment storytelling:

• Evidence-based storytelling development
• Tips for report writers
• Evidence-based storytelling handouts
• Narrative peer-review process
• Feedback sheets (for providing peer feedback on assessment reports)

What you should notice about these elements is that they go beyond simply writing a standard report. For example, in advocating—and outlining a process—for peer reviewing assessment reports as they are being developed, Jankowski and Baker provide a mechanism for including more perspectives and voices in those communications. Also notice that the peer-review process itself becomes yet another productive layer of communication, with its own audience, purpose, and occasion (think back to chapter 7—this is a reflection opportunity!). As I will discuss at the end of this chapter, inviting students into the review process undoubtedly will increase the quality and accessibility of assessment reports.

One need only look at Jankowski and Baker's (2019) framework for developing an assessment report and guiding questions to realize theirs is a different, engaging, and innovative approach:

• Audience,
• Argument,
• Evidence,
• Story and language,
• Characters,
• Plot,
• Data visualization, and
• Awareness and discovery. (pp. 4–7)

For example, for "story and language" Jankowski and Baker prompted us to reflect on these questions: "What kind of story are you telling? (i.e., compliance, improvement, loss, struggle, quest, tragedy, fantasy, etc.)? Is your audience interested in that type of story? What context is needed for readers to understand the story? What is the setting?" (p. 5). For "characters," they prompted us with these great questions: "Who are the character(s) in your story? (Is there a protagonist in your story—someone who is driving the action and/or someone with whom your audience is likely to identify?) What are the motivations of the characters?" (p. 5). You get the idea.

Although the process of creating an engaging and effective assessment story may sound like extra work, it won't be wasted work, as far too often happens when generic assessment reports are uploaded to assessment management systems and viewed only (if viewed at all) so a box can be checked for compliance purposes. Well-written reports will be read by more people and will be more likely to get through. Producing clear reports resulting from an authentic, meaning-making process is essential to fulfilling our responsibility to improve student learning. In that sense, the time needed to produce an effective assessment report should not be considered extra work but, rather, required work.

A Communication Lesson From Linda Suskie

Only Linda Suskie could read a New York Times editorial on gun control and pull from it powerful lessons on communicating assessment. Do read her full blog post (Suskie, 2017) as well as the opinion piece that inspired it (Kristof, 2017). I will share a few highlights and Suskie's insights. She began her blog post by clarifying that she is not advocating any particular stance toward gun control; her focus is on how well the author, Nicholas Kristof, triangulated and presented different kinds of data to make his case. In doing so, Suskie identified strategies Kristof used that transfer well to assessment:

• Focus on using the results rather than sharing the results, starting with the report title. Kristof titled his piece "How to Reduce Shootings" and not something like "What We Know About Gun Violence." Similarly, assessment reports can be titled something like "Improving Students' Ability to Identify Assumptions" rather than "What We've Learned About Student Achievement of Critical Thinking."
• Focus on what you've learned from your assessments rather than the assessments themselves. Every subheading in Kristof's article states a conclusion drawn from his evidence. There's no "Summary of Results" heading like what we see in so many assessment reports.
• Go heavy on visuals, light on text. Aim for a fast read, with points literally jumping out at the reader. On the same note, aim for graphs rather than tables of data.
• Pull together disparate data on student learning. Kristof and his colleagues pulled together data from a wide variety of sources. The visual of public opinions on guns, toward the end of the article, brings together results from a variety of polls into one visual. Yes, the polls may not be strictly comparable, but Kristof acknowledged their sources. And the idea (that should be) behind assessment is not to make perfect decisions based on perfect data but to make somewhat better decisions based on somewhat better information than we would make without assessment evidence. So if, say, you're assessing information literacy skills, pull together not only rubric results but relevant questions from surveys like NSSE [National Survey of Student Engagement], students' written reflections, and maybe even relevant questions from student evaluations of teaching (anonymous and aggregated across faculty, obviously).
• Get students involved. I don't have the expertise to easily generate many of the visuals in Kristof's article, but many of today's students do, or they're learning how in a graphic design course. Creating these kinds of visuals would make a great class project. But why stop student involvement there? Just as Kristof intended his article to be discussed and used by just about anyone, write your assessment report so it can be used to engage students as well as faculty and staff in the conversation about what's going on with student learning and what action steps might be appropriate and feasible.

Suskie's last suggestion, "Get students involved" (a topic I will return to at the end of this chapter), is one I strongly recommend and connects strongly to that important theme of this book.

I appreciate that Suskie focused our attention on how data are presented. Institutions in all sectors of society are paying more and more attention to this important component of communication, with many of them hiring data visualization specialists. The information literacy infographic discussed previously is one such case. This topic goes far beyond the goals of this chapter. However, like so many of the topics I've touched upon in this chapter, there are excellent resources already out there for us. One frequently recommended resource is Stephanie Evergreen's outstanding work. She has a very informative website (stephanieevergreen.com) that includes an active blog. For a deeper dive, read her excellent book, Effective Data Visualization (Evergreen, 2019). Evergreen writes, "I wrote this book to make your data stories shine" (p. 9), and she delivers. The unique organization of the book makes it extremely user-friendly, with chapter titles like "When a Single Number Is Important: Showing Mean, Frequency, and Measure of Variability"; "How Are We Better or Worse Than a Benchmark: Displaying Relative Performance"; and "Reporting Out: Sharing Your Data With the World." Further, each chapter begins with a set of learning objectives, such as these from the chapter on benchmarks:

• List multiple methods for showing progress against a benchmark,
• Choose a method for displaying a benchmark that fits your needs for precision and context,
• Understand how adding a benchmark increases interpretability of the data, and
• Add performance data and visuals to your existing graphs. (p. 89)

The last thing we want is for our good assessments—and the actions they might inspire—to get lost in a sea of uninterpretable tables. So as you develop your assessment stories, also pay close attention to the pictures—they can make the difference between getting through and not getting through. Next I highlight two institutions whose communications are getting through.

Institutions That Are Getting Through

When communicating assessment, it is important to think beyond just creating assessment reports and to also develop strategies for ensuring those reports are read and motivate action. In the following I share two examples from institutions that do just that, followed by more helpful guidance from NILOA.

Washington State University (WSU)

I have always admired the work of Pat Hutchings, and I credit her with providing me and countless others with the moral and intellectual support—through her writings and inspiring presentations—to focus our careers on teaching, learning, and assessment, something I initially did not see as appropriate for a graduate student at a large, research-focused university. So I was not surprised that Hutchings's (2019) excellent NILOA case study on WSU contained many assessment communication gems, one of the biggest being what a WSU dean referred to as "an economy of sharing" (p. 3).

In this case study, Hutchings described the careful and deliberate process of building an assessment-for-improvement culture at a large, diverse, and decentralized multicampus system with a culture that values independence and limited top-down control (note the attention to context here). WSU established the Office of Assessment of Teaching and Learning (ATL), where "one of the most valued services is assistance with data analysis and visual presentation" (p. 3). She went on to note that as "studies of the assessment of student learning outcomes have made clear, getting data is much easier than making sense of and acting upon it," a challenge that is exacerbated for disciplines that place more value on qualitative data (p. 3).

WSU has an impressive website (atl.wsu.edu) that provides practical resources for individuals and programs wanting to do assessment that improves student learning. As I perused their website, I was happy to see that the resources in the "Assessment Data Analysis Toolkit" contained sections asking practitioners to define the purpose, context, and audience for their communication efforts.

WSU's economy of sharing is all about communication and action. The reality is that for a large institution, assessment is not just "one and done" (p. 3). Rather, building a culture of assessment that improves student learning is an ongoing and continuously evolving effort that requires multiple levels of communication (sharing) for a multitude of purposes. Some communications occur through assessment reports and associated documents, whereas other communications occur face to face during institutional, college, and department meetings.

Amy's chapter 6 on using evidence of student achievement provides more details.

What the WSU case study does so well—although without explicitly naming it—is highlight the many communication pathways needed to foster a culture of assessment for improvement, pathways that include, but also go far beyond, disseminating assessment reports. For example, ATL has an infographic that shows "how we can help you" and a list of clearly articulated assessment roles and responsibilities for department chairs/directors, faculty assessment coordinators, faculty participants in assessment, deans and associate deans, and ATL itself. They also produce and publish institution-level summaries of program assessment reports, such as the WSU-Wide Summary, 2019 Undergraduate Degree Program Assessment Reports (Washington State University, 2019). This report is full of informative visuals, like the frequencies of different kinds of responses to assessment results (e.g., assignment development or revision, new course development, changes in advising, faculty/TA professional development, and continuance of effective practices). The more institutions can conceptualize communication as "an economy of sharing," the better they will be able to use assessment to improve student learning.

California State University, Fullerton

Another institution with an "economy of sharing" is California State University, Fullerton. Su Swarat (ALA) is Fullerton's associate vice president for institutional effectiveness. As I was developing this chapter, I had a hunch she would be a good person to talk to about communicating assessment results, and I was not disappointed (S. Swarat, personal communication, December 5, 2019). To start, Fullerton's format for program assessment reports is impressively simple and effective. Most reports are a brief two or three pages, with the following categories:

1. Student learning outcomes
2. Methods and measures
3. Criteria for success
4. Results
5. Improvement actions

Showcase reports are available online to internal and external audiences (Fullerton, n.d.a). I encourage you to peruse them; they are interesting and very readable. But Fullerton does not stop there. After program reports are submitted, the institution also produces an annual University Assessment Report—a meta-analysis of the institution's program-level assessments.

The report includes the following:

• A list of all of the institution's faculty and staff assessment liaisons (with photos) who guide assessment work in the colleges and divisions
• Statistics on submission rates (100% in 2017/18!)
• What kinds of outcomes were assessed
• How many of those outcomes were aligned to university undergraduate and graduate learning goals
• An evaluation of the quality of the assessment reports across programs (aggregated results only)
• Examples of best practices from across the institution
• Institution-level goals for improving assessment practices

In addition, they hold an annual assessment forum, which is an "event focusing on campus-wide assessment that brings faculty and staff together to share best practices, engage across disciplines to learn from each other's experiences, and promote additional assessment activities and opportunities" (Fullerton, n.d.b, para. 4). Are those forums effective, and do they get through? You can form your own judgment because Fullerton also publishes forum survey results on their website. Their 2019 forum had 79 attendees, of whom 52% responded to the survey. Ninety-three percent of the respondents said the forum increased their knowledge of effective assessment practices. Takeaways included the following:

• I need to consider cultural relevance in my assessments.
• Increased awareness of equity issues in assessment.
• I also liked hearing the presentations of assessments.
• Assessment doesn't need to be burdensome.
• For most valuable aspect: Talking with, hearing from faculty about their assessment methodologies.

The materials and resources on Fullerton’s assessment website are worth spending some time with. What I found particularly exciting was Fullerton’s transparency: they are comfortable sharing their assessments broadly, confident in their knowledge that they are using assessment results—whatever they may be—to improve student learning. Further, they examine their institutional assessment processes and identify strategies for improving their ability to support program-level assessment. Additionally, because all of the information and resources described are publicly available on Fullerton’s website, their economy of sharing extends far beyond their institution.

NILOA's Transparency Framework

Now is a good time to mention NILOA's transparency framework for communicating assessment, which features both WSU's and Fullerton's assessment websites. NILOA describes this framework as follows (NILOA, 2019, para. 1):

The Transparency Framework addresses the question: How might assessment of student learning efforts be made more visible? One avenue adopted by many campuses is to share relevant information about student learning on the institutional website. Just as making student learning outcomes more transparent is a work in progress, so is this Framework. The Framework is not a checklist to be followed but rather a guide to suggest priorities and possibilities with an eye toward communicating meaningful information about student learning that will be useful to various audiences in an online format. An institutional website that is transparent conveys information of student learning in a clear and coherent manner to a target audience. The Transparency Framework provides guideposts to consider in online communication.

The transparency framework is useful for communicating and—as more institutions implement the framework—understanding, comparing, and contrasting different institutional contexts. NILOA recommends presentation of information that is clearly worded, prominently posted, updated regularly, and receptive to feedback. They advocate for communicating the following on institutional websites: student outcomes statements, assessment plans, assessment resources, current assessment activities, evidence of student learning, and use of student learning evidence. NILOA’s transparency framework website contains links to many institutional examples that are worth perusing. Some are more advanced than others, and some are clearly updated more regularly, but all contain good examples that can help any institution better communicate assessment. WSU’s website is particularly strong with regard to communicating how they use assessment results (many other institutions still had this page identified as “under construction”). Before we turn our attention to students, I want to take some time to discuss a significant challenge to assessment communication created by the ongoing tension between communicating for improvement and communicating for accountability.

Communicating for Improvement and Accountability

In their 2019 AAC&U presentation, Baker et al. (2019) asked a great question: "As learning organizations, should colleges and universities share their improvement stories?"

Like them, I believe the answer is yes. When communicating assessment, there are two seemingly distinct purposes we know all too well: (a) improving our ability to support students' intellectual growth, sense of place and purpose in the world, career readiness, and ability to contribute to their communities and broader society and (b) demonstrating to accreditors and those paying for education (state and federal governments, students and their families, donors, etc.) the extent to which we are fulfilling our institutional missions. Different audiences, different purposes, different occasions. Can the same assessments be used to satisfy both? Again, I believe the answer is yes. All of us—and particularly those of you involved in your institution's accreditation work—can resolve the tension between communicating for improvement and communicating for accountability if we define accountability as communicating to internal and external audiences how we use evidence of student achievement to improve our institutions (this should remind you of Su Swarat and Fullerton's work described previously).

Pat Hutchings's (2019) Washington State University case study mentioned previously directly raises the tension between improvement and accountability. She noted that following praise for WSU's assessment of student learning outcomes, WSU's accreditors requested that in the future WSU take assessment communication further by reporting "'student learning outcomes (rather than the process of assessing student learning outcomes)' and asking that summary achievement data from degree programs serve as a university metric" (p. 7). Hutchings continued by noting that, "like many other institutions today, WSU is seeking ways to better communicate what students are learning to various stakeholder groups and decision makers within and beyond the institution." But WSU's administrators see the challenge. One noted, "We're going to have to figure out a way to meet expectations," but still maintain buy-in from faculty. Another administrator wants "to avoid a system which encourages programs to plan assessment that emphasizes success at the expense of genuine inquiry" (p. 7). Then Hutchings described the heart of the challenge:

To support continual improvement, assessment needs to be relatively nimble, driven by faculty questions, innovative and responsive to changes in students, disciplines, and society (i.e., local context is important). To support accountability, assessment needs to be stable and focused on long-term trends. (p. 7)

This tension is real and creates challenges for determining both how to design assessments and how to communicate results.

In Degrees That Matter, Jankowski and Marshall (2017) advocated for assessment that, as Hutchings described, is "nimble" and "driven by faculty questions." They noted that in writing about assessment they intentionally do not use the word results because "assessment 'results' imply an end, whereas meaningful assessment is an ongoing process of curiosity and question-asking about learning" (p. 154). They continued:

Assessment teams need more than results. Communicating how the program assesses and why provides context that can frame discipline-specific strategies. Explaining what program faculty are learning through their assessment processes about how students are learning will develop a richer understanding than the all-too-typical "are our students successfully meeting the outcomes" approach. (p. 154)

This helped me see how to negotiate the real and ongoing tension between improvement and accountability: We can be accountable, first and foremost, by communicating how programs and institutions use assessment to improve their ability to facilitate student learning. That is, rather than only communicating the proportion of students demonstrating proficiency—a sometimes perfunctory exercise that too often is reported as "we and our students are doing great"—we should be communicating how we identify and study where we and our students struggle and how we use what we learn to improve our institutions and programs.

If we want to improve learning for all students, institutions of higher education should focus more on communicating how we use evidence of student learning to improve our ability to facilitate it and less on trying to prove a sufficient number of our graduates meet some minimum level of achievement. That is not to imply we should ignore the latter—we absolutely need to know if we are not helping enough of our students achieve expected proficiencies. Rather, it should complement fostering the development of "learning institutions," which David Chase (ALA), WSCUC's vice president for educational programs, defined in a workshop as "[a]n institution that focuses on a holistic, developmental trajectory of improvement over time in an intentional and integrated way" (Chase, 2017).

One of our accreditation liaisons once said to us, "If you are being compliant, you are not being compliant." I interpreted this to mean that rather than checking boxes, being accountable can mean designing assessments in response to specific, locally developed, educator-defined questions about student learning; using assessment to improve how we collectively facilitate student learning; and then communicating to multiple audiences the many ways in which we are learning institutions. Similarly, Jankowski and Marshall (2017) pointed out that NILOA (2016) advocates that institutions should "focus on improvement and compliance will take care of itself" (p. 149).

In his workshop, Chase (2017) followed his definition of a learning institution with a quote from Bolman and Deal (2017) that stuck with me because it highlighted how assessment, in its most productive form, can be a highly collaborative process, within and between levels: Leading a learning institution is "a subtle process of mutual influence fusing thought, feeling, and action. It produces cooperative effort in the services embraced by both leader and led" (p. 338). With regard to fostering a learning institution, this framing of leadership made sense to me across all levels: between students and facilitators of their learning (whether they be faculty, student affairs professionals, staff, community partners, etc.); between learning facilitators and their program leaders; between program leaders and their administrators; and between administrators and accreditors. Communicating for improvement and for accountability align when our purpose is to help all stakeholders—from students to accreditors—understand and see how institutions of higher education use assessment to improve student learning.

I like this approach to resolving the tension, but I know it's not that simple. Those who hope for communication of authentic, embedded assessments and valid, reliable data that allow for the evaluation and comparison of multiple institutions will not be satisfied. I am not necessarily opposed to that goal. Ideally it allows institutions to learn from each other and improve all of higher education, which ultimately benefits all students. Perhaps except for pockets of peer institutions that have been successfully advancing assessment for some time, higher education overall is not quite there yet. Institutions, their programs, and the ways faculty and staff assess student learning are far too diverse. Pushing too fast toward standardization risks dampening enthusiasm for assessment given our highly diverse higher education ecosystem. Given current realities, we can best support student learning within and across institutions by effectively communicating the myriad ways we use evidence of student learning to continuously improve our institutions. Let's next explore how we can better support this goal by involving students in communicating assessment.

Including and Serving Students

When thinking about including students in assessment, there are two questions we should ask ourselves: How can students help us better communicate assessment? And what about assessment should we communicate to students? I address each of these questions in the following sections.

How Can Students Help Us Better Communicate Assessment?

It goes without saying that if we want assessment to be more equitable, inclusive, and culturally responsive, we must include a diverse array of student voices during all stages of assessment, a point highlighted in chapter 2, chapter 7, and throughout this book. There are many ways students can inform communication efforts. For example, we can ask students

•• what they think we should assess;
•• how they think we should assess;
•• what they want to know about assessment;
•• to participate in the analysis and interpretation of assessment results;
•• to provide feedback on the clarity and effectiveness of assessment reports before they are finalized (such as during the assessment report review process NILOA recommends);
•• to share and discuss assessment results with their peers in a variety of contexts (e.g., student government); and
•• to provide advice on helping all students better understand and communicate what they have learned as a result of completing our institution’s courses and programs.

Two strong examples of involving students in both conducting and communicating assessment come from North Carolina A&T State University (A&T) and Lebanon Valley College. A&T created the Wabash-Provost Scholars Program (WPSP), which has received much attention (e.g., Baker, 2012; Cook-Sather et al., 2014; Kuh et al., 2015). This program trained undergraduate students to conduct assessments, including student focus groups, with a new focus chosen each year (Wabash-Provost Scholars Program, n.d.). After completing the work, the students analyzed the data, summarized the results, developed recommendations, and presented them to faculty, the provost, and the chancellor, as well as to the broader campus community. Reports produced by WPSP students were posted on the institution’s website and include assessments of faculty-led and student-led supplemental instruction, an assessment of the institution’s intellectual climate, student experiences in math classes, student attitudes toward diversity and inclusion, and a course-redesign project, the results of which were highlighted in an Inside Higher Ed article (Straumsheim, 2016).

The other example comes from Lebanon Valley College. In a NILOA Viewpoint, Bringing Student Voices to the Table: Collaborating With Our Most Important Stakeholders, Damiano (2018) described a number of ways students have been included in institutional assessment.

The Lebanon Valley College example stood out to me because of its creative approach to communicating assessment. The institution’s prior assessment of its institutional learning goals suggested continued opportunities for improving integrated knowledge, quantitative reasoning, problem-solving, and intercultural competence—despite previous efforts to address shortcomings in these areas. What they decided to do next was invite students to review the assessment results and share their own interpretations, which included the following: “Students and faculty were using different operational definitions for ‘integrative knowledge’”; “pedagogies most commonly used at the institution potentially prevent students from effectively developing certain learning outcomes, such as problem-solving”; and “faculty may unintentionally be communicating a message to students that is inconsistent with the institution’s mission, values, and learning goals” (p. 3). When it came time to present the students’ findings to faculty, rather than a standard PowerPoint presentation, “the material was presented in a video where students themselves narrated certain experiences they had or witnessed at Lebanon Valley College that might explain the results” (p. 6). The videos had a “profound and visible impact on faculty” (p. 6) and resulted in the creation of professional development opportunities for faculty.

What About Assessment Should We Communicate to Our Students (What Do Students Want and Need to Know)?

There is another vital aspect of communication—one more directly connected to student success—to which we all need to pay more attention. Kuh et al. (2015) described it like this:

Being transparent with students may include communicating expectations and outcomes, alerting students to the intent of the curriculum or general education courses, explaining what is done with the information gathered, and helping students understand the outcomes of a degree so they can effectively convey their learning internally and externally. (p. 21, italics added)

In Degrees That Matter, Natasha Jankowski shared an anecdote that highlights the importance of communicating to students what we want them to learn and why. She described one of many such conversations she has had with fellow airline passengers who claimed, “I didn’t learn anything in college,” and then followed with a long list of the things, including “soft skills,” they did learn while attending college, “in spite of” what their institutions did (Jankowski & Marshall, 2017, p. 4).

Jankowski later lamented, “We never told them what we were striving for” (p. 157). That is, there is more to be gained from effectively communicating assessment than institutional improvement. Assessment communication should also help students see for themselves and share with others the value of what they learn and can do as a result of attending any of our institutions of higher education.

One can easily imagine course or program learning outcomes written such that they can be readily transferred to a résumé and used as interview talking points (and backed with evidence demonstrating proficiency). For example, the following are CSUMB’s learning outcomes for Business Administration (BA):

•• Understand and apply the terminology, concepts, theories, and tools of management;
•• Produce business writing that meets professional standards;
•• Prepare and deliver professional presentations;
•• Produce a critical analysis of a business scenario; and
•• Analyze data using quantitative tools to support business analysis. (CSUMB, 2019, para. 5)

Or imagine how these learning outcomes from Las Positas College (an institution highlighted on NILOA’s Transparency Framework website) would look on a résumé:

Upon completion of the Certificate of Achievement in Technical Theater, students are able to . . .

•• Analyze elements of a theatrical design;
•• Complete basic hand sewing tasks and read and execute a pattern for costume construction;
•• Hang, cable, and focus stage lighting and be able to read lighting plots and related documents;
•• Perform as a member of a show running crew in various capacities, such as stagehand, light or sound board operator, or wardrobe assistant;
•• Read construction plans and construct common stage scenery such as flats, platforms, and stairs; and
•• Research, plot, and design costumes for use in production. (Las Positas College, 2020, p. 8)

I am certain Swarup will agree that programs can go a long way toward improving student learning and success by writing learning outcomes that clearly communicate to students and their future schools or employers what students will know and be able to do upon graduation, statements they can transfer directly to their résumés.

Conclusion

Bringing all this back to the two themes of this book—creating connections across assessment, teaching, curriculum, and cocurriculum in collaboration with our colleagues and our students—it is easy to see how effectively communicating assessment is required for all the following to happen:

•• Supporting all students: communicating an understanding of equity in assessment and how to manifest its benefits (chapter 2)
•• Writing effective student learning outcomes: communicating what we want students to know and be able to do (chapter 3)
•• Advancing alignment and coherence: communicating how, at each of our institutions, assessment, teaching, curriculum, and cocurriculum stick together (chapter 4)
•• Improving assignment prompts and rubrics: communicating what we want our students to do as well as how and why (chapter 5)
•• Using evidence of student achievement: communicating evidence-based ideas about what we do well and what we can do differently to improve student learning (chapter 6)
•• Advancing reflection: communicating our thoughts, ideas, and inferences throughout all stages of the assessment process (chapter 7)

It is important that we think about assessment communication as more than producing assessment reports. Important communications occur throughout the assessment cycle. As the vignette about Vivian at the start of this chapter illustrates, we should always strive to communicate in ways that get through and help students experience our institutions as supportive and cohesive learning systems. Achieving this requires that all members of a learning institution engage in and tailor assessment communications for strategically chosen audiences, contexts, and purposes, with our ultimate goal always in mind: improving student learning and success. The assessment movement is advancing nicely, and . . . there is still much good work for us to do: better involving students throughout the assessment cycle, using assessment to foster more integrated learning systems, and ensuring all assessment is equitable and inclusive.

References

Baker, G. R. (2012). North Carolina A&T State University: A culture of inquiry. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomeassessment.org/documents/NCAT2.pdf

Baker, G. R., Jankowski, N., Kinzie, J., & Kuh, G. (2019, January). Communicating the value of higher education through evidence-based storytelling. Presentation at the Annual Meeting of the Association of American Colleges & Universities, Atlanta, GA. https://www.learningoutcomesassessment.org/wp-content/uploads/2019/08/AACU2019-Evidence-Based-Storytelling.pdf

Blaich, C., & Wise, K. (2011, January). From gathering to using assessment results: Lessons from the Wabash National Study (Occasional Paper No. 8). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/wp-content/uploads/2019/02/OccasionalPaper8.pdf

Bolman, L. G., & Deal, T. E. (2017). Reframing organizations: Artistry, choice, and leadership. Jossey-Bass.

Chase, D. (2017, November). The learning institution: What it is and why it is important [PowerPoint slides]. The Learning Institution: Aligning and Integrating Practices to Support Quality. WSCUC workshop, San Francisco, CA.

Cook-Sather, A., Bovill, C., & Felten, P. (2014). Engaging students as partners in learning and teaching: A guide for faculty. Jossey-Bass.

CSUMB. (2019). Business Administration, BS: Learning outcomes. CSUMB Catalog 2019/20. https://catalog.csumb.edu/preview_program.php?catoid=2&poid=237&returnto=107

Dahlen, S. P. (2020). Information literacy at CSU Monterey Bay. https://works.bepress.com/sarah-dahlen/12/

Damiano, A. (2018, April). Bringing student voices to the table: Collaborating with our most important stakeholders. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/wp-content/uploads/2019/08/Viewpoint-Damiano.pdf

Evergreen, S. D. (2019). Effective data visualization: The right chart for the right data (2nd ed.). Sage.

Fullerton. (n.d.a). Academic assessment showcase: Assessment and institutional effectiveness. California State University, Fullerton. http://www.fullerton.edu/data/assessment/showcase/

Fullerton. (n.d.b). Past events: Spring 2019 university assessment forum. California State University, Fullerton. http://www.fullerton.edu/data/workshops/

Hutchings, P. (2019). Washington State University: Building institutional capacity for ongoing improvement (NILOA Case Studies). https://www.learningoutcomesassessment.org/wp-content/uploads/2019/08/WSUCaseStudy.pdf

Jankowski, N. A. (2019, October 13–15). Examining our narratives through evidence-based storytelling. Presentation at the National Assessment Institute, Indianapolis, IN.

Jankowski, N. A., & Baker, G. R. (2019). Building a narrative via evidence-based storytelling: A toolkit for practice. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/wp-content/uploads/2019/10/EBST-Toolkit.pdf

Jankowski, N. A., & Marshall, D. W. (2017). Degrees that matter: Moving higher education to a learning systems paradigm. Stylus.

Keeling, R. P. (2004). Learning reconsidered: A campus-wide focus on the student experience. National Association of Student Personnel Administrators and American College Personnel Association.

Keeling, R. P. (2006). Learning reconsidered 2: Implementing a campus-wide focus on the student experience. American College Personnel Association (ACPA), Association of College and University Housing Officers–International (ACUHO-I), Association of College Unions–International (ACUI), National Academic Advising Association (NACADA), National Association for Campus Activities (NACA), National Association of Student Personnel Administrators (NASPA), and National Intramural-Recreational Sports Association.

Kristof, N. (2017, November). How to reduce shootings. New York Times. https://www.nytimes.com/interactive/2017/11/06/opinion/how-to-reduce-shootings.html

Kuh, G. D., & Banta, T. W. (2000). Faculty-student affairs collaboration on assessment: Lessons from the field. About Campus, 4(6), 4–11. https://doi.org/10.1177/108648220000400603

Kuh, G. D., Ikenberry, S. O., Jankowski, N. A., Cain, T. R., Hutchings, P., & Kinzie, J. (2015). Using evidence of student learning to improve higher education. John Wiley & Sons.

Las Positas College. (2020). Current SLO listings. http://www.laspositascollege.edu/slo/slo_list.php

LePeau, L. (2015). A grounded theory of academic affairs and student affairs partnerships for diversity and inclusion aims. The Review of Higher Education, 39(1), 97–122. https://doi.org/10.1353/rhe.2015.0044

Makela, J. P., & Rooney, G. S. (2012). Learning outcomes assessment step-by-step: Enhancing evidence-based practice in career services. National Career Development Association.

McNair, T. B., Bensimon, E., Cooper, M. A., McDonald, N., & Major, T., Jr. (2016). Becoming a student-ready college: A new culture of leadership for student success. John Wiley & Sons.

Metzler, E. T., & Kurz, L. (2018). Assessment 2.0: An organic supplement to standard assessment procedure. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/wp-content/uploads/2019/02/OccasionalPaper36.pdf

NILOA. (2016, May). Higher education quality: Why documenting learning matters. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/documents/NILOA_policy_statement.pdf

NILOA. (2019). Transparency framework. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. https://www.learningoutcomesassessment.org/ourwork/transparency-framework/

Salt Lake City Community College. (n.d.). Examples of excellence. http://www.slcc.edu/assessment/examples-of-excellence.aspx

Schuh, J. H., & Gansemer, A. M. (2010, December). The role of student affairs in student learning assessment (Occasional Paper No. 7). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/documents/StudentAffairsRole.pdf

Straumsheim, C. (2016). Breaking the “Iron Triangle.” Inside Higher Ed. https://www.insidehighered.com/news/2016/08/17/flipped-classroom-project-north-carolina-greensboro-produces-promising-results

Suskie, L. (2017, November). What can an article on gun control tell us about creating good assessment reports? https://www.lindasuskie.com/apps/blog/show/44917739-what-can-an-article-on-gun-control-tell-us-about-creating-good-assessment-reports

Wabash-Provost Scholars Program. (n.d.). The Wabash-Provost Scholars Program at NC A&T State University. https://wpscholars.wordpress.com/

Washington State University. (2019). WSU-wide summary, 2019 undergraduate degree program assessment reports. https://ace.wsu.edu/2019/10/23/wsu-wide-summary-of-2019-undergraduate-degree-program-assessment-reports-available/

ABOUT THE AUTHORS

Amy Driscoll retired from California State University, Monterey Bay, as the founding director of the Center for Teaching, Learning, and Assessment and from Portland State University as director of Community/University Partnerships. For the last 11 years, she has coordinated and taught in the Assessment Leadership Academy, a yearlong program for faculty and administrators, and consulted nationally and internationally. She coauthored Developing Outcomes-Based Assessment for Learner-Centered Education: A Faculty Introduction with Swarup Wood (Stylus, 2007). Amy can be reached at [email protected]

Nelson Graff currently serves as professor and director of Communication Across the Disciplines, teaching first-year reading/writing and supporting faculty around CSUMB in teaching reading and writing in their classes. Before that, he was an associate professor of English education at San Francisco State University, preparing future secondary English teachers. Nelson can be reached at [email protected]

Dan Shapiro currently serves as the interim associate vice president for academic programs and dean of University College and Graduate Studies at California State University, Monterey Bay (CSUMB). He has worked at CSUMB since 1997, beginning as a lecturer and as a faculty associate in the Center for Teaching, Learning, and Assessment (TLA)—when Amy Driscoll was the director. He started serving as director of TLA in 2014. He is a graduate of the WSCUC Assessment Leadership Academy (ALA, Cohort VII) and coordinates the ALA project mentoring program. Dan can be reached at [email protected]

Swarup Wood is professor of chemistry and currently serves as interim director of general education and coordinator of the First Year Seminar at California State University, Monterey Bay, where he has worked since 1997. He coauthored Developing Outcomes-Based Assessment for Learner-Centered Education: A Faculty Introduction with Amy Driscoll (Stylus, 2007). Swarup can be reached at [email protected]



Also available from Stylus

Real-Time Student Assessment: Meeting the Imperative for Improved Time to Degree, Closing the Opportunity Gap, and Assuring Student Competencies for 21st-Century Needs
Peggy L. Maki
Foreword by George D. Kuh

“Peggy Maki, not surprisingly, again advances the practice and the fundamental reasons for engaging in assessment of student learning. Maki issues a clarion call for equity to be the center of our commitment for action. Her call is for timely action that produces the evidence of improved student learning, enhancing at every step both retention and completion, and quality of learning for success in life, work, and lifelong flourishing.”—Terrel L. Rhodes, Vice President, Office of Quality, Curriculum, and Assessment, Association of American Colleges and Universities

Transparent Design in Higher Education Teaching and Leadership: A Guide to Implementing the Transparency Framework Institution-Wide to Improve Learning and Retention
Edited by Mary-Ann Winkelmes, Allison Boye, and Suzanne Tapp
Foreword by Peter Felten and Ashley Finley

“Who knew we could enrich and deepen learning by clearly explaining to students what they should focus on and why it matters! This book takes the mystery out of improving learning and teaching by appropriating a powerful idea hiding in plain sight to concentrate student and instructor effort on understandable, purposeful educational tasks adaptable to any classroom, lab, or studio.”—George D. Kuh, Chancellor’s Professor Emeritus of Higher Education, Indiana University; Founding Director, National Survey of Student Engagement, National Institute for Learning Outcomes Assessment

Degrees That Matter: Moving Higher Education to a Learning Systems Paradigm
Natasha A. Jankowski and David W. Marshall

“This book is an important reminder of the necessity for college and university actors to become aware of the critical role they play in the construction of effective learning environments. The authors advocate for a renewed sense of agency where students, faculty, and administrators do not succumb to a culture of compliance. The authors not only ask for a more active and conscious participation in the construction of learning environments, but also for a more honest and public dialogue about the dynamics that work or do not work in higher education institutions. This book is required reading for educational leaders who want to construct creative, caring, and collaborative forms of learning in higher education institutions.”—Teachers College Record

Facilitating the Integration of Learning: Five Research-Based Practices to Help College Students Connect Learning Across Disciplines and Lived Experience
James P. Barber
Foreword by Kate McConnell

“Facilitating the Integration of Learning provides invaluable information for educators. Barber adeptly uses findings from the Wabash National Study of Liberal Arts Education to illuminate practices that support students’ abilities to integrate learning in a multitude of contexts. Rather than dictating how to engage in these practices, Barber provides readers with the tools to reflect upon, design, and assess educational experiences that promote integration of learning. It is a must read for college educators.”—Rosemary J. Perez, Assistant Professor, School of Education, Iowa State University

22883 Quicksilver Drive
Sterling, VA 20166-2019

Subscribe to our e-mail alerts: www.Styluspub.com
