Improving Assessment in Higher Education : A Whole-of-Institution Approach 9781742246628, 9781742234007

The result of an innovative, three-year study aimed at improving the quality of teaching and learning at the University of New South Wales.


English, 385 pages, 2014


Improving Assessment in Higher Education

Emeritus Professor Richard Henry works as a consultant. He is the immediate past Vice President and Deputy Vice-Chancellor (Academic) at the University of New South Wales. He is Chair of the Board of the Centre for Social Impact, a trustee of Sydney Grammar School and a Director of Children’s Cancer Institute Australia. He was appointed a Member of the Order of Australia in 2007 for service to paediatric respiratory medicine as a clinician, researcher, educator and mentor, and for serving in a range of roles with professional medical organisations.

Professor Stephen Marshall is Director of Learning and Teaching and Professor of Higher Education in the School of Education in the Faculty of Arts and Social Sciences at the University of New South Wales. Prior to his appointment at the University of New South Wales, Stephen held the positions of Director of the Learning and Teaching Centre and Professor and Director of the Institute for Higher Education, Research and Development at Macquarie University. His research interests focus on academic leadership and management development and the educational change process in higher education. He is the immediate past Vice-President of the Council of Australian Directors of Academic Development and is a reviewer for many journals in the broad fields of higher education and educational leadership.

Professor Prem Ramburuth is President of the Academic Board and Professor in International Business in the Australian School of Business at the University of New South Wales. She is recognised nationally and internationally for her scholarship in learning and teaching and has been the recipient of several teaching excellence awards. She has published in high-ranking journals in higher education and international business, is on the editorial board of the Academy of Management Learning and Education (AMLE), and is a reviewer for many journals in both her disciplines.

ImprovingAssessmentText2Proof.indd 1

11/11/13 3:35 PM


Improving Assessment in Higher Education
A whole-of-institution approach

Edited by Richard Henry, Stephen Marshall and Prem Ramburuth


A UNSW Press book
Published by NewSouth Publishing
University of New South Wales Press Ltd
University of New South Wales
Sydney NSW 2052
AUSTRALIA
newsouthpublishing.com

© UNSW Learning and Teaching Unit 2013
First published 2013
10 9 8 7 6 5 4 3 2 1

This book is copyright. While copyright of the work as a whole is vested in the UNSW Learning and Teaching Unit, copyright of individual chapters is retained by the chapter authors. Apart from any fair dealing for the purpose of private study, research, criticism or review, as permitted under the Copyright Act, no part of this book may be reproduced by any process without written permission. Inquiries should be addressed to the publisher.

National Library of Australia Cataloguing-in-Publication entry
Title: Improving assessment in higher education: a whole-of-institution approach / edited by Richard Henry, Stephen Marshall and Prem Ramburuth.
ISBN: 9781742234007 (paperback), 9781742246628 (ePDF)
Notes: Includes index.
Subjects: University of New South Wales. Educational tests and measurements – New South Wales. Examinations – New South Wales. Educational change – New South Wales.
Other Authors/Contributors: Henry, Richard L. (Richard Leigh), editor. Marshall, Stephen J. (Stephen John), editor. Ramburuth, Prem, editor.
Dewey Number: 371.271

Design: Josephine Pajor-Markus
Cover design: Xou Creative
Printer: Griffin Press

This book is printed on paper using fibre supplied from plantation or sustainably managed forests.


Contents

List of contributors   vii
Foreword   ix
Preface   xi
Abbreviations   xiii

Part I
1  International and national contexts in higher education   2
2  The institutional context for change   25
3  The UNSW approach to improving assessment   39

Part II
4  The Faculty of Arts and Social Sciences: A whole-of-faculty assessment tool   76
5  The Australian School of Business: Re-thinking assessment strategies   99
6  The Faculty of the Built Environment: Improving approaches to assessment through curriculum renewal   127
7  The COFA experience: Assessment reform and review   147
8  The Faculty of Engineering: Beyond professional accreditation   166
9  The Law School and the Assessment Project   189
10  The Faculty of Medicine: Diversity, validity and efficiency of assessment   208
11  The Faculty of Science: The challenge of diverse disciplines   234
12  UNSW Canberra: A university within a university   263

Part III
13  Faculty responses to the Assessment Project   282
14  Institutional outcomes of the Assessment Project   314
15  Lessons learnt about whole-of-institution educational change   339

Index   360


List of contributors

»» Dr David Blaazer, Associate Dean (Education), The University of New South Wales, Canberra
»» Professor Sean Brawley, Associate Dean (Education), Faculty of Arts and Social Sciences, The University of New South Wales
»» Dr David Clements, Associate Dean (Academic), Faculty of Engineering, The University of New South Wales
»» Associate Professor Julian Cox, Associate Dean (Education), Faculty of Science, The University of New South Wales
»» Graham Forsyth, Associate Dean (Academic), College of Fine Arts, The University of New South Wales
»» Emeritus Professor Richard Henry AM, Immediate past Vice President and Deputy Vice-Chancellor (Academic), The University of New South Wales
»» Professor Philip Jones, Associate Dean (Education), Faculty of Medicine, The University of New South Wales
»» Associate Professor Glenda Lawrence, Associate Dean (Postgraduate Coursework), Faculty of Medicine, The University of New South Wales
»» Dr Louise Lutze-Mann, Learning and Teaching Fellow, The School of Biotechnology and Biomolecular Sciences, Faculty of Science, The University of New South Wales
»» Dr Nancy Marshall, Senior Lecturer, Faculty of Built Environment, The University of New South Wales
»» Professor Stephen Marshall, Director of Learning and Teaching, The University of New South Wales


»» Lois Meyer, Senior Research Fellow, School of Public Health and Community Medicine, Faculty of Medicine, The University of New South Wales
»» Dr Loretta O’Donnell, Associate Dean (Education), Australian School of Business, The University of New South Wales
»» Dr Jane Paton, Project Officer, Learning and Teaching Unit, Faculty of Science, The University of New South Wales
»» Professor Prem Ramburuth, President of the Academic Board, The University of New South Wales
»» Iona Reid, Manager, Learning and Teaching Unit, Faculty of Science, The University of New South Wales
»» Associate Professor Alex Steel, Associate Dean (Education), Faculty of Law, The University of New South Wales
»» Dr Rachel Thompson, Learning and Teaching Fellow, Faculty of Medicine, The University of New South Wales
»» Lisa Zamberlan, Senior Lecturer, Faculty of Built Environment, The University of New South Wales


Foreword

Over the last five years, universities, particularly research-intensive universities such as UNSW, have had to cope with a number of challenges. First, funding per student has remained at best flat, while research funding, both for projects and overheads, is declining. Second, competition for top students, both domestically and internationally, is intensifying. Students are more demanding of quality, producing surveys such as the International Student Barometer that highlight good and poor performance. Third, we face vigorous competition in attracting and retaining top staff. As a result, workload pressure on academic staff seems to increase relentlessly. Academic staff need to teach well, albeit with more students, while keeping up research output, especially in top journals. Consequently the issue of workloads has become more prominent.

This book explains and reviews one response to these pressures. As we discussed workloads in our leadership group at UNSW, it became clear that assessment of student work and providing feedback were major users of time. Moreover, from a student perspective, the quality of assessment was seen as a critical factor in determining course satisfaction. After discussion and review of a sample of current practices, we were convinced that the opportunity to provide better assessment with less academic time and effort could be substantial.

Assessment, like much of what occurs in universities, is a ‘cottage industry’. Each staff member has their own favourite process, and evaluation of the effectiveness and efficiency of assessment is rare. There is little perception that change is needed, and little willingness to change accordingly.


This book chronicles how UNSW tackled the challenge of improving assessment. While the authors are the main players in driving the change, what makes the work particularly insightful are the detailed chapters on each faculty’s experience. At a time when we are continually expected to do better with less, I commend this work for its rigour, as well as its practical insights on how assessment can be improved for the benefit of both staff and students.

Professor Frederick G Hilmer
President and Vice-Chancellor, The University of New South Wales


Preface

This book had its origins in a desire by the senior executive at UNSW to reduce the workload of academic staff. Assessment of students’ work constitutes a significant proportion of the teaching workload of an academic. We hoped that, by decreasing the time that academic staff devoted to assessment, we would make a meaningful impact on lessening their teaching load. The President and Vice-Chancellor set the Vice President and Deputy Vice-Chancellor (Academic) the task of improving the efficiency of student assessment at the same time as maintaining or improving the quality of assessment.

A three-year project was launched in March 2010 at a forum attended by Heads of School and the Vice-Chancellor’s Advisory Committee (including faculty Deans and other senior executives). The Director of Learning and Teaching presented two scenarios of a theoretical new subject. In the first scenario, teaching and assessment were conventional; in the second the quality of the teaching and assessment was far superior and involved fewer staff and lower student workloads. Faculty response to the suggestion that there was great potential in implementing this model across the university was polite but unenthusiastic. However, both the Vice-Chancellor, Professor Frederick Hilmer, and the Deputy Vice-Chancellor (Academic), Emeritus Professor Richard Henry, were convinced and Professor Henry became the project’s champion.

The Director of Learning and Teaching, Professor Stephen Marshall, provided intellectual leadership and the key implementation group was


composed of the Associate Deans (Education) in each faculty. During the course of the project, Professor Prem Ramburuth, Associate Dean (Education) in the Australian School of Business, was elected as the President of the Academic Board. She also chaired a subcommittee of the Academic Board: the Committee on Education. Thus, the trio of co-editors all had key and complementary leadership roles in the project. However, no meaningful outcomes would have been achieved without the individual and collective efforts of the nine Associate Deans (Education), one from each of the eight faculties and one from UNSW Canberra. We acknowledge their enthusiasm, commitment, hard work and effectiveness in making the assessment project a reality and, of course, the chapters that they have written for this book.

The book is in essence a case description of the collective journey and outcomes across the disciplines. However, it contains valuable lessons as to how an institution can approach change in teaching and assessment, at the same time facilitating reduced workloads for academic staff and improving the student experience.

Richard Henry
Stephen Marshall
Prem Ramburuth


Abbreviations

AACSB  Association to Advance Collegiate Schools of Business
ADA  Associate Dean (Academic)
ADE  Associate Dean (Education)
AD(PGC)  Associate Dean (Postgraduate Coursework)
ADFA  Australian Defence Force Academy
AeLP  adaptive eLearning platform
AEQ  assessment experience questionnaire
AGSM  Australian Graduate School of Management
AHELO  assessment of higher education learning outcomes
AIB  assessment item bank
ALTC  Australian Learning and Teaching Council
AOL  assurance of learning
AQF  Australian Qualifications Framework
ASB  Australian School of Business
ATAR  Australian Tertiary Admission Rank
AUQA  Australian Universities Quality Agency
BA  Bachelor of Arts
BABS  Biotechnology and Biomolecular Sciences
BE  Built Environment
CATEI  course and teaching evaluation and improvement
CEQ  course evaluation questionnaire
CLTD  Coordinator, Learning and Teaching Development
COE  Committee on Education
COFA  College of Fine Arts
CPR  calibrated peer review
CSE  Computer Science and Engineering
CSV  comma separated value
DEEWR  Department of Education, Employment and Workplace Relations
DLT  Director of Learning and Teaching
DVCA  Deputy Vice-Chancellor (Academic)
ED  educational development
EMBA  Executive Master of Business Administration
EMPA  English, Media and Performing Arts
EMS  educational media system
ERA  Excellence in Research for Australia
FARG  Faculty Assessment Review Group
FASS  Faculty of Arts and Social Sciences
FBE  Faculty of Built Environment
FRLT  Faculty Review of Learning and Teaching
FULT  Foundations of University Learning and Teaching
GCULT  Graduate Certificate in University Learning and Teaching
GDS  Graduate Destination Survey
Go8  Group of Eight
HDR  higher degree research
HOS  Heads of School
HP  History and Philosophy
ICT  information and communications technology
IT  information technology
ITIP  Information Technology Investment Plan
JD  Juris Doctor
KPT  key performance target
LC  learning centre
L&T  learning and teaching
LIC  Lecturer in Charge
LLB  Bachelor of Laws
LMS  learning management system
LSSSE  Law School Survey of Student Engagement
LTAC  Learning and Teaching Advisory Committee
LTF  Learning and Teaching Fellow
LTG  learning and teaching group
LTPF  Learning and Teaching Performance Fund
LTU  learning and teaching unit
MAPPS  Management of Proposals and Portfolio System
MBA  Master of Business Administration
MBT  Master of Business and Technology
MCom  Master of Commerce
MCQ  multiple-choice question
MHM  Master of Health Management
MIPH  Master of International Public Health
MLU  marking load units
MOOC  massively open online course
MPH  Master of Public Health
MPhilHE  Master of Philosophy in Higher Education
MSE  Material Science and Engineering
OECD  Organisation for Economic Co-operation and Development
OH&S  occupational health and safety
OSCE  objective structured clinical examination
PAB  President of Academic Board
PG  postgraduate
PhD  Doctor of Philosophy
PLG  program learning goal
PLO  program learning outcome
PVC  Pro-Vice-Chancellor
QA  quality assurance
QI  quality improvement
QVS  quality verification system
RED  research evaluation and development
RMC  Royal Military College
S1  Semester 1
S2  Semester 2
SASS  student and administration systems
SLTU  Science Learning and Teaching Unit
SOTL  scholarship of teaching and learning
SPHCM  School of Public Health and Community Medicine
TELT  technology enabled learning and teaching
TEQSA  Tertiary Education Quality and Standards Agency
TESTA  transforming the experience of students through assessment
TLO  threshold learning outcome
UG  undergraduate
UNSW  The University of New South Wales
UOC  unit of credit
VC  Vice-Chancellor
VCAC  Vice-Chancellor’s Advisory Committee
VPFO  Vice President Finance and Operations


PART I


1 International and national contexts in higher education

Prem Ramburuth, Richard Henry and Stephen Marshall

The institutional-level Assessment Project at UNSW was conducted in a period of changing dynamics in the higher education sector. This chapter seeks to capture key aspects of these changes and consider their implications for universities nationally and internationally, as well as the more direct implications for UNSW. The higher education sector across the world is undergoing significant changes. Reports nationally and internationally (Commonwealth of Australia, 2009; Gallagher, 2010; Universities UK, 2012) reflect these changes and the implications for the delivery of efficient and effective education agendas. While higher education systems put in place to manage these changes may differ across countries, the general trends indicate that the changes are ‘being driven by a number of factors: political, cultural, economic and technological. The trends are global in their scope and far reaching in their import. They affect every aspect of university provision ...’ (Universities UK, 2012, p.2). These drivers of change, while seemingly discrete, are ‘interconnected’ in their impact, as demonstrated in this and other chapters of the book. They are creating an environment in the higher education sector that requires institutional


rethinking of strategic directions at many levels, including overall institutional management, modes of teaching delivery, sustainable funding sources and strategies for ensuring the quality of the student experience. This chapter seeks to explore some of the interacting factors that are driving the changes and shaping the architecture of universities in the future.

Political drivers and demands for accountability

A strong political driver of change in higher education is the fact that governments in many countries are increasingly exerting influence (and even pressure) on universities to be more accountable for the results they produce with the resources made available to them. This trend is highlighted by education experts including Salmi (2009, p.vi), who notes that ‘Governments, parliaments, and the public are increasingly asking universities to justify their use of public resources and account more thoroughly for their teaching and research results’. Others such as Brookes and Beckett (2013) assert that such accountability is required to ensure effective measures for a quality product, as well as for the added benefits of national and international benchmarking.

Accountability is manifested in different ways. In the US, the quality system seeks to ensure the delivery of quality outcomes largely by internal institutional standards and program reviews but is also driven by external accreditation systems of accountability and large-scale program assessment by public agencies, with no overarching national qualifications framework (Gibeling, 2010). This is somewhat different from the UK and Australian systems where there are national quality qualifications frameworks, the former having more advanced systems than Australia. In the UK, for example, the drivers of accountability and quality improvement in higher education institutions are the National Qualifications Framework that sets academic expectations associated with each award, as well as the skills and competencies expected of receivers


of the awards; Subject Benchmark Statements that provide details of standards expected in the disciplines; Programme Specifications that identify intended learning outcomes; and the Quality Assurance Agency Code of Practice that provides guidance on quality and standards that all UK higher education institutions are required to meet (Quality Assurance Agency, 2013). Aspects of these quality agencies and systems of accountability have clearly influenced the Australian approach that was extensively overhauled and ‘modernised’ on the recommendation of the Bradley Report (Bradley et al., 2008), to ensure that the existing Australian quality assurance systems were more in tune with the quality management standards and systems adopted in other parts of the world. This major national review, undertaken by Professor Denise Bradley, called for a restructured approach to accrediting Higher Education institutions in Australia and to quality assurance, with an emphasis on the ‘measurement of outcomes’ (Harris and James, 2010, p.100). The quality systems and agencies now include a national qualifications framework, the Australian Qualifications Framework (AQF); National Protocols for Higher Education Approval that set out criteria for approving the establishment of higher education institutions; Higher Education and Research Threshold Standards that set the minimum standards for auditing and reporting; and the Tertiary Education Quality and Standards Agency (TEQSA) that oversees implementation of the standards and conducts regular independent quality audits (Gallagher, 2010; Norton, 2013). In relation to quality in teaching, learning and assessment, the revamped AQF seeks to provide criteria for common understanding of qualification types, as well as an outline of the skills, levels of knowledge and learning outcomes expected in each qualification attained by students. 
Internal mechanisms include a range of institutional quality monitoring strategies, among them academic governance and oversight by Academic Boards. Universities and tertiary institutions in Australia have generally


taken heed of the need for improved accountability but the role of TEQSA has brought mixed responses and many challenges, reflecting experiences in the US, UK and Canada when increasing demands for accountability from universities were introduced. Some universities established TEQSA compliance units and appointed dedicated staff with the explicit task of ensuring adherence to the AQF and higher education standards, at an enormous cost to the institutions. Others, such as UNSW, responded by resisting the external standards and by setting internal standards with oversight by their Academic Boards. There was dissatisfaction, too, with the risk model adopted and implemented by TEQSA. Craven and Davis (2013, p.29) suggest that a fundamental flaw ‘… in the legislation creating TEQSA assumes that higher education (in Australia) is at a risk …’, while in reality, the sector is a successfully nurtured and developed $15 billion education export industry! They also suggest TEQSA has moved away from its principle of proportionality and an intended (and expected) ‘light touch’ approach that would ‘encourage minimal intrusion into the affairs of established, highly reputable institutions’ (p.29).

Researchers and educationists such as Gallagher (2010) reiterate the sentiment that, increasingly, publicly funded universities in the UK, US, Canada, Australia and across many countries in Asia, which once had relatively high levels of autonomy, are now faced with greater government demands for accountability. Others such as Professor Hazelkorn, head of the Dublin-based Higher Education Policy Research Unit, are more directly critical of regulators whose demand for greater accountability infringes on the autonomy on which universities have based the delivery of their agendas over hundreds of years. She points to TEQSA as being a regulator that ‘fits at the extreme of the emerging trend’ (Lane, 2013a, p.33), a perspective shared by the Chair of the Group of Eight Universities in Australia and Vice-Chancellor of UNSW, Professor Fred Hilmer (Lane, 2013b), who suggests that all that is required

International and national contexts in higher education 5

ImprovingAssessmentText2Proof.indd 5

11/11/13 3:35 PM

is a ‘light touch’ approach that provides oversight and trust in the integrity and autonomy of universities. Norton (2013, p.16) further highlights this shift from universities exercising self-accreditation of their programs via their Academic Boards (within a framework established by government) to a situation where the self-accreditation status has been ‘diluted by the TEQSA reforms’ and ‘under TEQSA, universities must be periodically re-registered with the potential for their self-accreditation power to be removed or qualified’. Accreditation will now be determined by a university’s performance against a set of Learning and Teaching and Research Threshold Standards, currently being refined. The ‘one size fits all’ approach taken by TEQSA has not been well received.

Employability skills and graduate capabilities

Skills shortages in important areas of the workforce (countered by unemployment in other areas) and employer expectations in relation to graduate skills required for the workplace are also driving governments (and employers) to make greater demands of higher education institutions. In addition, while future employment may not always be the reason for undertaking university education, it certainly is the outcome desired by most students. There is an expectation that graduates will attain both the knowledge and capabilities to ensure good access to employment, and in so doing, address student aspirations, labour market demands, government agendas and employer and community expectations. In the US, in addition to the agenda of enhancing standards and delivering quality post-secondary education, the government’s quality agenda has become increasingly linked to ensuring the development of graduate employability skills to address the needs of labour markets, especially in the post–global financial crisis period. For example, in their book American Higher Education in the Twenty-First Century, Altbach, Berdahl and Gumport (2005)


refer to the importance of universities responding to the need for skilled workers in tight labour markets, while a World Bank Report (Salmi, 2009, p. vi) stresses the notion of ‘relevance’ with regard to university qualifications. In the UK and Europe, ‘skills’ agendas in higher education have moved ahead and are ‘aimed at shifting toward a high skilled economy, alongside longer compulsory education and training, [and] indicate a trend toward higher education becoming the primary entry point into the labour market’ (Universities UK, 2012, p.5). At the national level in Australia, Norton (2013) asserts that, with the shifts and changes in labour markets and resultant skills shortages, governments may be justified in seeking to address the economy’s skills shortages and areas of high demand through influencing teaching and professional training agendas in the higher education sector. He notes that, previously, skills development ‘may not have been a systematic focus of earlier higher education policy’ (p.75) but changes are occurring. This can be observed in the recent trends in the allocation of government funding in the higher education sector; for example, in the increased funding of places in the disciplines of teaching, nursing, health, mining engineering and so on, in the current time of shortages and earlier increases in funding of places in the disciplines of engineering and science (Norton, 2013). Academic and professional agencies and networks have also begun influencing the inclusion of ‘skills agendas’ in the context of higher education. For example, networks such as the Australian Business Deans Council have sought to ensure the development of graduate capabilities in the academic context that will prepare graduates across the many business disciplines for careers in the private and public sectors. 
The Group of Eight (Go8) universities piloted a quality verification system (QVS) in 2011 and 2012, seeking to compare and improve standards relating to learning outcomes and the assessment of graduate capabilities in common disciplinary courses offered by Go8 member


institutions. A similar focus on preparation for the workplace (in university study) is evident in the increasingly influential role of discipline-based accreditation agencies. Engagement in these processes is often far more intense, rigorous and revealing of institutional performance than the government’s set minimum threshold standards. Many institutions in the Australian higher education sector, including UNSW, are increasingly setting their own standards for enabling students to acquire a range of capabilities and facilitating the Bradley Review recommendation that they ‘produce graduates with the knowledge, skills and understandings for full participation in society and the economy’ (Commonwealth of Australia, 2009, p.4).

Economic drivers of change

Perhaps the most influential force causing a rethink of institutional strategy in the current higher education environment, and one that will continue to drive change internationally and nationally, is the economy. The general lack of certainty in the higher education sector is accompanied by an even stronger lack of certainty surrounding the sources and sustained levels of funding required by institutions to effectively deliver their teaching and research agendas. The impact is being felt by higher education institutions in countries such as the UK, Canada, New Zealand and the public universities in the US, where there is a reliance on rapidly shrinking government funding. In earlier warnings in the US context, Altbach, Gumport and Berdahl (2005) drew attention to the challenges that universities would face in times of shrinking government funds, including having to contend with competing social responsibility agendas, the movement of critical government funds to other areas of priority (often politically driven) and increasing student intakes. Harris and James (2010) reiterate these funding challenges as they affect Australian higher education. They draw attention to the ‘steady decline in the proportion of university revenue provided


by government, with universities on average now receiving less than half of their revenue from public funding’ (p.99). The steady decline continues, as is evident in the planned government funding cuts of approximately $2.3 billion from the tertiary budget in the 2013–2014 budget announcements and the intention to reallocate a major portion of that funding to the Commonwealth Government’s priority area of school education. Universities have been left to decide for themselves how and where those cuts will be made in their institutions. Instability has also been created by the stop–start model of government funding policies, with leaders in the sector calling for an end to the uncertainty and for greater funding stability for better performance by universities and other tertiary institutions.

This stop–start approach to funding in Australia was also experienced in the creation (in 2006) and subsequent closure (in 2010) of the Learning and Teaching Performance Fund (LTPF) administered by the Commonwealth Government Department of Education, Employment and Workplace Relations (DEEWR). The LTPF was established to reward universities for demonstrating excellence in teaching on a number of teaching, learning and quality scales. Performance was measured and rated on data gathered from universities through implementation of the Australian Graduate Survey, comprising the Graduate Destination Survey (GDS) and the Course Experience Questionnaire (CEQ). The GDS collects information on graduate employment (including occupation, employment rates and starting salary) and their engagement in further study, with the information also made available in the Good Universities Guide to assist new students in selecting universities and programs of study.
The CEQ collects information on graduates’ experience and satisfaction with courses or programs of study, with scales based on good teaching, clear goals, appropriate assessment, appropriate workload and overall satisfaction (Harris and James, 2010). The data drawn from these two instruments (the GDS and the CEQ) informed
the LTPF and its ratings and ranking of universities, with the most highly ranked universities receiving a financial bonus. It was explicitly a learning and teaching reward fund and not allocated as core government funding. It was modest, involving the distribution of $54 million in 2006; this was increased to $83 million in 2007, and a new ‘Improvement’ category was introduced in 2009 (Harris and James, 2010). The funding was welcomed by the sector, as the reward funds could be spent on learning and teaching improvement and innovation, including initiatives such as the Assessment Project at UNSW. Its cessation has restricted funded opportunities for innovations and improvements in learning and teaching. Other factors that contributed to challenges to the business models of universities include the introduction of the demand-driven funding model, which intensified local market competition; the uncapping of student numbers, allowing student demand to be met with extra university places and creating a corresponding need for additional teaching resources; and government intervention in setting student enrolment targets in areas consistent with political and economic agendas, without the allocation of appropriate levels of funding support (Campus Review, 2013). These actions, together with drivers such as the increasing demands for accountability, have put enormous pressure on universities to deliver in financially uncertain times. More students, with lower average university entrance scores, are enrolled without proportionate growth in funding per student. The impact at the institutional level has become increasingly complex and challenging to manage. For example, the economic drivers and funding models encourage universities to teach in large group lectures and discourage high quality small group teaching, as funding is allocated for each student enrolled.
Consequently, enrolling more students leaves universities better off financially but worse off in terms of the staff to student ratio. The drivers encourage the recruitment and teaching of large numbers of international students because this area of
setting fees is deregulated; at the same time, the diversity of the student population is intensified, requiring costly support systems. The drivers discourage taking on costly disadvantaged students because the extra financial margin allocated for teaching them is lower than the real cost of the extra resources required to support the learning and development of these students. Contestability of government funding and the demand for greater accountability seem to be features that are here to stay, with universities being forced to do more with less. The cost of demonstrating compliance with regulatory authorities is diverting resources away from teaching and research. The search for non-government funding has made universities increasingly reliant on international student fees as a major revenue source (Brookes and Beckett, 2013). However, changing enrolment trends from source countries; students’ greater access to technology and course information (including program content, skills development, work opportunities, scholarships, funding support, quality and institutional reputation); and increased competition for market share have led to instability in this revenue stream. In their report, Ernst & Young (2012, p.8) make the observation that ‘universities in Australia will need to prepare for an environment where every dollar of government funding is contestable and any growth in funding comes from non-government sources – students, industry, philanthropists, and global collaborations – that are all competitive’.

Equity agendas and change

Yet another driver of change in the higher education sector, internationally and nationally, has been a growing trend towards greater ‘social equity’ in education, and a move to provide access to educational opportunities for those in the community who have not previously had such opportunities. The move is seen as shifting the positioning of universities and other tertiary institutions from being the domain of the elite to the domain of capable learners from all
backgrounds. The trend has contributed to positive flow-on effects such as the more inclusive development of ‘knowledge nations’, the preparation of more educated citizens and the training of better workforces (Gidley et al., 2010). It has also brought, and continues to bring, challenges in the form of the ‘massification’ of higher education and increasing student numbers, challenges in teaching a cohort that is more susceptible to dropping out or failing to perform academically at required levels, and challenges in addressing deficiencies in the level of support needed to enable and manage the learning of students whose experience is grounded in disadvantage. Similar trends have been reported in the UK, Canada, the US and Europe, as well as in Australia, with some of the challenges and accomplishments at UNSW captured throughout this book. In Australia, the political agenda for social change was signalled in the Bradley Report (2008) and reflected in the Commonwealth Government’s embracing of the recommendation that ‘higher education should provide opportunities for all capable people from all backgrounds to participate to their full potential and be supported to do so’ (Commonwealth of Australia, 2009, p.4). This was further embedded in a Commonwealth Government policy that sought to improve the participation of students from low socio-economic status backgrounds in higher education to 20 per cent of all undergraduate students by 2020 (DEEWR, 2009). While universities welcomed this socially responsible strategy, the initial funding support to ensure success for both the strategy and individual students has been limited and inadequate for sustained support.
The actual flow-through to the learning–teaching–assessment interface, the increasingly diverse classroom context, the much-needed support for the targeted students in the disciplines, and support for academics untrained to deal directly with learning disadvantage have all presented challenges, as highlighted in recent studies and reports (for example, DEEWR, 2009; Ramburuth and Hartel, 2010).


Technology as a driver of change

Universities are playing ‘a very significant role in incubating the new technologies that are currently shaping society, such as internet technology and the digitisation of content’ (Universities UK, 2012, p.18). While universities in the US are taking the lead, most universities are being forced to rethink their current and future approach to education delivery in light of the emergence of digital technologies and their accelerated use. On one side of the current debate, there are suggestions that the impact of new technologies will result in a departure from site-based delivery of education to more flexible learner-selected options. On the other side, researchers (Ernst & Young, 2012) suggest that new technologies will facilitate the arrival of media companies on campuses to collaboratively or independently deliver programs. Others take a more moderate view, seeing the arrival of the technologies as a time to rethink entrenched approaches to delivery and explore new ideas and approaches to learning and teaching, especially with the enrolment of students who are highly competent in the use of the technologies. Many are excited about the opportunities for learning outside the traditional lecture and physical classroom. The goal posts seem to be shifting from educator-centred delivery to student-centred access. Together with the political, social and economic drivers of change, many (Altbach, 2002; Brookes and Beckett, 2013; Ernst & Young, 2012) see the digital technologies as the most disruptive, with the potential to have a far-reaching impact on the architecture and modes of delivery in universities of the future. The impact is summarised by Ernst & Young (2012, p. 9), who note that ‘Digital technologies will transform the way education is delivered, supported and accessed, and the way value is created in higher education ...’.
Consequently, leaders throughout the university sector are grappling with how to proceed in this uncharted territory of emerging digital technologies, the directions in which to lead
their institutions, the extent of the investments to be made and how to move forward in seeking an ‘unknown value add’. A summary of the dilemmas confronting universities internationally and nationally (Brookes and Beckett, 2013; Ernst & Young, 2012; Universities UK, 2012) includes:
• what directions to take in terms of infrastructure building for the future (e.g. more classrooms or fewer classrooms, larger classrooms or smaller classrooms, or high-level technology-equipped classrooms)
• how infrastructure will be funded in times of constrained funding (e.g. what priority will be given to technology in the ‘expenditure queue’ and what activities will need to be shifted to make way for new approaches to education delivery)
• the most effective strategies to adopt in the delivery of programs and courses via the use of technology (e.g. online courses for technology-savvy students, the development of massive open online courses (MOOCs) for positioning and reputation in the online space, the use of blended learning and flipped classrooms, and the gradual opening up of the world of learning through technology)
• how to effectively train academic staff to deliver their teaching via the new technologies, particularly in light of having technologically competent students in their classes, with high expectations of delivery standards.

Implications for UNSW

By tracing the drivers of change in the current international and national contexts of higher education, the discussion thus far has provided an insight into some of the most influential forces that have the potential to ‘shape the current and future architecture’ of universities and other tertiary institutions. The discussion lays the foundation for consideration of the flow-on effects at the institutional level – in particular, the impact on UNSW, its responses to
the drivers of change and strategies adopted, as well as the accomplishments and ongoing challenges – the focus of this book.

Accountability and quality assurance

UNSW, together with its senior management, Academic Board and academic community, has made a clear decision to set its own quality standards (as it has always done), which it will use as a blueprint for the delivery of its teaching and research agendas. It has set quality standards that are higher than the minimum standards required by TEQSA. The Academic Board has been given the responsibility for monitoring and overseeing maintenance of the UNSW standards. It is strongly believed that, with this approach of rigorous quality assurance mechanisms, UNSW will ensure the delivery of high quality education while meeting, and indeed exceeding, the standards set by TEQSA and other such agencies. Examples of these UNSW standards and their implementation (including the accomplishments and the challenges) are evident throughout the chapters of this book. The UNSW Assessment Project is a specific example of an initiative taken by the university to ensure quality through assessing learning outcomes across the disciplines, developing specific graduate attributes within disciplinary contexts, as well as ensuring the effectiveness of program delivery. The UNSW Assessment Project is indicative not only of a commitment to establishing university-wide standards but also of a commitment to attaining effectiveness in student learning and efficiencies in the workloads of academics.

Graduate capabilities and employability skills

To ensure that it produces highly capable graduates who will possess the relevant competencies to perform as effective members of society, employees and leaders (in global and local contexts), UNSW has embarked on several initiatives that are core to the learning and teaching agenda of the university. Some of these initiatives and examples are outlined in this chapter and the chapters that follow.


For example, parallel to embedding the UNSW graduate capabilities in the context of its related disciplines, the Faculty of Engineering’s engagement with the Engineers Australia accreditation process has enabled the systematic measurement of learning outcomes across its areas of specialisation and the demonstration of the skills and capabilities that its students have acquired. The engagement has also enabled national and international benchmarking of standards and quality across the specialisations in Engineering. The faculty’s determination to strengthen its approach to the development and assessment of graduate capabilities and work-ready skills led to its participation in an OECD project, the Assessment of Higher Education Learning Outcomes (AHELO) study, which seeks to identify, in cohorts across the world, not only what students know but what they can do. Further information on this is provided in Chapter 8. Yet another example of enhancing and measuring skills development and learning outcomes is the Australian School of Business and UNSW Canberra’s engagement in the Association to Advance Collegiate Schools of Business (AACSB) accreditation process, which sets the highest international standards for assessing students’ performance and continuous learning, as well as their professional skills development in the business disciplines. Once again, this external measuring of graduate capabilities, professional skills development and program learning outcomes is conducted in parallel with the embedding of a set of internally agreed graduate attributes relevant to the Business disciplines. The approach enables international benchmarking at the highest level, as discussed in Chapter 5. Perhaps the clearest evidence of UNSW’s approach to embedding academic and professional skills development in the context of the disciplines and measuring the effectiveness of learning outcomes is in the Faculty of Medicine.
Chapter 10 provides insights into the targeted training and assessment of essential professional skills in communication and ethics, often
referred to as ‘soft skills’ but fundamental to working as a health professional. Interesting, too, is the inclusion of a one-year in-depth research project, which is both unique to the medical program of study at UNSW and innovative in preparing graduates with higher order critical thinking and research skills that are transferable to advanced levels of study and career paths. Chapter 10 provides multiple examples of authentic assessments grounded in the dynamic complexities of real world professional dilemmas and practice. In addition to international benchmarking and assessment of learning outcomes, faculties and their disciplines have engaged, and continue to engage, in national benchmarking and the setting of standards. UNSW’s participation in the Go8 QVS standards and benchmarking initiative saw academic staff engage in the comparative grading of assessment tasks in courses in accounting, history, chemistry, economics and philosophy in 2011–2012. In 2013, UNSW participation in the QVS will include cross-institutional grading of assessment tasks in psychology, mathematics, English and Chinese/Mandarin. The initiative has enabled UNSW staff to engage in cross-institutional academic discussions around issues that arise out of the grading exercises and to identify areas for quality improvement and more effective attainment of student learning outcomes.

Dual awards and other UNSW initiatives

A unique feature of the UNSW approach to producing graduates for both the current work environment and future careers and job opportunities is the extensive offering of dual awards. This was achieved in a recent (2010–2012) major review of all undergraduate programs at UNSW, in an initiative called ‘Program Simplification’. Just how the review of undergraduate programs in the faculties and the UNSW Assessment Project intersected and the mutual benefits obtained can be seen in the chapters that follow; for example, in the program reviews in the Faculty of the Built
Environment (Chapter 6) and the College of Fine Arts (Chapter 7). The ‘Program Simplification’ project sought to streamline all undergraduate programs at UNSW and make it possible for students to enrol in multiple disciplines and attain dual awards across most faculties (e.g. Bachelor of Commerce and Bachelor of Fine Arts, Bachelor of International Studies and Bachelor of Media [Communication and Journalism], Bachelor of Science [Advanced] and Bachelor of Engineering), rather than being limited to more conventional combinations such as Bachelor of Arts and Bachelor of Laws or Bachelor of Commerce and Bachelor of Laws. Students have welcomed this approach and the success is evidenced in the increasing enrolments in dual award programs. The overall success in measuring learning outcomes across the disciplines is yet to be determined. Nevertheless, the strategic intent is to give UNSW graduates flexibility in developing disciplinary learning, graduate capabilities and professional skills in multiple disciplines. The approach ensures that students gain greater breadth of knowledge, are more innovative in choosing their careers and are ready to change jobs when a change of direction is required, especially in future employment markets. It is becoming increasingly evident that graduates will change their work and job preferences more frequently than in the past and that many of the careers that exist today will not be around in the next 10–20 years. Dual awards help to cater for these changing trends. To complement disciplinary learning, UNSW has also launched the UNSW Advantage program ‘to give its graduates a competitive edge and improve their career prospects and employability’ (Hobsons, 2012, p.428). The program aims to extend the student experience beyond ‘classroom learning’ and develop a wider set of graduate capabilities through 300–500 activities, including voluntary work in the community, internships, mentoring and opportunities for broader skills development.
These, and other student experiences at UNSW, not only signal the shift in focus from teacher-centred delivery to student-centred engagement, but also
the shift in focus from learning and teaching being largely ‘input’ oriented to being ‘output’ oriented. They also signal the shift in focus from assessing knowledge acquisition to assessing skills development as well, with the process of assessing graduate capabilities in the skills that matter in the professions (e.g. communication skills, interpersonal and intercultural skills, teamwork and emotional intelligence) often posing a real challenge to academics. Students also have the opportunity to enrol in a Diploma of Professional Practice, with courses such as Introduction to the Workplace, and with industry placements.

The UNSW equity agenda

Historically, UNSW has taken a lead in addressing the educational aspirations of students from non-traditional and disadvantaged backgrounds, beginning as far back as 1952 with the Colombo Plan, when it took in students from developing countries in Asia and equipped them with the academic and professional capabilities to contribute to the development of their emerging nations. It has retained that tradition. Relevant to the context of this discussion, UNSW has introduced several initiatives in the form of alternative entry pathways and preparation programs for disadvantaged students from diverse backgrounds. For example, ASPIRE is a UNSW equity initiative (in partnership with primary and secondary schools with disadvantaged cohorts of students) that challenges attitudes to university. UNSW provides workshops, campus visits and learning activities to build a greater awareness of what a university education entails, encourage students to think about options for the future and help them work towards achieving their potential. At the institutional level, we seek not only to increase the number of students from low socioeconomic status backgrounds enrolling at university but also to ensure that they graduate. There is also a well-established UNSW initiative, devised in collaboration with the Indigenous Student Centre, Nura Gili, which
targets both Indigenous and non-Indigenous students from low socioeconomic and other disadvantaged backgrounds for enrolment in pre-entry programs across several disciplines. Pre-entry programs are available to students aspiring to study in law, business, engineering, social work and medicine. An exceptionally successful initiative at UNSW is in the Faculty of Medicine, which has one of the highest enrolment numbers and progression rates in Australia for Indigenous students studying in the discipline. Each faculty sets its own targets in anticipation of UNSW reaching the government-set target of 20 per cent of undergraduate students coming from low socioeconomic status backgrounds by 2020. UNSW has a strong commitment to the enrolment and support of students from low socioeconomic and disadvantaged backgrounds and supports the government target. However, it does struggle with the lack of ongoing government funding to ensure the success of initiatives in this area. Programs and initiatives that it has instigated are supported mainly by the goodwill of the community, donations from the corporate sector, alumni fund-raising events for scholarships, cross-subsidisation by UNSW and the shrinking budget of the Deputy Vice-Chancellor (Academic) (DVCA). This is hardly a sound and sustainable way to embed support initiatives into the infrastructure of the university, but waiting for or anticipating adequate government funding for such initiatives would be pointless. Success at UNSW is not measured by funding acquired or, indeed, enrolment numbers but by graduation numbers.

Economic drivers

Like most universities in Australia (and internationally), UNSW is reliant on government funding for the implementation of its teaching and research agendas and for initiatives that it may want to introduce to advance its higher education agenda in a challenging and changing environment. Strategic planning is constrained by changing economic policies, by changes to funds allocated to
learning, teaching and research, and by the ongoing reduction in levels of funding, forcing the university to rethink its priorities. The link between student enrolments (domestic and international), revenue raising and budgets has led to a change in the mix of students in programs across the disciplines, with the more popular disciplines taking larger numbers of international students. A typical example is the Australian School of Business (the Business Faculty) at UNSW, with approximately 13,000 students (see Chapter 5). It is characterised by large lecture teaching and a very diverse student population, with approximately 30 per cent from international backgrounds. The large student numbers and international mix have implications for teaching and learning. The large intake of international students also cross-subsidises other faculties and in so doing helps to balance the UNSW budget. It is important to note, however, that not all decisions are made on the basis of economic issues. For example, at UNSW, there is a strong commitment to ‘internationalisation’ and student diversity on campus. For more than 50 years, at least 10 per cent of the student population has been international students, long before economic drivers were at play. An important factor to note in this context, too, is the role played by the LTPF, referred to earlier in this chapter. It is an indication of how government funding, when available, facilitates innovations in learning and teaching, university-initiated development of quality assurance systems (as opposed to government-regulated systems) and improvement in the student learning experience – evident throughout the chapters that follow. The Assessment Project that is the basis for this book was made possible by strategic funds allocated to UNSW as reward funds for its high-ranking performance when measured against student ratings in the CEQ and GDS. UNSW was awarded $6.65 million in 2007, $9.5 million in 2008 and $6.936 million in 2009.
The Vice-Chancellor of the university agreed to the allocation of these funds in their entirety to improving the quality of learning and teaching. It was these funds that enabled the DVCA to
share the funds with faculties for the appointment of a learning and teaching fellow in each faculty (full-time for five years), the appointment of a support staff member trained in education technologies in each faculty (part-time) and the implementation of the Assessment Project over a three-year period. Without the LTPF, the UNSW Assessment Project would not have been possible. Clearly, the cessation of the LTPF will have an impact on future major innovations at the institutional level and the overall quality improvement initiatives driven by the university.

Engagement with technology

Like most institutions in the higher education sector, UNSW is currently engaged in the debates surrounding the wider use of digital technologies. At the strategic planning level, the university is evaluating the benefits and challenges of engaging more widely with the technologies, the implications for program delivery, the cost of infrastructure and priorities in a constrained budget, the training of academics to acquire the appropriate technological capabilities and the balance between potential risks and the need to be technologically engaged or be left behind. Different chapters in the book draw attention to the increasing use of innovative technologies in the delivery of courses and programs and, importantly, in the management of assessment. An example is the use of ReView software in the Australian School of Business (Chapter 5) and the College of Fine Arts (Chapter 7), and the implications for effectiveness in managing assessment and efficiency in reducing the workloads of academics. Yet another example is the use of the learning management system, Moodle, to contribute to effectiveness and efficiency gains in assessment. The Faculties of Science and Engineering combined to develop a product that is described in Chapter 8. Other exciting developments, such as the use of e-portfolios, are emerging with the extended use of digital technologies and are bound to enhance both the quality and diversity of the student learning experience, including assessment. There
is no doubt, too, that there will be challenges, especially in finding the most appropriate assessment tools for large-scale assessment of all the required capabilities embedded in all UNSW programs.

Conclusion

Change and development at UNSW, as in most universities nationally and internationally, are being driven by many of the external forces outlined in this chapter: political, economic, social and technological. In seeking to manage the impact of the changes, set new institutional directions, and introduce alternative pedagogical directions and innovations, UNSW benchmarks its performance against peer institutions in Australia and abroad and sets its standards accordingly. To retain its autonomy, it engages in ongoing internal benchmarking and self-regulation with the support of its senior management, the division of the DVC (Academic), the Academic Board, faculties and the academic community. The Assessment Project, so thoroughly discussed in this book, is but one example of institutional-level standard setting, quality assurance and quality improvement, realistically presented with accomplishments, gaps and challenges. But, whatever the pathway chosen by UNSW and its individual faculties, the journey has proven that institutional-level initiatives can have tremendous impact in setting standards across a university such as UNSW and in collaboratively managing the drivers of change.

References

Altbach, PG, Gumport, PJ & Berdahl, RO (2005) American Higher Education in the Twenty-First Century: Social, political and economic challenges, Johns Hopkins University Press, Baltimore.
Bradley, D, Noonan, P, Nugent, H & Scales, B (2008) Review of Australian Higher Education: Final report, Department of Education, Employment and Workplace Relations (DEEWR), Canberra.
Brookes, M & Beckett, N (2013) Quality Management in Higher Education: A review of international issues and practice, Presentation to British Standards in Action (BSI), UK.
Campus Review (2013) ‘Labour slashes university funding’, Campus Review, 15 April. Available from . [Accessed April 2013.]
Commonwealth of Australia (2009) Transforming Australia’s Higher Education System, Commonwealth Government, Canberra.
Craven, G & Davis, G (2013) ‘Tertiary education agency TEQSA needs singular strategy’, The Australian, 3 July.
Department of Education, Employment and Workplace Relations (DEEWR) (2009) Measuring the Socio-economic Status of Higher Education Students, Discussion Paper, Australian Government, Canberra.
Ernst & Young Australia (2012) University of the Future: A thousand year old industry on the cusp of profound change, Report prepared by the Ernst & Young Research Team, Australia.
Gallagher, M (2010) Drivers of Policy Change: The accountability for quality agenda in higher education, The Group of Eight, Canberra.
Gibeling, J (2010) Post Graduate Program Review, Assessment and Accreditation in the United States, Paper presented at the Fourth Annual Strategic Leaders Global Summit: Measuring Quality in Post Graduate Education and Research Training, Council of Graduate Schools and the Group of Eight, Brisbane.
Gidley, J, Hampson, G, Wheeler, L & Bereded-Samuel, E (2010) ‘From Access to Success: An integrated approach to quality higher education informed by social inclusion theory and practice’, Higher Education Policy, 23(1): 123–47. Available from . [Accessed August 2013.]
Harris, KL & James, R (2010) ‘The Course Experience Questionnaire, Graduate Destination Survey, and Learning and Teaching Performance Fund in Australia’, in Public Policy for Academic Quality, Higher Education Dynamics, DD Dill and M Beerkens (eds), Springer, New York, 30.
Hobsons (2012) The Good Universities Guide, Hobsons, Melbourne.
Lane, B (2013a) ‘Quality regulator “at the extreme end”’, The Australian, 19 June.
— (2013b) ‘Regulator “too rich” for our good: Hilmer’, The Australian, 24 June.
Norton, A (2013) Mapping Australian Higher Education, Grattan Institute Report, Melbourne. Quality Assurance Agency (2013) Safeguarding Standards and Improving Quality in UK Higher Education. Available from . [Accessed July 2013.] Ramburuth, P & Hartel, C (2010) ‘Understanding and Meeting the Needs of Students from Low Socioeconomic Status Backgrounds’, Multicultural Education and Technology Journal, 4(3): 153–62. Salmi, J (2009) The Growing Accountability Agenda in Tertiary Education: Progress or mixed blessing?, World Bank Education Working Paper Series, No.16. Universities UK (2012) Futures for Higher Education: Analysing trends, Higher Education Report on ‘Meeting on the Challenges of the 21st Century’, UK Longer Term Strategy Network, London. Available from . [Accessed August 2013.]


2

The institutional context for change
Richard Henry, Stephen Marshall and Prem Ramburuth

The University of New South Wales was incorporated by an Act of the Parliament of New South Wales in Sydney in 1949 to teach and conduct leading research in scientific, technological and professional disciplines. UNSW is the only Australian research intensive university established with this focus, modelled on universities such as MIT in the USA and European technical universities such as the Berlin University of Technology. UNSW is a member of the Group of Eight, which comprises the top eight Australian research-intensive universities. It is also a member of the international university network Universities 21 and the only Australian university that is a member of the elite Global Alliance of Technological Universities, GlobalTech. The university’s aspiration is summarised in its 2011 document B2B Blueprint to Beyond: UNSW strategic intent in the following way: UNSW’s aspiration is to continuously improve our position as a leading research intensive university in the Asia-Pacific region, focusing on contemporary and social issues through defined strengths in professional, scientific and technological fields ... We seek to make a significant contribution to the development of knowledge, to learning and teaching, to our students, and to society. (p. 6)


Figure 2.1  UNSW organisational structure

[Organisational chart. The Council, led by the Chancellor, sits at the top; the President & Vice-Chancellor reports to it. Reporting to the President & Vice-Chancellor are the President of the Academic Board, the Rector of UNSW Canberra, the Vice-President & Deputy Vice-Chancellor (Academic), the Vice-President & Deputy Vice-Chancellor (Research), the Vice-Presidents for Finance and Operations, University Services and Advancement, and the Deans of the faculties of Arts and Social Sciences, the Built Environment, the Australian School of Business, the College of Fine Arts, Engineering, Law, Medicine and Science.]

UNSW has more than 50,000 students, of whom 20–25 per cent are international. There are more than 5000 staff and the annual budget of the university is approximately $1.5 billion. The organisational structure of UNSW is shown in Figure 2.1. The university is governed by a council of 15 members, representing university and community interests, and led by the Chancellor. The President and Vice-Chancellor (VC) is the senior executive officer of the university and reports to the University Council as well as being one of its members. The Academic Board is the principal academic body of the university. The Academic Board advises the VC and council on matters relating to teaching, scholarship and research, and makes decisions on delegation from the council. It provides advice on academic policy, approves courses and programs, furthers and coordinates the work of the faculties and supports teaching, scholarship and research. The Academic Board is led by an elected President, who is also an ex officio member of council. The university’s executive team, led by the President and
Vice-Chancellor, comprises the five Vice-Presidents, two faculty Deans by rotation and the President of the Academic Board by invitation. The five Vice-Presidents comprise two senior academics, namely the Vice-President and Deputy Vice-Chancellor (Academic) and the Vice-President and Deputy Vice-Chancellor (Research), and three professional staff Vice-Presidents, namely the Vice-President Finance and Operations, the Vice-President University Services and the Vice-President Advancement. There are eight faculties, each comprising a number of schools: Arts and Social Sciences, the Built Environment, Engineering, Law, Medicine, Science, the Australian School of Business and the College of Fine Arts (the last two being the names of the faculties of business and fine arts respectively). In addition, UNSW Canberra offers degrees in Humanities, Business, Science and Engineering. The Deans of the eight faculties and the Rector of UNSW Canberra all report to the President and Vice-Chancellor.

Institutional arrangements for governance and management of learning and teaching

Governance and management of learning and teaching at UNSW are shared responsibilities between the Academic Board, the division of the Deputy Vice-Chancellor (Academic) (DVCA), and the faculties. The Committee on Education (COE), a standing committee of the Academic Board, is responsible for assisting the board to oversee standards and for recommending policy related to learning and teaching. It comprises individuals elected to the Academic Board, the Associate Deans (Education) (ADEs) from each faculty and UNSW Canberra, the DVCA, the Pro-Vice-Chancellor (Students), the Pro-Vice-Chancellor (International), the Director of Learning and Teaching (DLT), the University Librarian and other senior managers from the division of the DVCA as ex officio members.


Two other important sub-committees of the Academic Board are the Undergraduate Studies Committee and the Postgraduate Coursework Committee. These bodies consider new program proposals in detail before they are considered by the Academic Board. Historically, the Board’s emphasis was on scrutinising new proposals rather than on program review. The faculties were expected to monitor and review their degree offerings. This worked reasonably well in the faculties where professional accreditation bodies undertook regular reviews; for some degrees, the approach was more ad hoc. Accordingly, from early 2012, the VC, on the recommendation of the Academic Board, determined that all coursework and research programs should be reviewed at least every five to seven years. Faculties are responsible for ensuring that academic program reviews take place and for reporting the outcomes to the Academic Board. The DVCA’s portfolio is a broad one. The DVCA is responsible for domestic and international students from recruitment to graduation (the student experience in the broadest sense) and for the needs of academic staff (recruitment, staff development, promotions, and learning and teaching). Senior staff who report to the DVCA include the Pro-Vice-Chancellor (Students), the Pro-Vice-Chancellor (International), the Director of Learning and Teaching (DLT), the University Librarian and the Director of Nura Gili (responsible for Indigenous students and studies). At a practical level, the DVCA portfolio is key to the implementation of Academic Board proposals. For example, the PVC (Students) is responsible for the management of proposals and portfolio system (MAPPS), an online academic proposal service that allows staff to develop course and program proposals and submit them online for electronic endorsement. MAPPS has replaced paper-based systems for new and revised courses, programs and streams.
Similarly, the DLT provides key input to decisions about learning and teaching in the broadest sense, as well as helping in implementation. UNSW has an annual cycle of faculty review of learning and teaching (FRLT). This is coordinated by the DLT, with reports
from each ADE, which are reviewed by a panel of internal and external academic staff. ADEs have, for many years, been able to read the FRLT reports from all faculties. The DVCA exerts influence in many ways other than via direct reports. In practice the DVCA has many functions similar to a provost. Although the deans and rector report to the ViceChancellor, they all work closely with the DVCA, especially on operational matters and on implementation of strategy. The DVCA also works closely with the President of the Academic Board. They meet regularly, both on a formal and informal basis. Along with the relevant dean, the DVCA sits on all selection committees for the appointment of Heads of School (the organisational unit below faculty). The DVCA approves all promotions of academic staff up to and including associate professor, is chair of the university promotions committee to associate professor and is a member of the university promotions committee to professor, which is chaired by the President and Vice-Chancellor.

Faculty arrangements for governance and management of learning and teaching

Organisational arrangements for governance and management of learning and teaching in each faculty involve a combination of committees charged with overseeing the implementation of standards and policy related to learning and teaching (e.g. faculty education committees, faculty assessment review groups), together with individuals such as ADEs and Heads of School. These committees and individuals are responsible for developing, implementing and evaluating local strategic responses to institutional and faculty priorities for learning and teaching within these standards and policy frameworks. The ADEs each report to their faculty’s dean and are responsible for the quality, development and oversight of their faculty’s teaching programs. While ADEs carry these responsibilities, they generally do not have authority to direct, manage or deploy resources
(financial, physical or human) to support the development, implementation, evaluation or revision of learning and teaching programs, staff, systems or resources. To fulfil their responsibilities, ADEs exercise leadership largely via persuasion and influence. Since UNSW is a highly federated institution, faculty Deans exercise significant independence and influence in the way faculty resources are deployed. Organisational arrangements for supporting the core business of teaching, research and outreach vary significantly from faculty to faculty, and indeed from school to school, even within the same faculty. The relative priorities placed on each aspect of a faculty’s work, and the resources available to the faculty, mean that the organisational, physical, technological and administrative infrastructure to support learning and teaching varies considerably between faculties and schools. In some faculties, where a strong priority has been placed on developing and maintaining high quality, innovative learning and teaching, significant internal infrastructure has been developed and maintained, involving dedicated leadership, management and enabling positions as well as programs, systems and services to support staff and student learning and teaching. In others, where a lesser priority has been placed on this aspect of core business, relatively little dedicated infrastructure to support ongoing review and development of learning and teaching has been created. As a consequence, ADEs and Heads of School in different faculties enjoy vastly different levels of support to engage in systematic, ongoing cycles of evidence-based learning and teaching development.

Central support for learning and teaching development

To address this variation, the university maintains, among other things, a number of central support units that provide leadership and support to faculties in their efforts to develop and deliver high quality teaching and learning experiences for their students. For
example, the Learning Centre provides learning support to students. The library supports students and staff to identify, engage with and share content relevant to their learning needs and interests. Library staff also maintain and facilitate a range of programs aimed at further developing students’ academic literacy capabilities. Through the provision of an increasing number of formal and informal learning spaces, the library also facilitates faculties’ efforts to develop students’ capabilities to work both independently and collaboratively in their learning. A central learning and teaching unit (LTU) is maintained to provide expert advice and support to faculties and staff in developing institutional capacity and individual capability for the development of learning, teaching and curriculums. This is achieved through a combination of targeted staff development programs aimed at developing the knowledge, skills and capabilities of the university’s staff in relation to learning and teaching, such as the Foundations of University Learning and Teaching (FULT) program; strategic projects aimed at developing, reviewing and/or revising the university’s infrastructure and resources to support evidence-based learning and teaching development, such as projects to review and develop the instruments used to collect feedback from students on their experience of learning and teaching at UNSW; and the provision of a range of services aimed at developing, maintaining and supporting staff to use the university’s centrally provisioned learning and teaching infrastructure and resources, such as the university’s learning management systems, educational media systems, course and teaching evaluation and improvement (CATEI) system and services. In general, the university’s central support units that support learning and teaching are relatively small for an institution the size of UNSW. 
However, they have been established with the express intention of complementing and providing support for the work undertaken in faculties to fulfil their learning and teaching development responsibilities, rather than as a supplement or substitute for that work.


In addition to these curriculum staff and student focused support units, a number of other university offices and divisions support its learning and teaching mission. For example, IT Services works with each institutional business domain, including the academic domain, to develop and maintain the IT systems and services necessary to enable the range of activities associated with curriculum design and delivery. IT Services manages and administers the university’s identity management systems, student and academic administration systems, learning management systems, and educational media systems. Student and Academic Administration develops and maintains the business processes associated with capturing, maintaining, deploying and reporting the data (e.g. admission, enrolment, timetabling, assessment and examination data) necessary to manage student and staff engagement in learning and teaching. Facilities Management, along with Venues and Events, develops and maintains the physical spaces and audio-visual equipment used to teach and support learning.

UNSW’s approach to managing and coordinating learning and teaching development

In keeping with the highly federated nature of the university, UNSW manages and coordinates the core business of the university, including learning and teaching, through the use of key performance targets (KPTs). These KPTs establish for each senior member of the university’s staff their principal goals and priorities for each year. The University Council sets KPTs for the Vice-Chancellor (VC). These are then cascaded down through the senior management team, with the executive team, the deans and their direct reports negotiating and agreeing to KPTs relevant to their role and responsibilities. Between 10 and 20 per cent of the salary of these senior staff is contestable, depending upon whether or not the KPTs are achieved and whether or not the individual acts according to the university’s code of conduct. Once a month the
VC meets with each Dean and the Rector of UNSW Canberra and reviews their progress in realising their agreed KPTs. Twice a year the VC holds strategy meetings with each Dean and the Rector of UNSW Canberra, attended by both DVCs (the DVCA and DVCR), to discuss each faculty’s strategies for achieving its KPTs and to identify how the divisions of the DVCA and DVCR might assist in implementing those strategies. This approach provides faculties with the independence and flexibility they need to develop and implement strategies, in response to institutional aspirations and goals, that are appropriate to their context and needs. However, such an approach can be inefficient for the institution as a whole, because independent planning and action within faculties can lead to duplication. To address this issue with respect to learning and teaching development, the DVCA and the DLT meet monthly with the ADEs from each faculty to plan, coordinate and share information on the implementation of learning and teaching strategies, and to identify possible areas of collaboration and ways of eliminating duplication of effort.

History and experience of change at UNSW

The current VC, Fred Hilmer, started at UNSW in 2006. He recognised that it was important for the university to be clear about what it was and what it wanted to be. Part of this thinking is reflected in the B2B Blueprint to Beyond document, which has had a number of iterations. UNSW brands itself as the university that ‘never stands still’ to reflect our strategic aspiration ‘to continuously improve our position as a leading research intensive university in Australia and a peer in good standing with the best globally, with strong traditions of excellence, innovation and social justice’ (The University of New South Wales, 2011, p.3). This is not only a key feature of our brand but also an accurate characterisation of the nature of the university since its establishment. When the present VC arrived in 2006, he felt that UNSW was performing well in teaching. Excellent foundation work had
been undertaken in previous years and this was built on. As mentioned in Chapter 1, the Australian Government had established a Learning and Teaching Performance Fund (LTPF) and allocated reward funding to universities on the basis of superior performance in the results of a survey of graduates about their experiences as students (a course evaluation questionnaire) and a survey of outcomes, such as rates of employment and starting salaries (a graduation destination survey). Initially the results for UNSW were disappointing, but the focus on teaching saw UNSW consistently ranked in the top tier of Australian universities. The VC also agreed that the LTPF money received by UNSW for its superior performance be quarantined for strategic support of learning and teaching initiatives. The intended message was to continue to perform at the highest level in teaching and learning. On the other hand, the VC felt that UNSW was underperforming in research and saw the need to raise performance in this area. He raised staff and community awareness of the need for the institution to focus more on its research efforts, to develop the research capability of its staff, to increase competitiveness in applications for external funding, to significantly lift the number and quality of our research outputs (including both publications and higher degree research (HDR) students) and to improve our research infrastructure. A range of strategies, including the development and implementation of incentives for higher quality research performance, indicators and systems to monitor research productivity, and a focus on research productivity, quality and impact in annual performance development discussions between staff and their supervisors, have enabled the university to achieve this goal. The Australian Government, via the Australian Research Council, administers Excellence in Research for Australia (ERA).
ERA evaluates the quality of the research undertaken in Australian universities against national and international benchmarks. The first full round of ERA was undertaken in 2010 and the results published in 2011. UNSW was rated ‘at, above or well above’ world standards in all broad
fields of research. It was one of only four Australian universities to achieve this rating and was the state’s top-ranked university. UNSW was ranked fifty-second in the 2012 QS World University Rankings and eighty-fifth in the 2012–2013 Times Higher Education World University Rankings. Although both these indices purport to measure teaching and research performance, research drivers appear to carry more weight. Some staff at UNSW perceived that the focus on research came at the expense of teaching, and it was a common myth that the VC was only interested in research. In the Australian system, research funding from the major funding agencies, the Australian Research Council and the National Health and Medical Research Council, is never sufficient to fund the true cost of the research. Successful research grants are the ‘gift that keeps on taking’: inevitably, teaching subsidises research. So UNSW’s success in lifting institutional research performance was accompanied by the need to grow institutional revenue to fund the gap between the extra research income and the real costs of conducting the research. The university took advantage of the federal government loosening the caps on the number of domestic students each university could enrol, and of its popularity with international students, growing its student population from 28,400 in 2005 to 37,000 in 2010. The combined pressures of increased teaching loads and research expectations led to growing concern among staff about unsustainable academic workloads and a perception that effort put into research was more valued than effort put into teaching.

The genesis of UNSW’s Assessment Project

At a meeting of the university’s executive in 2010, the VC raised the issue of academic workloads, an issue of growing concern to many staff, and flagged his intention to explore ways of reducing workloads. In subsequent discussions between the DVCA and the DLT, it was agreed that the assessment of students’ work constitutes


Figure 2.2  Scenario 1: Assessment of learning

Learning objectives:
• Students will understand the scientific concepts that underpin the diverse theories pertaining to climate change
• Students will be able to effectively critique the strengths and limitations of climate change arguments from an evidence-based approach
• Students will be able to formulate a personal view of the climate change debate using evidence
• Students will be able to effectively employ technology to communicate their understanding and opinions
• Students will be able to work effectively in a team

Assessment strategy (cohort of 100 students):
T1 – Mid-term knowledge and understanding test. Student workload: study for test 5 hrs; sit test 1 hr. Staff marking load: design test 2 hrs; assess and grade 100 students × 15 min = 25 hrs.
T2 – Article critique (750 words). Student workload: read article 2 hrs; write critique 6 hrs. Staff marking load: locate and select articles 4 hrs; assess and provide feedback on critiques 100 students × 20 min = 33.3 hrs.
T3 – Essay (2500 words) comparing two different perspectives on climate change, identifying the strengths and limitations of each and using evidence to justify the positions put forward; the essay must conclude with a personal, evidence-based opinion. Student workload: research essay 12 hrs; write essay 12 hrs. Staff marking load: design assessment rubric 1 hr; assess, grade and provide feedback 100 essays × 40 min = 66.7 hrs.
T4 – Contributions to tutorials and a group blog. Student workload: posts to the blog 12 hrs. Staff marking load: read and assess blogs 100 students × 15 min = 25 hrs.
T5 – Examination (2 hrs). Student workload: revision for exam 12 hrs. Staff marking load: develop exam 4 hrs; mark exam and record results 100 students × 30 min = 50 hrs.

TOTAL student workload: approx. 62 hrs. TOTAL staff marking load: approx. 211 hrs.

a significant proportion of the teaching workload of an academic, and that by decreasing the time that academic staff devote to assessment, a meaningful reduction in overall teaching load might be achieved. This idea was deemed to have merit by the VC, who set the DVCA the task of improving the efficiency of student assessment while maintaining or improving the quality and effectiveness of assessment. At a forum in March 2010 attended by Heads of School and the Vice-Chancellor’s Advisory Committee (including faculty Deans and other senior executives), the DLT made a presentation


Figure 2.3  Scenario 2: Assessment for learning

Learning objectives: as for Figure 2.2 (understanding of the scientific concepts underpinning climate change theories; evidence-based critique of climate change arguments; formulation of a personal, evidence-based view; effective use of technology to communicate understanding and opinions; effective teamwork).

Assessment strategy (cohort of 100 students):
T1 – Each tutorial class is divided into four groups. Each group reviews a number of research-based arguments regarding a particular aspect, feature or location of climate change and compares these with media reports and popular media treatments. Student workload: research 5 hrs; group meetings 5 hrs. Staff marking load: identify and develop resources 12 hrs.
T2 – Students in each group are asked to (a) synthesise their learning from this research and communicate it in a 14-minute presentation to the class; and (b) collectively prepare a website designed to educate a specific group in society about climate change. The site should indicate the contributions made by each member of the team. Student workload: prepare class presentation 3 hrs; design website 12 hrs. Staff marking load: assess class presentations 20 presentations × 30 min = 10 hrs; assess websites 20 sites × 30 min = 10 hrs; assess individual contributions 100 students × 15 min = 25 hrs.
T3 – Students individually prepare a 750-word personal learning statement on climate change with reference to the literature. Student workload: develop personal learning statement 3 hrs. Staff marking load: assess and provide feedback 100 students × 15 min = 25 hrs.

TOTAL student workload: approx. 28 hrs (a saving of 34 hours). TOTAL staff marking load: approx. 82 hrs (a saving of 129 hours).
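The workload arithmetic in the two scenarios can be checked with a short script. This is a sketch only: the task names and timings are transcribed from Figures 2.2 and 2.3, while the `staff_hours` helper and the grouping of marking items are our own illustrative assumptions, not part of the original presentation.

```python
def staff_hours(tasks):
    """Total staff hours for a scenario: fixed design/setup hours
    plus marking time (items to mark x minutes per item / 60)."""
    return sum(fixed + items * mins / 60 for _, fixed, items, mins in tasks)

# (task, fixed design hrs, items to mark, minutes per item) -- from Figure 2.2
assessment_of_learning = [
    ("T1 mid-term test",      2, 100, 15),
    ("T2 article critique",   4, 100, 20),
    ("T3 essay",              1, 100, 40),
    ("T4 blog contributions", 0, 100, 15),
    ("T5 examination",        4, 100, 30),
]

# -- from Figure 2.3
assessment_as_learning = [
    ("T1 group review (develop resources)", 12,   0,  0),
    ("T2a class presentations",              0,  20, 30),
    ("T2b group websites",                   0,  20, 30),
    ("T2b individual contributions",         0, 100, 15),
    ("T3 personal learning statement",       0, 100, 15),
]

s1 = staff_hours(assessment_of_learning)   # approx. 211 hrs
s2 = staff_hours(assessment_as_learning)   # approx. 82 hrs
print(f"Scenario 1: {s1:.0f} hrs; Scenario 2: {s2:.0f} hrs; saving: {s1 - s2:.0f} hrs")
```

Run as written, the totals reproduce the figures’ approximate 211 and 82 staff hours, a saving of about 129 hours for the cohort of 100 students.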

to illustrate how this might be achieved. Underpinning the approach was a shift in the place of assessment in student learning: from what might be described as ‘assessment of learning’, where assessment is typically positioned independently of, and subsequent to, designated learning activities, to ‘assessment as learning’, where assessment tasks form the basis of the learning activities in which students engage. Two scenarios were used to illustrate how the learning outcomes of a hypothetical course might be developed
and assessed. In the first scenario (see Figure 2.2), teaching and assessment were conventional, with assessment following on from a series of designated learning activities. In the second scenario (see Figure 2.3), learning activities doubled as assessment tasks and integrated features such as group work, peer review and the use of standards-based assessment rubrics. Each scenario modelled what the typical workload implications might be for both staff and students. This was used as a basis for the argument that a net workload benefit could be gained from the adoption of an ‘assessment as learning’ approach to curriculum design. The response to the DLT’s suggestion that implementing such an approach to assessment across the university could benefit academics’ workloads was polite but unenthusiastic. However, both the VC and the DVCA were convinced, and the DVCA became a champion for a whole-of-institution educational change project aimed at ‘improving the efficiency while maintaining the quality of assessment’. Clearly this would be a major challenge. The real work needed to occur in the faculties. The ADEs were in similar positions but were more accustomed to managing learning and teaching than leading major change. Deans and Heads of School did not want to lose the research momentum that had been achieved. The quality of student intake had never been higher and student demand was very strong. Was there really a need for change? Would there be polite and respectful conversations between Deans and the DVCA but no real progress on the Assessment Project? The DVCA and the DLT needed a clear strategy.

Reference The University of New South Wales (2011) B2B Blueprint to Beyond: UNSW strategic intent. Available from . [Accessed April 2013.]


3

The UNSW approach to improving assessment
Stephen Marshall, Richard Henry and Prem Ramburuth

Rationale

A number of factors influenced the university’s approach to improving the efficiency and effectiveness of assessment as a means of reducing academic workload. These included a range of assumptions about the nature of the changes required; the theories of change that underpinned the planning and decision making of the key stakeholders involved in determining the university’s approach, including the Deputy Vice-Chancellor (Academic) (DVCA), the Director of Learning and Teaching (DLT) and the Associate Deans (Education) (ADEs); and the nature of the contexts in which these changes were to be effected.

Assumptions

Central among the assumptions underpinning UNSW’s approach were beliefs that improving the efficiency and effectiveness of assessment might require:
• program and course redesign
• revision of existing assessment tasks and resources
• development of new assessment tasks and resources
• integration of revised and/or new assessment tasks and resources into learning and teaching
• (re)development and deployment of new and/or revised tools and technologies to support assessment
• development and implementation of new policies, procedures and guidelines for assessment
• provision of professional development to enable staff to develop the knowledge, skills and capabilities necessary to improve the efficiency and effectiveness of assessment
• strong education leadership to advocate for, sponsor and lead the change process required at the institutional, faculty/school and program/course levels.

The expectations of the President and VC were that, as far as possible, reductions in academic workload would be achieved across all faculties; that a shift in the design of assessment from ‘assessment of learning’ to ‘assessment as learning’ would realise the required improvements in the efficiency and effectiveness of assessment; and that not all assessment practices throughout the university would need to change, only those that were inefficient or ineffective. On this basis, the DVCA and the DLT planned a whole-of-institution, multi-year, multi-focal, evidence-based, contingent approach to achieve these objectives.

A whole-of-institution approach

A whole-of-institution approach was deemed to be necessary not just to ensure that the intended outcomes of the project could be achieved in all faculties, but to create the capacity for effective organisational learning (Senge, 1990), whereby benefits realised in one part of the institution might be realised in others through ongoing sharing of information about local strategies and approaches. Given the highly federated nature of UNSW, most of the institution’s previous efforts at educational innovation had taken place


within individual faculties. Program and course review and revision had typically been a process undertaken within an individual faculty or between a small number of faculties, usually two, where dual award programs were involved. Such approaches to review and innovation have meant that the insights and benefits gained through these processes remain local, with little, if any, impact being felt more widely throughout the institution. Given the goal of improving the efficiency and effectiveness of assessment practices throughout the entire university, adopting an approach to innovation and change that made learning and teaching development a public rather than private activity was deemed to be important. Thus our approach was specifically designed to facilitate sharing and learning throughout the institution as a whole.

A multi-year approach

A multi-year approach was deemed to be necessary to ensure that sufficient time was available to undertake the many tasks required to initiate, implement and institutionalise the changes in assessment practice required, at the multiple levels of the institution (Fullan, 2003). For example, at the institutional level, it was expected that time would need to be available to review, and where necessary revise, the organisational, administrative and technical infrastructures that enable or support assessment practices throughout the institution (e.g. to review and revise assessment policy and procedures, support for staff development related to assessment, the IT systems enabling online assessment or the management of grades). At faculty and school levels it was anticipated that time would be required for faculties and schools to determine firstly, what their response to this challenge from the VC would be and secondly, how they would realise this response. Faculties and schools would need time to establish the organisational arrangements necessary to complete the work and to deploy the staff and resources required. In some faculties and schools where existing structures and processes to support curriculum innovation and change were


well established, the time required to do this was expected to be short. In others, where little organisational or administrative infrastructure was maintained to support this type of innovation, this would require re-prioritisation and possibly re-deployment of scarce resources, a process that would involve lengthy discussion and debate. Time would be required to enable these discussions to take place. At the program and course levels, time would be required for staff to review their assessment practices; to determine the ways in which these practices might be changed to improve their efficiency and effectiveness; and to design, develop and integrate into their programs and courses any new resources, tasks or activities to enable and/or support their efforts. The provision of staff development and other resources to help individuals to re-conceptualise assessment and explore the ways in which their practices might change was also deemed to be critical to change at this level. Thus, time to plan, develop, implement and disseminate a range of staff development opportunities and resources to support the necessary conceptual change was also deemed to be necessary.

A multi-focal approach

A multi-focal approach was thought to be required given the wide range of factors that influence the design, development, implementation and evaluation of assessment, including, as Marshall et al. (2011) have observed (see Figure 3.1), the design of curriculums and pedagogy; staff knowledge, skills and capabilities in relation to assessment; the role(s) students can play in assessment; the organisational, administrative and technological environments in which assessment is practised; and the nature and process of educational change itself. According to Fullan (2003), there are both objective and subjective dimensions to educational change that must be addressed if efforts to innovate and change educational practices are to be


Figure 3.1  Focuses for change in reforming assessment practice

Adapted from: SJ Marshall, J Orrell, A Cameron, A Bosanquet and S Thomas (2011) ‘Leading and Managing Learning and Teaching in Higher Education’, Higher Education Research & Development, 30(2): 87–103.


successful. ‘What’ is changed is important but equally important is ‘what the change means’ to those who are expected to change their practices. The subjective nature of change demands that as much effort be placed on assisting staff and students to develop their understanding of how and why their practices need to and can change (i.e. on leadership of the change) as is placed on changing the structures, systems, technologies and business processes associated with the area of practice that is the focus for change (i.e. on managing the change). In our efforts to reduce academic workload and improve the efficiency and effectiveness of our assessment practices, there were clearly a number of tangible, objective elements of our current


learning and teaching practice and infrastructure that would need to be addressed. Assessment is an integral part of teaching and learning. It represents a significant proportion of the learning and teaching activity associated with any program or course. The role of assessment in learning and teaching is reflected in the design of the curriculum associated with a program or course. Therefore, efforts to improve the efficiency and effectiveness of assessment necessarily involve the review and revision of curriculums – their design, inherent pedagogies, learning activities and resources. How assessment is practised is as much a function of the organisational context in which it is practised as it is of curriculum and pedagogical design. Organisational context and infrastructure not only enable assessment but also determine what is possible and what is not. As Bolman and Deal (2003) have observed, a number of different lenses can be used to examine the impact of different aspects of organisational context on efforts to effect educational change. These include:
• A structural lens that brings attention to the impact of organisational roles, relationships, business processes, administrative systems and services, technologies and spaces (both real and virtual) on efforts to effect educational change.
• A human lens that alerts us to the impact of the knowledge, skills, capabilities, needs and interests of those involved on the process and outcomes of educational change.
• A political lens that alerts us to the influence that different approaches to the exercise of power and authority within an organisation can have on both the process and outcomes of educational change.
• A cultural lens that alerts us to the influence of taken-for-granted normative values, beliefs and interests on educational change.
Through the application of these lenses to an analysis of the


organisational context in which we were attempting to improve the efficiency and effectiveness of assessment practices at UNSW, it became apparent that our efforts may need to focus on reviewing and revising the following tangible and objective elements:
• curriculum (learning outcomes, assessment tasks, grading structures, feedback mechanisms and the like)
• governance and management of learning and teaching (including the roles, responsibilities and business processes of all key stakeholders and committees involved in the assurance and improvement of efficiency and effectiveness of assessment)
• administrative processes and infrastructure for assessment (including UNSW’s assessment policy and procedures and its arrangements for examinations)
• IT systems and services to support assessment (including the appropriateness and effectiveness of its learning management systems; student and academic administrative systems; and applications to support design, development, implementation and evaluation of assessment processes)
• real and virtual spaces to undertake assessment (including UNSW’s lecture theatres, tutorial rooms and other spaces used for assessment and examination purposes)
as well as the intangible and subjective elements associated with:
• staff knowledge, skills and capabilities (including those associated with the design, development, implementation, evaluation and revision of assessment practices)
• the exercise of power and authority in relation to assessment and educational change (including the ways in which those in formal positions of responsibility for assessment – DVCA, President of the Academic Board, DLT, PVCs, Deans, ADEs, HOS (Heads of School), program directors, course convenors – exercise their power and authority)
• organisational culture that influences assessment processes and attempts to change them (including taken-for-granted notions


of teaching, assessment, grading and student and staff roles in relation to assessment).
Further, any effort to improve the efficiency and effectiveness of assessment practice by reforming one of these factors in the absence of a consideration of the impact on, or change required in, any of the others was deemed to be a deficient approach and one that was unlikely to lead to the desired improvements in efficiency and effectiveness of assessment. Thus, a multi-focal approach to change that remained alert to the potential impact of each of these factors on the nature and process of the change that we were attempting was thought to be essential.

An evidence-based approach

As a research-intensive university that places great value on rigour, scholarship and evidence-based practice, the university determined that whatever actions were taken to reform its assessment practices needed to be informed by evidence of the need to change, scholarship on how to design and implement efficient and effective assessment, and evaluative data collected and analysed regularly to determine whether our efforts were indeed realising our goals. It was believed that staff would be more likely to actively engage in efforts to reform assessment practices if they were provided with evidence that their own practices needed to be reformed, the scholarship necessary to guide and substantiate proposed changes, and the data to show that their efforts to reform their assessment practices were indeed achieving their desired outcomes. Consequently, the university determined that it would encourage and adopt a critical reflective practice approach to improving efficiency and effectiveness of assessment (see Figure 3.2). This would involve:
• mapping and describing its current assessment practices
• critically analysing these practices with a view to determining


the assumptions about teaching, learning and assessment inherent within them
• evaluating and confronting these assumptions and approaches to assessment to determine whether they were consistent with the efficient and effective ‘assessment as learning’ approach the university desired
• reconceptualising its approaches to assessment
• planning and determining how the university would change its current assessment practices to ensure that they more appropriately reflected these reconceptualisations
• implementing these proposed changes
• monitoring the impact and outcomes of these changes to ensure that they aligned with the intended outcomes and, where they did not, taking corrective action to ensure that the desired outcomes were institutionalised.

Figure 3.2  UNSW’s approach to assessment reform

[The figure depicts a continuous cycle of seven phases: Map and DESCRIBE current practice → Determine the assumptions that INFORM current practice → Evaluate and CONFRONT these assumptions in light of desired outcomes → RECONCEPTUALISE practice in accord with desired outcomes → Plan and INITIATE change → IMPLEMENT desired change → INSTITUTIONALISE desired change → back to Map and DESCRIBE current practice.]


The need for the first four phases of this approach was based on Smyth’s (1986) argument that in order to fully engage in critical reflective practice it is necessary to name that practice, to identify the assumptions informing that practice and to challenge and recast these assumptions so as to develop the foundations on which practice can be reconceptualised. The final three phases of this approach recognised Fullan’s position that ‘change is a process, not an event’ (2003, p.49); it comprises a combination of actions, not necessarily conducted in a linear and sequential way, aimed at initiating the change, implementing the change and ensuring its institutionalisation or continuation.

Contexts for change Bolman and Deal (2003) have observed that in all organisations business processes and practices evolve to reflect their normative philosophies, values and beliefs. In highly federated organisations like universities, different business processes and practices evolve in each faculty, sometimes even each school, to reflect the differing philosophies, values and beliefs that are embraced by the disciplines that comprise them. As Becher and Trowler (2001) have noted, the nature of intellectual enquiry and the cultures of different disciplines have meant that an array of ‘academic tribes and territories’ have emerged within higher education institutions. UNSW is no exception. Two areas of practice in which these cultural differences between disciplines are evident are assessment and faculty/school based approaches to educational change or improvement. In faculties such as medicine, business and engineering, where the quality of academic programs and students’ achievement of specified learning outcomes have long been the subject of close scrutiny by external accreditation agencies, formal systematic processes for reviewing the nature, quality and impact of assessment practices on student learning have, to various extents, been integrated into the everyday


business processes of the faculties or schools concerned. However, in faculties and disciplines where there has been no regular, formal, external requirement for review of assessment practices to occur, relatively little systematic review of assessment, beyond monitoring the outcomes of local assessment practices on the distribution of students’ grades at the end of each assessment period, has taken place. As a result, considerable variation exists between faculties and schools at UNSW in relation to both their institutional capacity and their staff capability to critically review and revise their assessment practices. Consequently, in determining how the university might improve the efficiency and effectiveness of assessment, it was agreed that ‘one approach would not fit all’ and that these differences in capacity and capability would need to be accommodated. Faculties and schools would need to be given the opportunity to interpret this challenge for themselves, to identify where their current assessment practices might be improved, to determine how they would go about improving assessment, and to take whatever action was necessary to implement their strategy and report on their progress.

Instruments used to effect change

To create such an environment for change, the DLT – with the support of the DVCA – determined that a range of different policy instruments would need to be deployed. These included:
• mandates
• inducements
• dissemination of information
• capacity-building strategies
• systems-changing strategies.
How and why each of these particular policy instruments was used is discussed briefly below.


Mandates

McDonnell and Elmore define mandates as ‘rules governing the actions of individuals and agencies’ (1987, p.138); Firestone and Corbett suggest that they are ‘rules or regulations that specify what shall or shall not be done’ (1988, p.325). Mandates are introduced to create uniformity of behaviour or at least to reduce variation in the behaviour of individuals or groups to some tolerable level (McDonnell and Elmore, 1987). At times mandates may take the form of formal legislation or policy, at other times the key goals or outcomes of strategic institutional projects and/or the key performance targets (KPTs) set for institutional executives and their staff. UNSW’s President and VC intended all faculties to address concerns regarding increasing academic workload and, given that there are myriad possible ways in which different individuals or groups might approach this problem, we believed that it would be necessary to place some clear parameters around what was required and how it needed to be achieved. To this end, a three-year, whole-of-institution project was established, in which all faculties were expected to develop and implement measures to reduce academic workload by improving the efficiency and/or the effectiveness of their assessment practices. For each of the three years 2010, 2011 and 2012, the university’s goals were written to reflect this aspiration. This left no doubt that the University Council, whose responsibility it is to approve the university’s goals on an annual basis, was holding the President and VC accountable for developing and implementing a strategy to meet this goal. While neither the VC nor his executive team nor faculty Deans were expected to personally produce material that would be used to assess student learning, the goal was set to ensure that there were regular conversations between the VC and the Deans, and between the VC and the DVCA, about progress towards this goal.

To further ensure this common focus among faculties on improving efficiency and effectiveness of assessment, the VC


included ‘improving the quality and efficiency of student assessment’ in the KPTs of the DVCA and the deans in both 2011 and 2012. This KPT was subsequently set within faculties for the ADEs, Heads of School, and other staff as appropriate. The DVCA ensured that for each of these years, one KPT for the DLT was ‘improving quality and efficiency of student assessment and feedback across programs and within courses’.

Inducements

Inducements are ‘transfers of money to individuals or agencies in return for the production of goods and services’ (McDonnell and Elmore, 1987, p.138). They provide financial incentives to help individuals and/or groups change, or to help them continue a practice that is of value to both them and the individual or agency offering the incentive. ‘Inducements assume that individuals and agencies vary in their ability to produce things of value and that the transfer of money is one way to elicit performance’ (ibid., p.139). Given the wide variation in faculties’ capacity to respond to the university’s goal of improving the efficiency and effectiveness of assessment described earlier, the DVCA and the DLT agreed that the provision of one or more inducements to encourage faculties and their staff to engage in the ‘Assessment Project’, as this UNSW initiative had become known, would be essential if the university were to realise broadly, throughout the institution, the desired improvements in academic workload and in the quality and efficiency of student assessment. To this end, the DVCA agreed to continue to provide faculties with funding to maintain the position of learning and teaching fellow, which had been created in each faculty in 2008 to support learning and teaching development, with funds from the Commonwealth Government’s Learning and Teaching Performance Fund (LTPF). To receive Assessment Project funds, however, faculties were required to ensure that the work undertaken by their learning and teaching fellow during the period of the project was


11/11/13 3:35 PM

targeted at helping the faculty develop and implement its strategy for improving the efficiency and effectiveness of student assessment. In addition to this conditional salary supplementation, the DVCA and the DLT provided faculties with the opportunity to gain access to additional strategic LTPF funds of $100,000 per year in 2010, 2011 and 2012 to support the implementation of their local strategies to improve the efficiency and effectiveness of assessment practices. To be eligible for these funds, faculties were required to:
• participate in the Faculty Review of Learning and Teaching (FRLT) process, which involved them in annual cycles of:
  – describing their quality assurance and improvement processes for learning and teaching
  – reviewing their performance against the university’s core learning and teaching indicators
  – reviewing their achievements against the goals and outcomes included in their previous year’s Faculty Learning and Teaching Enhancement Plan
  – developing their Faculty Learning and Teaching Enhancement Plan for the next 12 months in accordance with both UNSW and faculty priorities for learning and teaching enhancement
• report the outcomes and acquit the budgets of any projects supported by previous rounds of LTPF funding
• submit applications for funding for new projects aimed at improving the efficiency and effectiveness of student assessment.
For individuals, a further inducement was manifest in the form of the contestable performance bonus attached to the salaries of those senior university staff who were expected to lead and manage the change process that would deliver the improvement in academic workload desired.


Because inducements are conditional grants of money, they are often accompanied by rules of the type described above to ensure that the money involved is used consistently with the policy maker’s intent (McDonnell and Elmore, 1987). These rules create oversight costs to the individual or agency providing the incentive as well as to those responsible for implementing the program, policy or change required. In the case of the UNSW Assessment Project, the costs of monitoring and assuring compliance with the terms of these inducements were included in the annual costs of maintaining the FRLT process and reviewing the performance of senior staff as part of the annual performance review process. Dissemination of information

In recognition that improving the efficiency and effectiveness of assessment practice throughout the university would not simply be a matter of changing the ‘objective’ or ‘tangible’ dimensions of assessment (e.g. the existing assessment policy and procedures, current assessment tasks and feedback mechanisms, or the IT applications and systems that were used to enable and administer assessment) but would also require changing the hearts and minds of those responsible for defining, developing and implementing assessment, the DLT, in collaboration with the ADEs from each faculty and UNSW Canberra, committed to developing and maintaining as many avenues of communication as possible to disseminate scholarship, data and ideas regarding current and alternative approaches to assessment, and to engage the university community with them. In the absence of information that spoke directly to staff about the need to change their assessment practices and the potential benefits of these changes to themselves and their students, staff were unlikely to engage with, or commit to, realising the Assessment Project’s desired outcomes. Further, in the absence of knowledge regarding alternative approaches to assessment and of opportunities to critically reflect on their current practices in light of


these alternative conceptualisations, staff were unlikely to have the knowledge, skills or capabilities necessary to effect the changes that the Assessment Project required. As Firestone and Corbett have observed, ‘educators are willing to take steps necessary to improve education but, without [such a] dissemination program, they lack the knowledge necessary to make the requisite changes’ (1988, p.326). To this end, in consultation with the ADEs and in response to the needs and feedback from each faculty, the DLT planned, developed and maintained a multi-faceted dissemination program. This program comprised:
• regular briefings and discussions regarding current and future assessment practices at UNSW with:
  – the Vice-Chancellor’s Advisory Committee (VCAC), comprising the executive team, Deans and Pro-Vice-Chancellors
  – Heads of School, as part of the Heads of School forums convened by the President and VC four times a year, and in the bi-annual Heads of School forums convened by the Learning and Teaching Unit (LTU) each semester
  – Associate Deans Education (ADEs), in their regular monthly meetings with the DVCA and the DLT
  – faculty Learning and Teaching Fellows – in their fortnightly meetings, they were to discuss their faculty’s strategies for improving the efficiency and effectiveness of assessment; their roles in the development and implementation of these strategies; and any issues associated with the implementation of these strategies, together with possible solutions
• biennial university-wide learning and teaching forums open to all UNSW staff on aspects of assessment identified by faculties, which involved:
  – an Australian or world expert on assessment as keynote speaker


  – presentations of research papers from UNSW staff who were actively leading their faculty or school in the improvement of one or more aspects of their current assessment practices
  – poster presentations by up to 25 staff from across all faculties, sharing particular innovations they had made to improve the efficiency or effectiveness of their assessment practices
  – round-table discussions of a particular topic of interest (e.g. the assessment of graduate attributes and designing assessment as learning)
• the development and dissemination of just-in-time web-based resources that faculties, schools or individuals might use to review and revise their assessment practices, including:
  – information and tools that could be used to audit assessment practice
  – information concerning different theories of or approaches to assessment
  – principles of good practice in the design of assessment as learning
  – guidelines for developing effective and efficient assessment tasks
  – guidelines on effective and efficient ways to provide and engage students with useful feedback
  – information on effective and efficient approaches to moderation and grading
  – examples and case studies of each of the above as used in a variety of different disciplines
• the development and delivery of a wide range of professional development activities for staff on various aspects of designing, developing, implementing and evaluating assessment as identified from time to time by faculties, schools and other key stakeholders. These included:
  – weekly seminars or workshops on different aspects of


assessment and other related matters (e.g. those hosted or facilitated by LTU – The Connection Series)
  – monthly meetings of special interest groups or communities of practice to disseminate and share practice, and provide support for members in their work related to assessment renewal (e.g. the UNFED group that met monthly to discuss how new and emerging technologies might be used to improve the efficiency and effectiveness of student assessment)
  – multiple offerings each year of the Foundations of University Learning and Teaching program that emphasised designing curriculums and pedagogies for assessment as learning
  – courses within the Graduate Certificate in University Learning and Teaching that focused on designing efficient and effective approaches to assessment as learning
  – supervision of staff enrolled in the MPhilHE to undertake research into issues associated with efficient and effective student assessment
  – a host of just-in-time faculty-, school- or program-based workshops and seminars on topics related directly to the faculty, school or program’s strategy for improving the efficiency and effectiveness of assessment.

Like inducements, dissemination programs of this type come with considerable costs. However, in the absence of the knowledge, skills and capabilities developed through such a program of dissemination, the goal of reducing academic workload by improving the efficiency and effectiveness of assessment was unlikely to be realised, as any effort to effect change would undoubtedly be devised within the conceptual framework of the assumptions underpinning current practice. The possibility of ‘real’, or ‘second order’ change as Fullan (2003) describes it, would only be realised once those responsible for effecting the change had developed the knowledge,


skills and capabilities necessary to provide effective critique of, and to reconstruct, their assumptions, as a basis for reconceptualising and designing new approaches to assessment.

Capacity-building strategies

A major challenge faced by any change agent is that of sustainability. The value of any process of change can largely be measured by the extent to which the change is institutionalised, or woven into the fabric of everyday practice. To achieve such an outcome, investment needs to be made during the process of change to develop the institutional infrastructure (organisational, administrative, technological) necessary to sustain the change beyond the life of the project through which it has been realised. As McDonnell and Elmore (1987) have observed, capacity building involves investment for the purpose of realising future benefits. In the absence of such capacity building, benefits that may be realised through changes supported by strategic project funding may be lost to an institution in the longer term. Once project funding ceases, capacity to sustain the changes and thus maintain their benefits may also come to an end. With this in mind, the DLT, with support from the DVCA and the ADEs in each faculty, advocated strongly for significant development of various aspects of UNSW’s infrastructure to ensure that the university developed a sustainable capacity to design, develop and implement efficient and effective assessment as a means of managing academic workload. The focuses of these developments included UNSW’s capacity for effective governance, leadership, facilitation and administration of assessment.

Governance of assessment

Critical to the sustainability of efficient and effective assessment practices is the need to monitor, and where necessary correct, the design of assessment. But how is this done? In theory, at UNSW, as in most self-accrediting institutions, it is the responsibility of

The UNSW approach to improving assessment


the Academic Board to monitor and oversee matters of quality of assessment design as part of its broader responsibilities for assuring the quality of the university’s academic program. However, in practice, responsibility for monitoring assessment design is delegated:

1 to the undergraduate and postgraduate committees of the Academic Board, where these responsibilities are largely only exercised when new programs and/or courses are proposed
2 to faculty committees such as the Faculty Assessment Review Group (FARG), where the focus of business is typically on the outcomes of assessment, anomalies in the distribution of grades, and the like
3 to school assessment review groups, where the focus of business is similar to that of the FARGs
4 to individual course convenors.

As a result, there is a considerable risk that over time, as incremental changes are made with each implementation of a course, the associated assessment program will become less efficient and effective in fulfilling its designated role in the curriculum and in the management of staff workload. To ensure that assessment design remains focused on maximising efficiency and effectiveness, and thus acting as a key contributor to the management of academic workload, the DLT worked in collaboration with senior members of the university responsible for quality assurance of academic programs – namely the President of the Academic Board, the Pro-Vice-Chancellor (Students), the DVCA and the Director of the Internal Audit Office – to review and revise the policy, procedures and guidelines for academic program review, with a view to ensuring that quality of assessment design was included as one of the focuses in the university’s regular cycle of academic program reviews.

One of the main reasons why monitoring of assessment practices had not been a routine part of academic program reviews in all faculties had been the lack of appropriate and accessible data


to enable it to be incorporated in an efficient way. Most efforts in this regard had been paper based and extremely labour intensive. Further, efforts to improve assessment practices via the formal approval process were often abandoned due to the cumbersome nature of the change approval process. This often led to changes being made to assessment practices without them being officially approved or recorded in the faculty’s and/or university’s systems.

To address this issue, the DLT and ADEs contributed to a project established within the Pro-Vice-Chancellor (Students) office to design, develop and implement a pilot curriculum mapping tool that could, as part of program and course design or revision, routinely capture the data necessary to review and revise curriculums in general, and the design of assessment requirements in particular. With financial support for the development of this tool from the DVCA through strategic LTPF funds, a prototype that had been developed in the Faculty of Engineering for similar purposes was further developed to meet this need.

Leadership of assessment

Addressing the university’s capacity for expert, scholarly leadership for the development of assessment was a principal concern for the DLT. Because the university’s focus for much of the eight years prior to the Assessment Project had been on developing research capability, relatively little attention had been given to the development of leaders of learning and teaching. This is not to suggest that leadership development had not been offered or available to senior staff or those aspiring to leadership roles within the university. Indeed, a wide range of leadership development programs had been available to staff of the university through the human resources office for a number of years. However, the focuses of these programs had typically been on developing participants’ knowledge, skills and capabilities for management (i.e. how to plan, budget, organise, deploy, coordinate, monitor and evaluate staff and resources), rather than on leadership


(how to develop a vision or aspiration for future development, communicate that vision, motivate, inspire and engage, and secure the commitment of others to bring the vision to reality) (Kotter, 1990). Where these programs had focused on the development of academic leadership (i.e. leadership directed at engaging others in a collaborative, critical process of defining a future for the development of academic programs and practice), they had typically focused on the development of leadership of research and not on leadership of learning and teaching.

The problem of a paucity of expert, scholarly leaders of learning and teaching is not unique to UNSW. Indeed, until the last decade, little attention had been paid in Australia at the sector level to the development of such leaders. Essentially, one became a leader of learning and teaching within an institution on the basis of one’s disciplinary expertise and track record as a teacher. However, with the establishment of the Carrick Institute for Learning and Teaching in Higher Education, and with it the establishment of a leadership development strand in the Institute’s grants, awards and fellowship programs, the need for development opportunities in this space has progressively been addressed.

Critical to effective leadership of assessment is a deep scholarly understanding of the nature of learning and its implications for teaching, curriculum and assessment design. However, knowledge of the processes of assessment (how it is designed, developed, implemented and evaluated) as well as of the contexts for assessment (organisational, administrative and technological) is also important if individuals are to be able to effectively lead their colleagues in the critical review and evaluation of current assessment practices and the definition of new, efficient and effective assessment processes.
To address the challenge of providing expert, scholarly leadership in the review and revision of assessment practices, the DLT employed an experienced and well-respected academic developer with expertise and track record in influencing change in assessment


practices within higher education institutions as an external consultant. While employed by the central LTU, the external consultant’s role was essentially to assist faculty, school or program leaders and staff to respond to the challenges of the Assessment Project by:

• developing, implementing and evaluating strategies, criteria and standards for reviewing current assessment practices
• raising their awareness of the principles and approaches that underpin high quality, efficient and effective assessment practices
• providing advice on the redesign of their existing assessment practices and on the strategies that could be used to monitor and evaluate the effectiveness of these new approaches in improving the efficiency and/or effectiveness of assessment.

Further, the DLT worked with the staff of the LTU, representatives of the heads of school, the ADEs and human resources to:

• develop and deliver a series of forums on leadership of learning and teaching (in particular assessment) for the HOS group
• develop and deliver an academic leadership retreat focusing on assessment for the Faculty Learning and Teaching Fellows group
• review and revise the structure of institutional and faculty awards for teaching excellence to ensure that evidence of leadership of learning and teaching (in particular in assessment) was included in the selection criteria for these awards
• include discussion of how to articulate and provide evidence of leadership of learning and teaching (in particular evidence of leadership in the development of assessment) in performance review and development discussions, and promotion and award applications.


Facilitation of assessment

Technology is playing an increasing role in the assessment of student learning in higher education. The educational technology landscape is burgeoning with tools to enable and facilitate different types of assessment, and UNSW has been at the forefront of their development and integration into program and course level assessment processes. Throughout the last decade, UNSW staff have led or been directly involved in the design, development, implementation, evaluation, review or use of a variety of different assessment tools aimed at supporting student learning and making the processes of assessment and feedback more efficient. In the main, these initiatives have been undertaken by individuals or small groups within a program, school or faculty, supported by one or more small research and/or development grants. In most but certainly not all cases, the benefits of this work have accrued to the particular course, program, school or faculty in which the development was undertaken. Only a small amount of this work and its benefits has been disseminated throughout the university.

A particular exception to this pattern was the development, piloting and implementation of the Adaptive eLearning Platform (AeLP), developed by a PhD student in engineering and adopted by academics in many UNSW faculties (including engineering, science, medicine and arts) as the platform of choice on which to develop a range of ‘smart’ tutorials and formative and summative assessment tools.

To address this issue, the DLT, with the approval of the DVCA, deployed some of the resources at his disposal within the LTU to the development and implementation of a Research Evaluation and Development Framework (the RED Framework) and a Research Evaluation and Development Platform (the RED Platform) for Technology Enabled Learning and Teaching.
The RED Framework was designed to provide staff with advice, guidance, tools and support to undertake the research, evaluation and development work necessary to develop and assess the potential of new technologies


to enable and support learning and teaching in general and assessment of learning in particular. The RED Platform provided a limited IT development platform on which staff could develop, trial and evaluate new applications and systems to support student learning and assessment.

The development of the RED Framework and Platform provided the university with an ongoing capacity to systematically review and share information on the appropriateness and affordability of different educational technologies to support student learning. It played an important immediate role in the Assessment Project by providing the opportunity and support necessary to review the variety of different assessment tools that were of interest to faculties in their pursuit of better assessment practices. Further, the outcomes of these reviews provided much of the essential data necessary to prepare the business cases for institutional investment in upgrading the university’s IT infrastructure for learning and teaching, to ensure that it has the capacity to leverage these tools in its efforts to reform student assessment and manage staff workload.

As McDonnell and Elmore (1987) have observed, while capacity-building measures carry with them the expectations of future returns that are often uncertain, intangible, immeasurable and distant, inherent in some, such as those outlined above, is the potential for more immediate products or returns. These more immediate outcomes are important considerations in capacity-building measures because they serve as proxies for the longer term effects of these measures. They have the potential to satisfy stakeholders that the investment that has been made has been worthwhile and is producing results.

Administration of assessment

Beyond the design of an assessment program or task, central to efficient and effective assessment practice are the administrative processes and systems that enable it. Over time, most faculties and


schools have developed processes and procedures to address administrative challenges associated with:

• assessment design and development (including the moderation of assessment tasks)
• implementation of a program of assessment
• marking and grading students’ work (including the moderation of marks and grades)
• providing students with feedback
• documenting and recording the outcomes of assessment
• dealing with appeals in relation to marks or grades.

Many of these are paper-based processes that are very labour intensive for the teachers, students and administrative staff involved. Some involve considerable duplication of effort, due to incompatibilities between the IT applications and systems on which they have been developed and the university’s chosen systems of record. For example, the lack of integration between the university’s Student and Academic Administration Systems (SAAS), its Learning Management Systems (LMSs) Blackboard and Moodle, and many of the assessment tools used by staff and students has meant that the data routinely captured by these tools as part of the assessment process, including marks and grades, cannot be automatically uploaded from the tool, or the grade book within the LMS, into the SAAS.

To address this issue, the DLT, with the support of the DVCA, determined that building the capacity to integrate the university’s chosen LMS and assessment tools into its SAAS needed to be included in the institution’s future IT investment strategy and in phase two of its current project to upgrade the university’s SAAS.

Systems changing

Part of repositioning the university to enable it to adopt and maintain sustainable, efficient and effective approaches to assessment involved the building of institutional capacity. However, investment in building staff capability and in the development of the


organisational, administrative and technical infrastructures necessary to enable and support efficient and effective assessment was deemed by the DLT to be impossible without substantial changes to the distribution of authority and the ways in which decisions were made regarding investment in the development of learning and teaching. As McDonnell and Elmore have observed, ‘existing institutions, working under existing incentives cannot produce [the] results that policymakers want ... [but by] altering the distribution of authority among institutions [or individuals and] by broadening or narrowing the type of institutions [or individuals] that participate in decision-making ... [policymakers can] significantly influence what is achieved, and the efficiency with which it is achieved’ (1987, p. 143).

As described earlier in this chapter, much of the work that the university community wished to do to improve the efficiency and effectiveness of assessment involved the deployment of educational and other technologies. Critical to the success of these deployments would be the integration of these technologies on a robust and reliable IT backbone. However, the cost of deploying and integrating these technologies would be substantial, and investment in them was deemed unlikely to occur quickly under the university’s current IT investment decision-making arrangements. Further, given the nature of the university’s longstanding IT investment decision-making process (ITIP), it was highly likely that the investment decisions made would be tactical rather than strategic, leaving little opportunity for the coherent development of the IT infrastructure required to support the institution’s plans to improve the efficiency and effectiveness of student assessment.
Fortunately, the VC’s decision to address increasing academic workloads by improving the efficiency and effectiveness of assessment was shortly followed by a decision of the Vice President Finance and Operations (VPFO) to improve institutional governance of IT and the processes by which IT investment decisions


were made. In essence, the VPFO wanted to develop and implement a model of IT governance that:

• reflected a whole-of-university approach to the planning, development and evaluation of IT systems, services and applications
• was aimed at enabling and supporting the core business activities and priorities of the university’s faculties and divisions
• provided a mechanism to ensure integration and interoperation of IT applications
• facilitated broad, long-term planning of IT applications
• supported transparent planning and resource allocation processes for IT, as well as reducing costs in duplication and complexity. (UNSW IT Committee, Business Domain Governance Framework, September 2011)

Part of implementing this model was the establishment of six different business domains (academic, research, foundation, finance and operations, university services, cross enterprise). Representatives of each business domain sat on a newly constituted IT Committee, which had as one of its responsibilities the making of annual recommendations to the university executive on the institution’s priorities for IT investment each year. The IT Committee determined these priorities on the advice of the Business Domain Owners’ Advisory Group, a subset of the members of the IT Committee comprised of representatives of each of the business domains.

After much discussion about the need for significant investment in the development of the university’s IT infrastructure for learning and teaching, the DVCA invited the DLT to chair the academic business domain’s IT Strategy Committee. The DLT saw this as an opportunity to improve the ways in which IT investment decisions to support learning and teaching had been made in the past.


To this end, the DLT, with the support of the DVCA, established an IT governance structure within the academic domain that was similar to that established at the institutional level. The terms of reference for the Academic Domain IT Strategy Committee were the same as those established for the IT Committee. Its membership included representatives from each faculty and division, and the business systems owners of the IT systems used within the academic domain, including those responsible for the SAAS, the LMS and the educational media systems (EMS). A number of academic domain advisory groups were established, including those with interests in the development of the SAAS, the LMS, the EMS and the assessment tools that needed to be integrated into these systems. To ensure widespread consultation and input, each of these committees was comprised of representatives from each faculty and division.

As the strategic issues facing the university community in relation to learning and teaching were identified, and strategies developed to address them, they were sent to faculties and divisions for feedback, along with proposals for the development of the IT systems and applications necessary to support the strategies. Once particular strategies and systems had been determined to address the issues, the Academic Domain IT Strategy Committee, representing each faculty and division, determined a list of prioritised investments. This list was included with the priorities identified by the other business domains in an institution-wide prioritisation process, in which the upgrading of the LMS, the replacement of the EMS, and the identification of appropriate assessment tools and their integration into each of these systems were ranked equal first in terms of institutional priority for investment.
By taking advantage of the opportunity to change the governance and decision-making arrangements within the academic domain, and by making the process of consultation transparent and more inclusive of all stakeholders, the DVCA and DLT were able to overcome past political machinations within and beyond


the domain that had led to significant underinvestment in the IT systems necessary to support effective and efficient assessment.

The process of change

In addition to having to consider the nature of the policy instruments that would be necessary to enable the university community to achieve the Assessment Project goals, the DVCA and DLT needed to determine when and how to deploy them to ensure that they were effective in stimulating or facilitating the desired changes. The DLT advocated a three-phase approach to managing the change process at UNSW, based on Fullan’s observation that ‘most researchers now see three broad phases to the change process’: initiation, implementation and institutionalisation (2003, p. 50).

The first of these, the initiation phase, broadly focused on:

• establishing the need for change in current assessment practices
• determining the nature of the changes that needed to be made
• engaging the university community in the proposed change process.

A variety of strategies and policy instruments were utilised by a range of individuals at different levels of the organisation to achieve these outcomes. How this was done at the faculty and program levels is reported in the following chapters. At the institutional level, as described earlier:

• The DVCA and the DLT each played a key role in establishing that academic workload could be reduced by improving the efficiency and effectiveness of assessment practice.
• The President and VC played an essential role in establishing ‘improvement in efficiency and effectiveness of student assessment’ as a key goal for the university and as one of the KPTs for each of the senior academic managers of the


university, including the DVCA and Deans.
• The DVCA and DLT established ‘improvement in efficiency and effectiveness of student assessment’ as a key priority in the university’s Learning and Teaching Enhancement Plan for 2010 to 2012, aligned the availability of strategic funding to support learning and teaching development initiatives to this priority, and made reporting of local strategies and achievements in relation to this priority a requirement in the annual FRLT process.

Further:

• The Institutional Analysis and Reporting Office prepared reports for each faculty based on student feedback data collected through CATEI and CEQ surveys, to show that there was considerable room for improvement in assessing students’ learning and providing them with useful feedback.
• Our external expert consultant in assessment prepared a discussion paper on effective and efficient assessment design that was used to support faculties, schools and individuals in the critical reflection necessary to identify strengths and weaknesses in their current assessment practices, and opportunities for improvement.
• An assessment audit tool was developed to enable faculties, schools and staff to map their assessment practices and thus produce the data necessary to critically evaluate that practice against the goals established for the Assessment Project.

During the implementation phase, much of the activity associated with the project occurred within local settings (i.e. within faculties, schools and programs). In most cases, this involved:

• determining the organisational arrangements by which the faculty, school or program would address the challenge of improving the efficiency and effectiveness of student assessment


• identifying and engaging the individuals and groups who would lead or be involved in the local change process
• determining what the ‘local’ approach to improving assessment practice would be
• identifying and deploying the local infrastructure and resources necessary to effect the desired changes.

At the institutional level, a number of critical enabling actions were undertaken to support the implementation of improved assessment practices throughout the university. As described earlier, these included:

• the provision of strategic funding to support initiatives aligned with the Assessment Project’s goals
• the dissemination of scholarship, data and other information regarding current and alternative approaches to assessment
• the building of institutional capacity to lead, manage and effect the required change in assessment practice.

Of particular import among these institution-level implementation support initiatives was the employment of the external expert on assessment mentioned earlier, whose role it was to help the LTU support faculties, schools and program staff in documenting, reviewing and revising their assessment practices. The availability of such an individual was important for two reasons. First, in the experience of the DLT, it is easier for an ‘outsider’ to ask the challenging and critical questions that need to be addressed as part of a critical self-evaluation process. Faculty and/or school staff are more likely to be open to engaging with challenges raised when the challenger has no direct or indirect interest in the answers to their questions than when they are raised by a member of the faculty’s or institution’s own staff, who may be perceived, rightly or wrongly, as having a particular agenda. Such an expert was deemed to be able to provide the arm’s-length advice that faculties, schools or program teams would need to support both their


review and the further development of their assessment practices. The second reason was that an external expert could assist the DLT to mentor and support the development of a relatively inexperienced group of academic and educational developers in the LTU, whose job it would be to help faculties to determine and implement ways of improving student assessment. While highly knowledgeable and skilled in their respective areas of expertise, few staff in the LTU at the outset of the Assessment Project had the requisite knowledge, skills and experience to provide the level of leadership and support that the project would require. Restructuring of, and amalgamations between, central support units for learning and teaching during the previous five years had resulted in a combination of experience and expertise among the staff that no longer matched the role expected of them.

The third of the planned phases of the UNSW Assessment Project focused on institutionalising the university’s capacity to design, develop, implement and evaluate the efficiency and effectiveness of its assessment practices. The DLT believed that to do this, a number of systems-changing strategies would need to be employed. In addition to those described earlier, these included:

• placing greater value on the ‘non-performative’ aspects of teaching (i.e. the leadership and management of learning and teaching, and the development and review of curriculums and assessment) in selection criteria, workload models, performance review and development processes, and the criteria for promotion and teaching awards
• reviewing and revising the position descriptions of those with formal leadership and management responsibilities related to learning and teaching (e.g. Deans, ADEs, HOS) to ensure that quality assurance and improvement of assessment were key aspects of their responsibilities
• reviewing and, where necessary, revising organisational arrangements for the governance, management, development and support of assessment


• developing and implementing a policy and procedural framework for assessment that aligned with the university’s desire for efficient and effective assessment practices.

While UNSW’s approach to effecting its desired changes to assessment can be described in terms of these three broad phases of activity, neither the DVCA nor the DLT expected the change process to be as linear and sequential as this description suggests. Indeed, at the outset of the project, it was impossible to know or to predict the myriad factors and circumstances that would influence how and when different elements of the proposed strategy might be needed, or indeed possible to implement. The DVCA and DLT therefore met on a regular basis with the ADEs and other key stakeholders from throughout the university to discuss faculties’ progress in realising their own and the university’s goals for improving assessment, identify issues of common concern, and develop solutions to address these issues, ensuring that the project could continue to move forward.

This chapter has described the nature of and rationale behind the university’s approach to improving the efficiency and effectiveness of assessment. The next nine chapters tell the stories of what was done and what was achieved in each faculty and at UNSW Canberra in response to this challenge.

References

Becher, T & Trowler, PR (2001) Academic Tribes and Territories: Intellectual enquiry and the culture of disciplines, 2nd edn, SRHE–Open University Press, Buckingham, UK.
Bolman, LG & Deal, TE (2003) Reframing Organisations: Artistry, choice and leadership, 3rd edn, Jossey-Bass, San Francisco.
Firestone, W & Corbett, HD (1988) ‘Planned Organisational Change’, in N Boyan (ed.), Handbook of Research on Educational Administration, Longman, New York, 321–40.
Fullan, MG (2003) The New Meaning of Educational Change, 3rd edn, Teachers College Press, New York.
Kotter, JP (1990) A Force for Change: How leadership differs from management, The Free Press, New York.


Marshall, SJ, Orrell, J, Cameron, A, Bosanquet, A & Thomas, S (2011) ‘Leading and Managing Learning and Teaching in Higher Education’, Higher Education Research & Development, 30(2): 87–103.
McDonnell, LM & Elmore, RF (1987) ‘Getting the Job Done: Alternative policy instruments’, Educational Evaluation and Policy Analysis, 9(2): 133–52.
Senge, P (1990) The Fifth Discipline: The art and practice of the learning organization, Currency Doubleday, New York.
Smyth, J (1986) Reflection-in-action, Deakin University Press, Geelong.


Part II


4

The Faculty of Arts and Social Sciences: A whole-of-faculty assessment tool

Sean Brawley

Assessment is the senior partner in learning and teaching. Get it wrong and the rest collapses. (Biggs and Tang, 2007)

[F]ew topics create such divided opinions and raise such passions as assessment and yet, in higher education, we still seem relatively bad at it. (Fry, Ketteridge and Marshall, 2008)

In the last six years the Faculty of Arts and Social Sciences at UNSW has sought to implement change in teaching and learning through whole-of-faculty systems approaches that deliver both quality assurance (QA) and quality improvement (QI) (Peterson, 1991; Hargreaves and Shirley, 2009). Where possible, the faculty has found online and/or automated processes, thereby delivering administrative time and resource savings and efficiencies, and ensuring data can be warehoused and is easily accessible for compliance or audit purposes. Following this overarching methodology, we embarked on a project to design an online assessment tool that would meet the university’s required efficiency gain and at the same time provide a QA process and a QI opportunity.

The Faculty of Arts and Social Sciences (FASS) is made up of


four schools (the Arts and Media, Education, Humanities and Languages, and Social Sciences) and contains 285 academic and 196 professional and technical staff in full-time or permanent part-time roles, plus sessional staff in a variety of teaching roles. In 2013 the faculty has 6993 students enrolled in its 38 undergraduate degree programs in either single or dual degree mode. Further, hundreds of students from outside the faculty undertake a range of courses across its streams and programs as part of UNSW's long-standing General Education program. At the postgraduate level, 1082 students are enrolled in the faculty's 28 coursework programs.

At the core of the faculty's undergraduate programs has been the Bachelor of Arts (BA). In 2006 the degree underwent a major external review that brought a number of changes to the program and, as a consequence, to the faculty's undergraduate structure. The implementation of the new UNSW BA also provided the faculty with an opportunity to introduce a new approach to quality assurance and quality improvement, informed by best practice and the scholarship of teaching and learning (SOTL). FASS, for example, was the first arts and social sciences faculty in the country to require major streams to define specific graduate attributes, which were then mapped against course and/or unit learning outcomes and program graduate attributes. To facilitate this work, an online graduate attributes tool was designed and introduced to the governance process for course approvals and revisions. Much of this early work around the BA informed the faculty's approach to the university's Assessment Project.

Phase 1: Benchmarking and stocktaking

The first step in the FASS project was an audit of current assessment practice in each of the faculty's five schools. This exercise also provided an opportunity for schools to consider what best practice in assessment looked like in their discipline area(s). This work focused on the quality dimension of the project and was aimed at


assisting individual schools and the faculty to identify pertinent issues through the analysis of meaningful data. The stocktake and benchmarking also had a proselytising function in alerting academic staff to the faculty's work on assessment. FASS had already identified assessment as an area for further development in its faculty Learning and Teaching Enhancement Plan (2008–2012). The faculty, therefore, had already assigned resources for such work and so was able to commence work on the project in late 2010, before the larger external resourcing of the formal project commenced in 2011.

Each school was allocated initial seed funding to carry out benchmarking and best practice exercises. The aims were to:

• record the amount and type of assessment in specific disciplines among the faculty's national and international comparators
• gather examples of best practice in assessment within disciplinary contexts drawn from the SOTL literature, so that disciplines could consider such approaches as they re-examined their approach to assessment.

Figure 4.1 provides an example of some of this benchmarking work from the then School of History and Philosophy, comparing UNSW assessment tasks in history and philosophy (HP) with those of the University of Sydney, the University of Melbourne, the University of Nottingham, George Mason University and UCLA combined.

The faculty also appointed a research assistant to work with schools (except the School of Education, which had conducted an assessment review the year before) to begin to collect data on current assessment practices. This data was then supplied to the university's external consultant (Professor Jan Orrell) who, with the assistance of a project officer appointed by the Learning and Teaching Unit (LTU), produced stocktaking reports in early 2011. Informed by the broad scholarly literature, these reports included a


Figure 4.1 Assessment tasks at UNSW compared with those from five other universities: the then School of History and Philosophy's benchmarking of UNSW HP assessment tasks against those of the University of Sydney, the University of Melbourne, the University of Nottingham, George Mason University and UCLA combined (%). Task categories: extended writing, short writing, exams, tutorial participation, individual presentation, group work, tests/quizzes.

range of recommendations for improving existing assessment practices in each school. These reports received mixed reactions from the schools. One school found its report very useful. The other three schools involved in this aspect of the process found the reports either too challenging (asking tough questions and revealing uncomfortable truths) or of generally limited value when it came to their perceptions of assessment practice in their areas of study. It was certainly the case that staff in some disciplines resented 'outsiders' (the external consultant and the central project staff) making judgments about the nature and value of their assessment practices. Because of the large workload and absence from campus of the consultant, relationship management and dealing with stakeholders was often left to the central project staff. Their knowledge and abilities around assessment were, unfortunately, not sufficiently developed to address the disquiet the reports generated in some schools, which fuelled a general


Figure 4.2 Data from the report completed for the then School of English, Media and Performing Arts (now School of the Arts and Media). Assessment tasks in English, Media and Performing Arts: research essay 24%, presentation/practical task 21%, writing task 20%, performance assessment 15%, exam 9%, test 5%, logbook/reflective journal 3%, tutorial participation 3%.

lack of confidence in the veracity of the reporting. In one case the report was virtually discarded and the school undertook its own work. In the two other cases further consultation and discussion saw revisions made to the initial reports. It would be a fair observation that while aspects of phase 1 engaged many staff around the assessment issue, the externally generated reports were counter-productive in the first instance, alienating many academic staff from the project in a way that proved very difficult to reverse. Data such as that in Figure 4.2 was, however, non-controversial and objective.

The work across the schools of the faculty did show a diverse range of approaches to assessment in operation. Inequities in the assessment load of students were clearly apparent, not only across schools but within the same streams and programs. It became very


clear that finding some means of parity across the faculty’s courses would need to be an important dimension of the Assessment Project.

Phase 2: Finding the efficiency metric

To consider how the faculty might secure an efficiency saving, the faculty's Director of Learning and Teaching completed a report that examined the various ways in which student effort in assessment can be gauged. The following approaches were considered:

• word length and equivalences
• student time-on-task
• number of assessment items.

Each approach had its advantages and disadvantages, but it was resolved that student time-on-task would be the most effective way to both measure and deliver an efficiency saving in assessment (see also Fielding, 2008). The first and third options were considered too blunt as instruments for delivering efficiency. Student time-on-task was considered an approach easier to regulate across an enterprise. Further, this approach acknowledged the importance of the assessment system in rewarding students for their efforts in the course. Finally, it was felt that a time-on-task approach to staff marking would be the most effective metric through which an efficiency gain could also be found in this area of assessment.

Such an approach was a conceptually difficult undertaking because of the lack of baseline data, measurable metrics, or even expectations around the amount of time students actually spend completing assessment and the time staff take to administer and grade such work (for further discussion of these issues see the likes of Gibbs and Simpson, 2004–05). Initial explorations found little consistency across programs or courses offered by the faculty.

The faculty found two touchstones that helped it to set its parameters. The first was the expected student time commitment to


their studies as mandated by the UNSW Academic Board. UNSW Academic Board policy holds that for every unit of credit (UOC) completed, 25 hours of effort by the student is expected. Including class attendance and other activities, therefore, Academic Board policy expected students to complete a nominal 150 hours of work towards the successful completion of a standard six-UOC course (full-time students complete four such courses a semester). Removing from this figure the 36 hours of class attendance usually expected in FASS courses (languages being one exception) left the expectation that students complete 114 hours of work outside their in-class participation for each course.

In discussing an efficiency saving, and in nominating a figure that would be meaningful and sustainable, the faculty had settled on a 25 per cent reduction of student time-on-task as the goal. The 114-hour figure was therefore reduced to 85 hours to secure the pre-determined efficiency saving of 25 per cent. Accepting the educational argument that student effort in a course should, where possible, be explicitly aligned with formative and summative assessment tasks (Brown, 1981; Higgins, Hartley and Skelton, 2002; Gibbs and Simpson, 2004–05), 85 hours was then set as the assessment value – the amount of time the faculty expected students to be working on assessment tasks in any given course. The unit of measure for the new tool would be student time-on-task expressed in hours.

A 25 per cent reduction in student time-on-task does not necessarily equate to a similar-sized reduction in staff time spent on marking, so staff marking time was examined independently. Staff time-on-task with assessment grading and administration was more difficult to gauge because it was not something measured in the day-to-day activities of the full-time continuing teaching staff employed by the faculty.
The existing faculty workload formula, for example, did not set an expectation of how much time a staff member was expected to spend marking the work of each individual student in a course. Only one school had set this expectation,


and that guideline saw staff expected to spend no more than half an hour grading per student per course. Allied to this issue was the question of what a school expected a teacher to complete in terms of grading within a specified timeframe. This expectation was expressed only in the casual marking pay rates for sessional staff. The project found two different systems in operation in the faculty. One set an expectation that a sessional staff member would mark 4500 words in an hour; the other set the rate at 6000 words an hour. An important consideration in staff marking time was the issue of feedback: a reduction in staff time that produced a commensurate reduction in the level of feedback to students was unacceptable.
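The time-on-task arithmetic described above can be expressed as a short sketch. The figures are those quoted in the text; the constant and function names are illustrative only, not part of the faculty's actual systems.

```python
# Sketch of the FASS student time-on-task arithmetic (nominal figures
# from the chapter; names are illustrative assumptions).

HOURS_PER_UOC = 25        # UNSW Academic Board expectation per unit of credit
COURSE_UOC = 6            # standard course size
CLASS_HOURS = 36          # typical in-class hours for a FASS course
EFFICIENCY_TARGET = 0.25  # sought reduction in student time-on-task

def assessment_hours_target():
    """Return the nominal hours of assessment work expected per course."""
    total_effort = HOURS_PER_UOC * COURSE_UOC   # 150 hours per course
    outside_class = total_effort - CLASS_HOURS  # 114 hours outside class
    # A 25 per cent reduction of 114 hours is 85.5; the faculty adopted
    # 85, so the fractional half-hour is truncated here.
    return int(outside_class * (1 - EFFICIENCY_TARGET))

print(assessment_hours_target())  # 85
```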

Phase 3: The faculty assessment working party

At the commencement of the project a faculty assessment working party had been established to assist the Associate Dean (Education) (ADE) and the Director of Learning and Teaching with the project. The working party was chaired by the Director of Learning and Teaching and contained one Head of School, two Deputy Heads and two academics with significant experience in assessment. Beyond their advisory and oversight role, once the approach to efficiency had been finalised the working party's main task was to begin to provide the metrics that would inform the tool. This involved assigning a student time-on-task hour value to the myriad assessment types that existed in their individual schools. Calculating how long a student might take to complete a specific assessment task was not a precise science, and involved members of the working party consulting with colleagues and the extant literature (see for example Guillaume and Khachikian, 2011) to reach specific hour values.

A way to verify and then endorse these calculations also had to be found. Student diaries were considered as a way to measure time-on-task. In S2, 2011 a


small trial project was conducted. The purpose of the trial was not to provide meaningful data in itself, but to explore the efficacy of student diaries as a way to capture student time-on-task as a form of verification. If proof of concept resulted, a larger diary collection exercise would be established to confirm the efficacy of the actual time-on-task calculations. While not without its difficulties, the trial did show that student time-on-task diaries could be a way to verify the working party's estimations.

Phase 4: Finding quality: TESTA

As work on the efficiency dimension of the project got underway under the leadership of the FASS Director of Learning and Teaching, the ADE was leading work around quality improvement. As well as the individual school best practice and benchmarking exercises, data from the existing UNSW course and teaching evaluation and improvement (CATEI) process and the nationally administered end-of-program course experience questionnaire (CEQ) was examined for student opinion on assessment in FASS. Further analysis of this qualitative data was provided through a manual approach inspired by Scott's CEQuery (Scott, 2005), giving it quantitative meaning. This data was useful, but neither sufficiently nuanced nor sufficiently detailed from a stream- or program-specific perspective to make a useful contribution to the project.

The opportunity to do something more meaningful in this space, however, came in early 2011, when FASS was invited to become the first non-British university faculty to participate in the UK Higher Education Academy–funded Transforming the Experience of Students Through Assessment (TESTA) project, led by noted British higher education scholar Professor Graham Gibbs (Jessop, El Hakim and Gibbs, 2011). The invitation appeared to be the perfect opportunity to buttress the quality dimension of the faculty Assessment Project.


A TESTA project officer was appointed by the faculty to support the project, and a member of the TESTA team (Dr Tansy Jessop, University of Winchester) visited UNSW in September 2011 to conduct a two-day workshop on the approach. As well as participants from FASS, delegates from the faculties of Science and Law, COFA and the LTU also attended.

At the centre of the TESTA approach is the assessment experience questionnaire (AEQ). The British form was slightly modified for an Australian audience, and FASS was the first institution to design and deliver the questionnaire online. The questionnaire contains 28 questions over the broad themes of quantity of effort and feedback, syllabus coverage, standards and assessment satisfaction. Over 1800 FASS undergraduate and postgraduate students completed the survey. This data was augmented by testimony provided by focus groups involving 60 students.

The data from the TESTA project is very rich. A report using the data at a broad faculty level was completed. Its headlines were as follows:

• Assessments need to be treated by staff and students as integral to the entire teaching and learning process. Students should be encouraged to undertake weekly assessment tasks that contribute to a larger future assessment task.
• Assessments, particularly in social science subjects, need to embrace a broader range of learnt concepts in assessment tasks and more clearly articulate the intersections between the learnt concepts and the assessment environment.
• Feedback methods and delivery require reconceptualisation in order to reach the one in five students who undervalue feedback.
• It is essential that timely and effective feedback be given, especially in first year, to set an example for future studies.
• Feedback that is consistent, timely and informative and relates to future assessments needs to be strongly incorporated into teaching and assessment pedagogy.


• While most students are clear about expectations in relation to the learning and assessment environment, and generally praise academics who are passionate about their subjects, there is evidence to suggest the need for a consistent set of expectation guidelines that include items associated with academic conventions such as referencing.
• Clearer communication of academic expectations is required for students to better understand what is expected of them.
• Better communication of expectations in relation to assessments, with more detailed assessment guidelines, is required.
• Return dates of assessments need to be prioritised in order for timely feed-forward to be provided to students, and normalised across the school in teaching protocols.
• Methods need to reinforce the connections between assessments and course aims and outcomes. Exams in social sciences must relate to the student experience.
• Feedback is overwhelmingly the greatest issue of contention and of concern to students. As discussed above, students require clear, timely and consistent feedback in order to achieve continuous improvement in their studies. It is important across all levels and all fields of academic study.
• An area of great concern is methods of assessment; a method of examination for a journalism student is not necessarily an adequate or appropriate method of examination for a politics or creative arts student. Examination methods need to be consistent in examining student knowledge of key concepts and course material in line with expected outcomes.

Figures 4.3 and 4.4 show some of the student responses. The data can be further mined by program or stream, by undergraduate or postgraduate level, or even by gender.


Figure 4.3 Example of a chart generated from the faculty assessment experience questionnaire: responses (strongly disagree to strongly agree) to the statement 'I learnt new things while preparing for the exams/class tests'.

Figure 4.4 Focus group data – number and types of comments (positive, neutral and negative) captured across assessment themes including feedback, assessment type, amount of assessment, timing of assessment, academic standards, clear goals, support, technology and value for money.

Phase 5: The Assessment Tool trial

By late 2011 the working party had brought together the data from each school. With this information, work could begin to more seriously consider the tool. A first and major question was whether


the tool should capture all the disciplinary diversity identified around assessment tasks or whether, in fact, some assessment tasks with different names were basically similar in structure and expectation. The working party agreed to consolidate tasks under generic titles, producing a more manageable list: through a collaborative process the faculty's diverse approaches were consolidated into 15 meta-categories. Further, the working party agreed on nominal time-on-task hour values for each assessment task.

The working party then began to consider the default settings and difficulty multipliers that might exist for each specific assessment task. For example, was a 1500-word essay easier for a 300-level student than for a 100-level student? Did greater scaffolding increase or decrease the time spent on an assessment task? With this data, an assessment tool prototype was designed. For the purposes of the exercise a simple Excel spreadsheet approach was adopted. The tool included the difficulty modifiers for each nominated task and used a green or red flagging system; a red flag compelled the teacher to re-examine the input.

With the prototype designed, the faculty was ready to launch a trial in S1, 2012. Each of the then five schools was asked to nominate three courses to participate in the trial. The nominated 15 courses involved nearly 1000 students and over 50 academic and sessional staff. Each course's assessments were placed in the tool prototype. The trial saw most of the 15 courses compelled to reduce their assessment, though in two cases the assessment load actually had to be increased to meet the 85 hours required, further highlighting the equity/parity issue. To verify the efficacy of the tool from both a student and staff perspective, 250 of the 1000 students in the 15 trial courses were asked to complete a purpose-built, online time-on-task diary.
All students who completed the diary would receive a $25 shopping voucher and be placed in the draw to win an iPad 2. One hundred and seventy students signed up to complete the diaries. In one of


the 15 courses chosen, no student volunteered to keep a diary. The diary was unique to the student's course and set out the expected assessment tasks. Given the opportunity the diary process provided, the students were asked to record not only the amount of time spent on each assessment item but also how they spent that time. This data will later be used for a project examining how students allocate their time when completing a variety of assessment tasks.

To get a better sense of the amount of staff time spent on grading, the 54 academic and sessional staff teaching in the 15 trial courses were also asked to complete an online diary. Among academic staff the results of the diary keeping were somewhat disappointing, with a number of full-time continuing academics not completing diaries despite having accepted their course's inclusion in the trial. Sessional staff had been offered shopping vouchers in acknowledgement of the small time contribution required in filling out the diaries. At least one sessional staff member, however, refused to complete the diary on the grounds that they were not remunerated and did not see the task as being within the duties of their role.

The data produced by the diaries was most enlightening for the project, but also required considerable effort in meaning making. Through extensive analysis by the project team, indicators of confidence were found in the data, and these were then examined against the existing calculations in the prototype, including the base range and difficulty modifiers. Where revisions were indicated, the new data informed the parameters of the tool. With regard to staff time-on-task, the diaries did provide meaningful data that schools could consider when designing their new workload models. Further, the data set a benchmark for assessment against the 85 hours of student time-on-task.
If further time savings were to be made, individuals and schools would need to examine other time-saving approaches such as those trialled by other faculties during the Assessment Project. The staff data also revealed a stark difference in marking time


between sessional and continuing staff that schools would have to consider: sessional staff took longer than continuing staff members to mark the same work.

In terms of proof of concept, the trial showed the efficacy of the mechanics of the tool: fifteen courses were able to use it successfully with minimal effort. The efficacy of the metrics had also been proven before full implementation, as the data and analysis from the diary process had allowed alterations to be made to individual assessment values. The trial also revealed the need for a new assessment item to be added to the tool for completing or marking an assessment task in a foreign language. Independent of the trial process, another assessment task unique to the study of linguistics (problem sets) was also added.
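The prototype's core logic, as described, can be sketched in a few lines. The task hours, difficulty modifiers and the green/red tolerance below are invented for illustration; the chapter does not specify the spreadsheet's exact values or thresholds.

```python
# Minimal sketch of the Excel prototype's logic: each assessment task
# carries a nominal time-on-task value and a difficulty modifier, and
# the weighted total is flagged against the 85-hour assessment value.
# All concrete numbers here are hypothetical.

TARGET_HOURS = 85

def course_time_on_task(tasks):
    """tasks: (base_hours, difficulty_modifier) pairs, one per task."""
    return sum(hours * modifier for hours, modifier in tasks)

def traffic_light(total, tolerance=5):
    """Green if close enough to the target; red compels re-examination."""
    return "green" if abs(total - TARGET_HOURS) <= tolerance else "red"

# A hypothetical course: an essay, a scaffolded task and an exam.
course = [(40, 1.0), (20, 1.1), (25, 0.9)]  # totals 84.5 weighted hours
print(traffic_light(course_time_on_task(course)))  # green
```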

Phase 6: The tool

With proof of concept secured, the online version of the tool was now designed. A number of enhancements over the basic prototype were made:

1 The university's draft assessment procedures were built into the tool to ensure it could serve a broader compliance role than simply delivering the reduced hours of student time-on-task. If the tool could do this verification work, it would deliver an administrative time saving to faculty governance. The tool was designed to reject a proposal outright only if the assessment tasks totalled less than 70 or more than 100 hours. All other issues, notably those related to the assessment procedures, use a flag system, because the procedures allow faculty education committees to exercise discretion. If a course meets all the university and faculty requirements and falls between 80 and 90 hours, it is automatically passed and the faculty education committee is presented with a report containing the details. If the course is flagged for any reason (e.g. it contains a single assessment item worth more than 65 per cent) the education


committee must deliberate. The tool provides the committee with all reporting, with an email copy sent to the proposer immediately on submission.

2 The QI dimension of the tool was now included. This comprised a range of detailed information around assessment types and the inclusion of the student voice from the TESTA project and CATEI. Each assessment task therefore holds the following information:
» description of the assessment type
» tasks the assessment engages
» skills the assessment engages
» benefits of the assessment type
» issues related to the assessment type
» student voices on the assessment type
» further resources on the assessment type.

3 Quality assurance systems are often criticised for stifling innovation. The FASS tool seeks to address this issue through a new assessment item proposal function built into the tool. Under the existing system, innovation in assessment was often locked in individual courses or units and so was rarely shared. Under the new system, a staff member who seeks to introduce a new item must complete the proposal (which essentially asks for the information noted above). If approved, the assessment item is introduced to the tool and thereby made available to all.

The work involved extensive interaction between the tool design team and the quality team providing the content. These meetings were chaired by the ADE. Another dimension of the content was the production of a user manual and an instructional video produced by the ADE with the assistance of the LTU. The full TESTA report was also placed in the tool.
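The decision rules described above can be summarised as a small sketch: outright rejection outside the 70–100 hour band, automatic approval between 80 and 90 hours when all other requirements are met, and a flag for committee deliberation otherwise. The function and parameter names are assumptions, not the tool's actual interface.

```python
# Sketch of the online tool's governance decision rules as described in
# the text; names are illustrative, not the tool's real API.

def governance_outcome(total_hours, meets_all_requirements):
    # Outright rejection only if assessment totals fall outside 70-100 hours.
    if total_hours < 70 or total_hours > 100:
        return "rejected"
    # Automatic pass in the 80-90 hour band when every other university
    # and faculty requirement is met; the committee receives a report only.
    if meets_all_requirements and 80 <= total_hours <= 90:
        return "auto-approved"
    # Everything else is flagged for committee deliberation.
    return "flagged"

print(governance_outcome(85, True))   # auto-approved
print(governance_outcome(95, True))   # flagged
print(governance_outcome(105, True))  # rejected
```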


The project team tested the tool for bugs and functionality using the initial 15 courses from the trial. This process identified a number of issues with the tool and led to significant modifications. Once this work had been completed, 12 faculty members were invited to act as beta testers. The beta testers were simply shown the tool and asked to navigate their way through it without any assistance. The beta process proved most useful in identifying user issues (see Figure 4.5).

Figure 4.5 Undertaking beta testing

Phase 7: Implementation

Implementation of the new tool required discussion on a number of levels. As well as designing the means by which colleagues would be introduced to the tool and trained in its use, the project team had to consider how it would sit within faculty governance.

With regard to governance, the major advantage of the tool was its capability to streamline processes. Under the existing governance process, if a staff member wished to make a change to their assessment suite (e.g. as a result of student feedback), they would need to submit a full course revision to governance no later than September of the


previous year if they wished it to be approved for teaching in the first semester of the new year. This onerous and inefficient process had helped to produce within the faculty a problem that has been dubbed 'slippage'. A stocktake of the faculty's courses found that many had had significant changes made to them by staff without formal endorsement of the revision through faculty governance, creating a significant QA issue for the faculty. With regard to the assessment suite, for example, only 12 per cent of the faculty's courses still held the suite of assessments originally approved by the faculty. Teaching staff were being responsive to the learning needs of their students but had given up on the time-consuming process of governance approval for changes.

Because the tool endorses the assessment suite, there now existed the potential to dramatically streamline the process and correct the slippage issue in faculty governance. Rather than submitting a full course revision, staff seeking only to refine assessment could use the tool. Discussions with the faculty presiding member responsible for governance saw a number of decisions made regarding governance structures that now allow a staff member to submit a compliant revision to their assessment suite in the weeks before a course is taught, rather than six months before. This governance change will allow staff to be much more responsive and flexible.

It was resolved that all courses in the faculty would move to the new assessment approach from S2, 2013. All courses would need to be submitted to the tool and endorsed by governance. In S2 another deadline would be set for S1, 2014 courses. The tool was opened for use in January 2013. Staff could use the help guides in the tool itself or attend one-on-one workshops with members of the project team to complete the work. The first proposals were endorsed by the relevant faculty sub-committees and standing committee in February and March 2013.
By May, 302 courses had been submitted to the tool. Two hundred and thirty-seven courses were approved by the committee en masse, as they had


been endorsed by the tool as meeting the new hourage requirement and the draft university assessment procedures. The tool flagged 65 courses (22 per cent) that would require the human intervention of the Education or Postgraduate Coursework Committee to consider the justifications provided by the academic staff member as to why their suite of assessments did not comply with university policy or the new hourage range. The committee accepted 40 of the justifications provided in the proposals, or found that individual academics had triggered the flagging by providing a justification when one was not required. Twenty-five courses (8 per cent of the total courses submitted) were referred back to academic staff because either the committee had not accepted their justifications or further clarification was required.

The tool's relationship with the university's online course approval system (MAPPS) is currently somewhat contrived. At present, proposers of a new course complete the proposal in MAPPS but, instead of completing its Section 7, complete the faculty tool. The FASS Assessment Tool (see Figure 4.6) is scalable and could easily be converted into a whole-of-enterprise tool if so desired; it could plug into MAPPS, replacing the current Section 7.

Figure 4.6 The Arts and Social Sciences Assessment Tool

94 Part II


Conclusion
The Arts and Social Sciences Assessment Tool was a major undertaking by the faculty that has sought to meet the goals of the wider UNSW initiative. While the figures must by their nature (and in the absence of original baseline data) be taken as nominal, the time-savings delivered by the tool could be considered as follows:
• student time-on-task per course (x)
• staff time-on-task per course per student (y)
• administrative/governance time saving per course (z).
Per course, this represents a saving of 29 hours (x), 0.33 hours (y) and 0.36 hours (z). With N representing the number of students in any given course, this could be expressed as:

Nx + Ny + z = Total institutional time saved per course

Further time savings in staff time-on-task may be secured by FASS staff embracing some of the initiatives trialled by other faculties, such as Engineering (iUNSW RubriK App) and the Australian School of Business (ReView), during the project. Future discussions around reporting and governance may also deliver further administrative time savings.

With its in-built quality improvement function, the tool also has the potential to enhance the student experience of assessment. Evidence from the beta testing and the early implementation workshops showed that staff engaged with assessment in a different way when using the tool. Features such as the base-time modifier and the detailed assessment explanations have made for more reflective considerations and conversations around assessment. Using quality assurance processes to create quality improvement opportunities, however, is a strategy not without its risks in higher education (for a general discussion see Jessop, McNab and Gubby, 2012 and Huber and Brawley, 2013). Some feedback from the implementation revealed that suspicion among some staff that the tool was simply a compliance exercise, combined with the 'just in time' approach to such obligations often employed, saw some people simply enter their existing assessment suites and then try to manipulate the modifier to get the tool's 'green light', rather than using the tool to inform and improve their assessment decisions.

It is still too early to draw final conclusions on the success of the tool, but the early signs continue to be very positive. The faculty's approach did deliver the required efficiency savings for students, staff and administrative processes. Further, the quality improvement dimensions of the tool continue to be favourably commented upon by staff. Finally, the faculty has a quality assurance mechanism that both captures faculty practice and helps to address the problem of slippage. An Office of Learning and Teaching grant application is currently being completed with three other institutional partners to explore the scalability and transportability of the Arts and Social Sciences Tool.
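The time-saving expression can be illustrated with a short calculation using the per-course figures quoted in the text (x = 29 hours, y = 0.33 hours, z = 0.36 hours). The enrolment figure below is hypothetical, chosen only to show the shape of the result.

```python
# Worked example of Nx + Ny + z = total institutional time saved per
# course, using the chapter's per-course figures. The enrolment of
# 200 students is a hypothetical value for illustration.

def total_time_saved(n_students, x=29.0, y=0.33, z=0.36):
    """Total hours saved per course for a cohort of n_students."""
    return n_students * x + n_students * y + z

hours_saved = total_time_saved(200)  # hypothetical 200-student course
```

For a hypothetical 200-student course this gives a little over 5,866 hours, with the student time-on-task term (Nx) dominating the saving, which is why the per-course figures alone understate the institutional effect.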

References
Biggs, J & Tang, C (2007) Teaching for Quality Learning at University, 3rd edn, SRHE and Open University Press, Buckingham, UK.
Brown, FG (1981) Measuring Classroom Achievement, Holt, Rinehart and Winston, New York.
Fielding, A (2008) 'Student Assessment Workloads: A review', Learning and Teaching in Action, 7(3): 14.
Fry, H, Ketteridge, S & Marshall, S (2008) A Handbook for Teaching and Learning in Higher Education: Enhancing academic practice, Routledge, New York, 132.
Gibbs, G & Simpson, C (2004–05) 'Conditions Under Which Assessment Supports Students' Learning', Learning and Teaching in Higher Education, 1: 3–31.
Guillaume, DW & Khachikian, CS (2011) 'The Effect of Time-on-task on Student Grades and Grade Expectations', Assessment & Evaluation in Higher Education, 36(3): 251–61.
Hargreaves, A & Shirley, D (2009) The Fourth Way: The inspiring future for educational change, Corwin Press, Thousand Oaks, CA.
Higgins, R, Hartley, P & Skelton, A (2002) 'The Conscientious Consumer: Reconsidering the role of assessment feedback in student learning', Studies in Higher Education, 27(1): 53–64.
Huber, M & Brawley, S (2013) 'Introduction: Minding – and managing – the gap: A forum on assessment, accountability and the humanities', Arts and Humanities in Higher Education, 12(1): 3–6.
Jessop, T, El Hakim, Y & Gibbs, G (2011) 'TESTA: Research inspiring change', Educational Developments, 12(4): 12–16.
Jessop, T, McNab, N & Gubby, L (2012) 'Mind the Gap: An analysis of how quality assurance procedures influence programme assessment patterns', Active Learning in Higher Education, 13(2): 143–54.
Peterson, M et al. (1991) Assessing the Organizational and Administrative Context for Teaching and Learning [microform]: An institutional self-study manual, ERIC Clearinghouse, Washington, DC.
Scott, G (2005) Accessing the Student Voice: Using CEQuery to identify what retains students and promotes engagement in productive learning in Australian higher education, University of Western Sydney, Sydney.

Acknowledgments
The following individuals were involved in the FASS Assessment Project and contributed to its success.
• Assessment project team: Alexandra Robinson, Belinda Clayton, Carlin de Montfort, Erik Nielsen, Scott Denton, Shawn Ross, Stuart Upton, Sam Russell, Rowland Hilder and Brian Ballsum-Stanton
• Assessment Working Party: Shirley Scott, Chris Davison, Ed Scheer, James Lee, Laura Shepherd & Shawn Ross
• LTU: Kate Coleman and Michael Rampe
• TESTA: Graham Gibbs, Tansy Jessop and Paul Hyland
• Benchmarking/stocktake authors: Amanda Wilson, Eureka Henrich, Janice Orrell (external consultant), Keri Moore, Laura Shepherd, Shirley Scott and Timothy Allen
• Prototype (alpha) testers: Adela Sobotkova, Andrew Kapos, Andrew McNicol, Andrew Murphie, Andrew Sankoh, Andrew Shields, Brigid Costello, Brooke Rogers, Carmen Cabot, Caroline Wake, Clare Grant, Fiona Andreallo, Hannah Rabie, Jae Yup Jung, Jennifer Whittle, Jo Coghlan, John Solomon, Laura Shepherd, Leslie Snelgrove, Malcolm Pearse, Margie Borschke, Mariana Zafeirakopoulos, Marie Young, Megan Macadam, Mif Hudson, Muriel Moreno, Nick Doumanis, Richard Hurley, Sanja Milivojevic, Sharon Sanders, Shawn Ross, Stephen McGuinness, Tomas Pyke, Valerie Combe-Germes, Wan Ng, Yang Mu and Yvette Stern
• Beta testers: Anthony Billingsley, Chihiro Thomson, Greg Leaney, James Lee, Karyn Lai, Laura Shepherd, Mano Mora, Meg Mumford, Mira Kim, Terry Cumming and Wan Ng
• And the 1820 students who undertook the AEQ, the 60 who took part in focus groups, the over 1000 whose courses were in the prototype trial and the 170 who made diary entries.


5

The Australian School of Business: Re-thinking assessment strategies Loretta O’Donnell and Prem Ramburuth

The Australian School of Business (ASB) is one of the largest faculties at UNSW and one of the largest business faculties in Australia, with eight disciplinary schools, approximately 13,000 students, over 260 academics and researchers, and 177 professional and technical staff. Many view it as equivalent in size to a small university. The 'massification' of higher education has meant large classes, especially at the undergraduate level. For example, in the ASB, first year core courses in Accounting, Economics and Management have cohorts of approximately 1600 to 2000 students, with a sizable workload for academic staff in relation to teaching and assessment. In terms of student diversity, around 30 per cent of students are from international backgrounds and around 60 per cent speak a language other than English at home. In terms of staff diversity, academic staff are recruited from a broad spectrum of international backgrounds and bring different approaches to teaching, learning and assessment.

The ASB has always placed a high priority on excellence in both teaching and research. Prior to this university-led Assessment Project, the approach to assessment in the ASB (especially in the large numeracy based schools) tended to be characterised by frequent and lengthy exams, often equated with perceptions of 'rigour' in measuring what students have learnt. As well as being unnecessary, this was proving to be costly (e.g. large cohort mid-session exams were held at an off-campus venue that cost approximately $30,000 per exam to hire). It was also costly in terms of academic staff time, as well as the administrative time required for processing the exams. Clearly, these approaches were unsustainable. As the faculty and schools grew in size, so did the pressures on the quality of teaching, learning and assessment. Yet another pedagogic change that needed to occur was the shift from viewing exams as 'assessment of learning' (certainly easier to manage) to 'assessment for learning', with an emphasis on student learning outcomes. Overall, new and more effective approaches were required to improve both effectiveness and efficiency in assessment on a faculty-wide basis in the Australian School of Business.

It was at this juncture that two major events in learning and teaching occurred:
• the implementation of a university-wide assessment strategy
• the commencement of the ASB's preparation for accreditation by the Association to Advance Collegiate Schools of Business (AACSB).

The decision by the university to implement its Assessment Project was both timely and welcome in the faculties. It provided a strategic opportunity to put into place a faculty-wide initiative to raise greater awareness of assessment practices, to improve efficiency and effectiveness, and to develop staff capability in aligning assessment practices with university principles.

The first section of this chapter explores the adoption of an institutional level initiative to improve approaches to assessment in the ASB, highlights the diversity in responses across the schools, and considers their specific disciplinary perspectives and perceptions of quality in assessment. In doing so, it demonstrates the challenges faced in seeking to achieve consistency in implementation of the assessment initiative (including the size of the faculty, entrenched assessment practices, leadership and capacity for change). Finally, it identifies improvements resulting from the assessment initiative in relation to effectiveness and efficiency (e.g. the elimination of mid-session examinations in some courses), as well as the ongoing challenges.

The second section demonstrates how the Assessment Project served as a precursor for the faculty's approach to assurance of learning (AOL) as a component of international accreditation through the AACSB. It outlines some of the aspects of implementation and the faculty's achievements to date, particularly in assessing learning outcomes.

Part 1: Implementing the Assessment Project
The ASB Assessment Project took a multi-layered approach that comprised several phases:
• Phase 1: Planning and implementation, and audit of undergraduate programs and outcomes (2010)
• Phase 2: Audit of postgraduate programs and outcomes (2011)
• Phase 3: Consolidation, and training and development of staff on ASB assessment guidelines (2011).
The ASB, under the leadership of the ADE, took the opportunity for a large-scale review and adopted a strategic approach that involved every school in assessment audits, assessment meetings to discuss the audit outcomes, management meetings to address assessment gaps and required changes or innovations, and training for change and improved practice.


Planning and implementation

The ASB planning phase was led by the Associate Dean (Education) (ADE), working in consultation with the Director of Learning and Teaching at UNSW and an external consultant who provided the expertise and external review perspectives required in a project of this nature. The project was supported by an assessment grant of $100,000 (funded by UNSW) and a team of staff trained in supporting learning and teaching activities, especially in the areas of curriculum mapping, curriculum alignment, construction of rubrics and technology – key elements in the planning of sound assessment protocols, practice and tools.

The external consultant, working in collaboration with the ADE, devised an overall plan for the ASB. It commenced with an interview between the consultant and each head of school to develop an understanding of the assessment culture, common practices and specific disciplinary issues in the school. An audit tool was created to enable data collection via an in-depth review of undergraduate level assessment practices in each of the eight disciplinary schools. A data management assistant was appointed who had been trained in teaching and was familiar with assessment as part of the learning–teaching cycle. The assistant was, therefore, able to collate and interpret assessment data and identify patterns in assessment types, while the consultant was able to make observations in relation to strengths, weaknesses and gaps inherent in the approaches employed across the schools. Each head of school received a comprehensive report on assessment practices within their school and a targeted set of recommendations to facilitate discussion and debate and provide a catalyst for change.
The reports included comment on the range of assessment tasks; the type, authenticity and complexity of tasks; the scaffolding across disciplinary courses and levels (from Years 1 to 3 in the undergraduate program); the scheduling and frequency of implementation; the total number of tasks; potential discipline-specific issues; and recommendations for improvement.


The undergraduate level school-based audits were all completed by the end of 2010. The findings were discussed with the heads of school to verify the reports, canvass follow-up strategies for engaging with their staff and commence the conversation on improved efficiency and effectiveness. As a key component of the ongoing change management, staff seminars and workshops were held in each of the schools to generate disciplinary based insights on areas of strength and weakness in assessment practices and approaches.

Initially, the planned approach was to address assessment issues separately for undergraduate and postgraduate coursework in each of the eight schools, but there was some overlap. Many academics teach across both the undergraduate and postgraduate levels and many of the challenges in implementing assessment tasks are similar (e.g. assessment in large classes at undergraduate or postgraduate level). Consequently, the separate school based meetings were merged, which proved to be more effective in terms of discussion and the exploration of issues to be managed and improved in assessment, especially in a large faculty such as the ASB.

Audit of the undergraduate programs

Figures 5.1 to 5.4 are examples of the type of information provided to heads of school and academics to provide insights into assessment trends and practices in their schools, with the intention of raising awareness, stimulating discussion and facilitating change. Major differences between the qualitative and the quantitative based schools are clearly evident, and sharing of data across the disciplines illuminated this further. Many staff were surprised at the concentration of assessment on examinations, for example in accounting and economics, with 65 per cent and 68 per cent respectively. The mid-term exams were also a concern, as some staff were unable to return marked exams and provide feedback on time given the breadth of the assessment tasks and marking workloads. For many staff in these contexts, assessment in the mode of formal examinations was equated with rigour. In contrast, schools such as management and marketing had assessment based on exams at the level of approximately 35 per cent and 39 per cent respectively. These schools had higher levels of extended written work (e.g. essays), with implications for the time required for marking lengthy pieces of work.

Figure 5.1  School of Accounting assessment type distribution for undergraduate courses: Final exam 54%; Weekly quiz/test 12%; Mid-term exam 11%; Authentic task 8%; Group work 7%; Tutorial preparation & participation 4%; Extended writing 4%.

Figure 5.2  School of Economics assessment type distribution for undergraduate courses: Final exam 55%; Mid-term exam 13%; Weekly work 9%; Authentic tasks 6%; Extended writing (individual) 6%; Class test 4%; Group work 3%; Tutorial preparation & participation 3%; Individual presentations 1%.
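Audits of this kind reduce to a simple aggregation: sum each assessment type's weighting across a school's course outlines and express it as a share of all assessment. A minimal sketch follows, assuming course outlines are available as simple type-to-weighting mappings; the data shape and labels are assumptions for illustration, not the audit tool's actual format.

```python
from collections import defaultdict

# Minimal sketch of the audit aggregation described in the text.
# The list-of-dicts data shape and the task-type labels are
# assumptions, not the ASB audit tool's real structure.

def assessment_distribution(course_outlines):
    """Share of total assessment weighting per task type, in per cent.

    course_outlines: one {task_type: weighting} dict per course.
    """
    totals = defaultdict(float)
    for outline in course_outlines:
        for task_type, weighting in outline.items():
            totals[task_type] += weighting
    grand_total = sum(totals.values())
    return {t: round(100 * w / grand_total, 1) for t, w in totals.items()}
```

Run over every outline in a school, this yields exactly the kind of type-by-type percentage breakdown shown in Figures 5.1 to 5.4.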


Figure 5.3  School of Management (Management major) assessment type distribution for core courses: Final exam 33%; Group work 15%; Extended writing 15%; Tutorial preparation & participation 10%; Group presentations 10%; Authentic tasks 10%; In-class short answer 5%; Mid-term exam 2%.

Figure 5.4  School of Marketing assessment type distribution for undergraduate courses: Final exam 31%; Authentic tasks 28%; Group work 10%; Group presentations 8%; Mid-term exam 8%; Extended writing (individual) 7%; Tutorial preparation & participation 5%; Weekly work 3%; Individual presentations 1%.

The findings in these audits stimulated discussions on assessment in each school and at the faculty level. These discussions were followed by faculty level workshops and seminars and special presentations by the external consultant, the Director of Learning and Teaching and other appropriate experts to collective groups such as heads of school, program coordinators and first year teachers to influence decision making and facilitate changes to assessment in key areas. Further changes were driven by the assessment requirements of the AACSB accreditation process, as discussed further on.

In four large first year courses, the School of Economics replaced mid-semester exams with internal assessment tasks. Nevertheless, in this discipline, the culture of examinations as the most rigorous form of assessment remains, with final exams typically three hours in length and academic staff noting that, while marking and grading are time consuming, their colleagues 'are dedicated to ensuring high quality education and assessment'. In contrast, the School of Accounting reduced the length of half of its core course exams from three to two hours per exam. Their ongoing target is more two-hour exams and further quality improvement.

Another efficiency outcome of this analysis was that the School of Actuarial Studies both eliminated mid-session exams in several of its courses in 2011 and reduced all final exams to two hours in 2012. A longer term outcome relating to effectiveness is that the school negotiated with the Institute of Actuaries to reduce the 70 per cent exam requirement in favour of the inclusion of some progressive assessment, in order to focus on the development of broader graduate capabilities. An additional longer term outcome is that the Bachelor of Actuarial Studies is considering including a new compulsory core course, Managing Organisations and People, which specifically develops teamwork, communication and management capabilities, consistent with the broader capabilities now required by stakeholders, including professional associations.
Schools have also sought to engage students in authentic learning and assessment activities, such as the case based method. The faculty supported and encouraged the shift to a greater use of experiential approaches in teaching, learning and assessment by purchasing a licence for the use of the Harvard Business teaching resources, including the high quality cases. The School of Management introduced a team building simulation with a cohort of nearly 2000 students in the first year course in Managing People and Organisations, and other schools have adopted video based case studies and 'authentic' practice based materials.

Another common practice that was identified in the audits was the allocation of marks for class participation, such as tutorial preparation. Much discussion followed on what the learning outcomes were and what was being assessed. In such instances, the faculty Assessment Project intersected with the AACSB accreditation process, which required clear assessment criteria for all assessment tasks. The outcome was a recommendation for a well-constructed rubric that clearly set out potential assessment activities and learning outcomes for active student participation in tutorials. This rubric (and other rubrics developed for both the Assessment Project and accreditation by a language and learning expert) also served the purpose of ensuring consistency in practice across the schools – a relatively small allocation of marks but, nevertheless, the clarification of a fundamental assessment issue.

The Assessment Project has strengthened staff confidence to experiment with innovative approaches to assessment. For example, the School of Taxation and Business Law reported an increasing use of problem based learning activities in both assignments and exams ('to engage students and foster deep learning'), as well as an increased use of authentic tasks in assessment ('realistic scenarios and simulations'). The School of Management adopted 'reflective' writing pieces to assess students' team based learning and development in relation to their teamwork simulation, a virtual climb of Mount Everest.
There was strong interest in the use of technology to gain efficiency and to increase the quality of feedback in assessment. ReView grading software was trialled in 20 courses in the ASB to assess 3200 students, with visible efficiencies in marking time, creating an impetus for more widespread adoption in a broader range of courses both within the ASB and potentially within the university.

Following the first school feedback meetings, staff took opportunities to participate in follow-up workshops to address specific issues. For example, the School of Accounting requested a tailored workshop on 'developing multiple choice questions' to facilitate higher order thinking and develop a database for efficiency in the use of high quality MCQs. The School of Management requested a workshop on 'assessing group-work' and its many complexities (especially in an environment of mixed groups of English and non-English speaking students). Some staff also volunteered for individual consultations with the external consultant to help them prepare for innovations in efficiency and in broad aspects of assessment. As a result, the first year, S1 accounting core course (ACCT1501) changed from a lengthy and cumbersome two-hour mid-session exam (total 30 per cent) to a more efficient set of in-class tests that were short, sharp and purposeful: a 30-minute online short answer exam (10 per cent), online quizzes (15 per cent) and a self-reflective learning piece (5 per cent). Several second level courses in the School of Accounting took on these kinds of innovations in 2011 and 2012. Some courses, such as the second semester accounting core course (ACCT1511), initially retained the 1.5-hour mid-session exam (30 per cent), which was reviewed in 2012 and replaced with a series of online quizzes and tutorial tasks.

Most heads of school continue to pursue assessment changes for both efficiency and effectiveness. As expected, leadership in the schools has influenced the rate of change. Where there is head of school support for making changes in relation to reducing exam length and frequency, adjustments have already been evident.
Where heads of school accepted the tendency of some academics to equate ‘rigour’ with lengthy exams and frequent testing, or the tendency to compress everything that they want to test into the exams, the rate of change has been slower. However, the process is ongoing.


Audit of the postgraduate programs

In mid-2011, academic staff involved in postgraduate teaching were invited to join the Assessment Project. Postgraduate staff learnt from the undergraduate assessment discussions and workshops, and many participated in their school specific assessment workshops and in interdisciplinary workshops and forums at the faculty level. Targeted consultations were held with teaching teams and individual academics to apply more effective and efficient approaches to assessment. Because of the greater cultural and demographic diversity among postgraduate students and the smaller program enrolments (hundreds rather than thousands per program) the targeted consultations focused more on effectiveness than efficiency. These faculty-wide discussions, debates, workshops and consultations tested long held assumptions about assessment and highlighted possibilities for innovation and creativity. These initiatives created an environment appropriate for embedding good practice and created a culture appropriate for international accreditation.

Case study: Master of Commerce assessment review
The postgraduate assessment audit coincided with a major review of the flagship postgraduate program, the Master of Commerce (MCom). After analysing MCom course outlines, the project team found that student participation in tutorial sessions and group-based projects were significant forms of assessment (see Figure 5.5), but expectations needed to be more clearly described using rubrics.

Figure 5.5  Master of Commerce assessment task distribution in five core courses: Final exam 40%; Authentic tasks 22%; Group work 13%; Tutorial preparation & participation 8%; Mid-term exam 6%; Extended writing (individual) 6%; Weekly work/quiz 5%.

Strengths and weaknesses of the MCom assessment regime

Strengths
The project team found evidence of complexity and challenge in the assessment tasks. The course outlines demonstrated that academics have high expectations of their students and require
students to exercise responsibility for their own learning. Specifically, the compulsory core course in the MCom, Business Communication and Ethics in Practice, clearly specified how groups were to function, how students might exercise responsibility and how they were to be assessed. The Assessment Project team noted that strengths of the assessment regime for this program included formal assessment of student participation in tutorials (a signal of the importance of tutorial preparation); group work (to develop interpersonal capabilities, organisational and management skills, and intrapersonal awareness); weekly quizzes (useful for postgraduate students as self-paced learning processes); and mid-term and final examinations (helpful as summative feedback if they are well-constructed). An analysis of the range of assessment tasks in the MCom program is provided in Table 5.1. The first assessment task in these courses occurred within the first five weeks, providing opportunities for students to obtain feedback on progress, learn to self-assess and improve their learning. The audit of course outlines indicated that assessment tasks tested a wide range of graduate attributes. This assumption was tested in 2012–13 during preparation for AACSB accreditation, when all program learning goals were noted as taught, practised and assessed on comprehensive curriculum maps.

Table 5.1  Range of Master of Commerce assessment tasks in five core courses in 2011
For each of the five core courses (COMM5001–COMM5005), the table recorded the weighting given to tutorial preparation & participation, weekly work/quizzes, mid-term exams, final exams, extended writing, authentic tasks and group work, together with the total number of tasks and the week in which the first assessment fell due. Averaged across the five courses, the weightings were: tutorial preparation & participation 8 per cent; weekly work/quiz 4.6 per cent; mid-term exam 6 per cent; final exam 40 per cent; extended writing 6.4 per cent; authentic tasks 22 per cent; and group work 13 per cent.

Weaknesses
The assessment team found that testing dominated the assessment regime in the MCom. As Table 5.1 shows, final examinations comprised 40 per cent of the assessment regime, rising to over 50 per cent when weekly quizzes and mid-term exams were included. This approach privileges those who demonstrate their learning through memorisation and tests, and does not develop additional and important graduate attributes such as oral communication and teamwork, which are essential for professional success in business. Written exams needed to be reviewed for readability and linguistic accessibility for students for whom English was not their first language. Exams needed to be designed to assess students' higher order thinking and professional reasoning capacities.
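The testing share quoted here follows directly from the averaged weightings reported for the five core courses; the small calculation below simply makes the arithmetic explicit.

```python
# Arithmetic behind the testing-dominance observation, using the
# averaged MCom weightings reported in the audit (per cent).
final_exam, weekly_quiz, mid_term_exam = 40.0, 4.6, 6.0

testing_share = final_exam + weekly_quiz + mid_term_exam  # 50.6 per cent
assert testing_share > 50  # 'rising to over 50 per cent'
```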


There was a significant use of group work but in some cases a need for more formative feedback and appropriately nuanced grading processes. Some courses appeared to over-assess students, with the number of assignments per course ranging from three to 14, including weekly regular exercises that combined to constitute single assessment tasks. Another weakness was minimal emphasis on developing extended writing skills through tasks that had a business focus. The introductory course, Business Communication and Ethics in Practice, introduced relevant writing skills to students and the capstone Business Project course included an extended live case analysis. However, the project team noted that a wider range of professionally oriented reports could be used, including investigations and evaluations, which could be designed to develop sophisticated writing skills, and also reduce opportunities for plagiarism and collusion.

Recommendations
While there were some exemplary forms of assessment that engaged groups of students in authentic tasks, based on the audit the Assessment Project team recommended that:
• Academics should be encouraged to reduce the formal testing of students, especially mid-term and weekly tests. They need to ensure that activities that develop and assess higher order reasoning, the application of knowledge and skills to real world issues, and interpersonal communication are included.
• Online tools for developing regular quizzes that support assessment as learning should be used.
• Group work should be taught and guided by a common set of rules and policy guidelines to ensure that students develop the skills to be successful in group learning and that the assessment grades are an accurate representation of a student's performance.
• Resources should be provided to assist academics to ensure that students read and act on feedback.
• Course coordinators should be encouraged to be more specific and selective in their alignment of tasks with graduate attributes. The faculty should integrate skills, knowledge, capacities and values across programs.

As would be expected, these recommendations were consistent with the requirements of the international accreditation process.

Additional postgraduate level outcomes

The audit of assessments in postgraduate courses had additional specific outcomes. For instance, in the School of Accounting, academics who attended the MCQ workshop conducted by the external consultant incorporated her recommendations for achieving higher order learning through high quality question design. Specifically, the Lecturer-in-Charge (LIC) of ACCT5930 Financial Accounting worked with the external consultant to convert an in-class assessment task for 300 students to an online activity. The LIC combined regular formative assessment (five online quizzes) with a large controlled-condition summative assessment (a mid-session examination) to develop and assess student learning practices in an introductory postgraduate accounting course with a large enrolment. Following the mid-session exam, each of the 300 students received a grade within one hour of the end of the examination, and detailed, specific feedback via email within 24 hours. Staff hours previously directed to marking were redirected to assessment design, and the LIC received very positive feedback from the postgraduate students (Carson, 2011). The increased use of high quality online quizzes and MCQs in weekly tests and exams benefited students and staff, and the process highlighted opportunities for further staff development in assessment design. Overall, the audit of postgraduate assessment highlighted that some key skills, including communication skills, teamwork and professional skills, needed to be assessed more comprehensively. In response, a program to build academic staff capability in assessment was offered.

The Australian School of Business

Building staff capabilities in assessment design

This Assessment Project used 'assessment for learning' rather than 'assessment of learning' as an underlying principle. The project team acknowledged that changing entrenched practices or approaches to assessment that were not pedagogically sound was a challenge and required systematic development of staff capability. With this in mind, the external consultant worked with learning and teaching staff in the ASB (including the ADE and L&T Fellow) to collaboratively offer a range of workshops at the faculty level to assist academics in diversifying their approaches to assessment. The topics targeted assessment issues that could lead to improvement in student learning in both undergraduate and postgraduate programs. In 2010–12 a total of 50 pre-workshop meetings, workshops and forums were conducted. Sample faculty-wide workshops included the following topics:
• teaching and assessing large classes to improve learning
• assessing graduate attributes
• designing exams: multiple choice exams
• designing online exams and exams for higher order thinking
• assessing communication skills
• assessment and alignment (also for AACSB accreditation)
• developing rubrics and strategies for feedback (also for AACSB accreditation)
• 'closing the loop' in assessment
• assessing group work
• using technology in assessment (ReView, mobile technology).
One specific outcome of these interventions was the development of a comprehensive set of ASB assessment guidelines to encourage consistency and improved practice in efficiency and effectiveness.


Evaluation

These workshops encouraged more internal dialogue on assessment, with some striking outcomes. One school's CATEI scores improved from the 50–60 per cent range in S1, 2010 to the 80–90 per cent range in S1, 2011. Across the ASB, a culture of experimenting with innovations in assessment developed: simulations, reflective writing and authentic resources were widely adopted, and the use of Harvard Business cases increased from about 40 to 600 units in 2010–11. There is evidence of improved practice in measuring learning in relation to the graduate attributes. As an important and necessary precursor to accreditation, there has been a focus on standardised rubrics for formative feedback and assessing learning. MCQ exams that assess higher order thinking are now more widely used. To develop academic leadership capabilities in this area, assessment seminars were also held for heads of schools. These allowed heads of schools to become aware of assessment trends in their disciplines and to challenge assumptions about assessment within their schools.

Efficiency

Measures of efficiency gains from the Assessment Project can be seen in the reduction in the number of assessment tasks and mid-semester exams in most schools. Exams have typically been reduced from three hours to two, and there has been an overall reduction in the length of essays and assignments. There is evidence of increased commitment to the use of the online environment. Five of the six first year undergraduate core courses used online quizzes to maximise timely and targeted feedback for large cohorts. Also, test banks were created for online tests and exams. GradeMark software was trialled in 12 courses. This software enabled staff to mark submissions online, and students to access the feedback online. The tool includes a rubric marking function that allows staff to mark onscreen according to defined criteria, and facilitates timely marking and feedback. Some of these features are mirrored in the ReView software (Carroll, 2013), which has also proven relatively successful.

Part 2: Embedding changes to assessment using assurance of learning for international accreditation

Building on the knowledge gained through the Assessment Project, in 2012 the ASB embarked on the final stages of an international accreditation process, incorporating assurance of learning (AOL) as a formal system to assure that students have met program learning goals. AOL is based on a system of international learning standards developed specifically for business schools (see the boxed section below). It is a systematic process that clarifies and measures program learning goals (which map to graduate attributes) to ensure continuous improvement of programs.

Assurance of learning standards for AACSB

Assurance of learning is based on a set of international standards developed by AACSB, including the following:

Standard 15 Management of curriculums: The school uses well documented, systematic processes to develop, monitor, evaluate, and revise the substance and delivery of the curriculums of degree programs and to assess the impact of the curriculums on learning. Curriculum management includes inputs from all appropriate constituencies, which may include faculty, staff, administrators, students, faculty from non-business disciplines, alumni, and the business community served by the school.

Standard 16 Bachelors or undergraduate level degree: Knowledge and skills. Adapting expectations to the school's mission and cultural circumstances, the school specifies learning goals and demonstrates achievement of learning goals for key general, management-specific, and/or appropriate discipline-specific knowledge and skills that its students achieve in each undergraduate degree program.

Standard 17 The bachelors or undergraduate level degree programs must provide sufficient time, content coverage, student effort, and student–faculty interaction to assure that the learning goals are accomplished.

Standard 18 Masters level degree in general management (e.g. MBA) programs: Knowledge and skills. Participation in a masters level degree program presupposes a base of general knowledge and skills appropriate to an undergraduate degree. Learning at the masters level is developed in a more integrative, interdisciplinary fashion than undergraduate education.

Standard 19 Masters level degree in specialised programs: Knowledge and skills. Participation in a masters level program presupposes a base of general knowledge and skills appropriate to an undergraduate degree and is at a more advanced level.

Standard 20 The masters level degree programs must provide sufficient time, content coverage, student effort, and student–faculty interaction to assure that the learning goals are accomplished.

Source: AACSB.

The changes to the assessment culture within the ASB described in the first part of this chapter have been embedded within the AOL process in the following ways:
• Academic staff, especially program directors, have participated in a range of internal curriculum mapping workshops to ensure that the streamlined ASB program learning goals are appropriately taught, practised and assessed.


• Program learning goals now incorporate honours and higher degree research programs.
• Appropriate technologies, including ReView software, are systematically used to embed criterion-referenced marking and AOL data collection.
• Program learning goals and detailed assessment descriptions are embedded in course outlines.
• Teaching resources for all learning goals, including teamwork and oral and written communication, have been developed and are available to academic staff via the intranet.
• Consistent with AOL principles, there has been an increasing emphasis on the program as the unit of analysis for continuous improvement. Program directors are now responsible for analysing program level development of program learning goals and for producing 'closing the loop' reports every semester to indicate their plans to improve assessment activities, based on systematic sampling of selected assessment tasks by course and by program.
From S2, 2013, the ASB committed to reviewing programs against program learning goals every semester. Because this requires ongoing collection, analysis and interpretation of assessment data, staff chose to adopt various technologies to streamline these processes as much as possible (Carroll, 2013).

Curriculum mapping of assessment plans

Workshops were conducted in each school prior to S1, 2013 to map the assessment of undergraduate program learning goals in the majors of the Bachelor of Commerce and in other undergraduate programs, and to select assessment tasks for AOL. These workshops were led by the Associate Dean (Undergraduate) with an ASB learning and teaching adviser, and were attended by staff responsible for the courses in each major or program. More than a dozen workshops were conducted in S1, 2013 to map the development of postgraduate program learning outcomes (PLOs) and to select assessment tasks for assessing PLOs for AOL in postgraduate masters coursework programs, including the AGSM MBA and MBT programs. Outcomes have included curriculum changes such as the inclusion of a capstone course or changes to the sequence of courses, provision of more explicit instruction and guidance (e.g. for written and oral communication), and more opportunities for students to practise and receive feedback on their skills as they progress through a program or major sequence. The collaborative workshops provided a more comprehensive approach to curriculum mapping as a means to clarify program learning goals and to assure learning across all of the goals.

Implementation of ReView software

ReView was introduced from S1, 2013 to streamline the grading of AOL assessment tasks. It allowed students to self-assess, and markers to provide feedback to students on each criterion. The software provides reports on students' achievement of individual program learning goals (PLGs) and PLOs for analysis by program directors (Carroll, 2013). In S1, 2013, 14 courses with 62 staff and 2730 enrolled students used the system. Training and support were provided to staff via a series of newsletters and training workshops supported by the ASB educational designer (eLearning) and the educational technology coordinator. Online resources for implementing ReView were provided to staff on the ASB intranet. For example, teamwork resources, including rubrics, teaching tips, student resources (including a video and PowerPoint presentation), examples of ASB good practice and useful links and references, became available for staff to use and to distribute directly to students as appropriate. Staff who research in areas such as teamwork have developed videos and other assessment resources for all other teaching staff to use.


Documentation, roles and responsibilities

A working party was established to examine the role of program directors; it included the HOS of Accounting and of Banking and Finance, the ADE and one program director. An audit of current variations across schools illustrated the need for more clarity in the role of program director, relative to the more traditional, accepted role of the HOS in academic matters. The program director role is seen as increasingly strategic to the ASB, given the renewed emphasis on programs, not courses, as the unit of analysis for continuous improvement. Program directors will increasingly be asked to collect and interpret AOL data, to engage academic staff and to lead the identification of curriculum changes.

School presentations on AOL and criterion-referenced assessment

The ADE and the Learning and Teaching Adviser were invited to present at school staff meetings and Learning and Teaching Committee meetings to disseminate information about AOL, including embedding program learning goals in course design and assessment, curriculum mapping and processes for continuous improvement. The curriculum mapping workshops and AOL presentations allowed staff to engage with the principles of AOL and helped develop a culture of continuous improvement at the program level. Resources for each program learning goal were progressively developed, located and disseminated on the ASB staff intranet in 2013. Workshops on specific PLOs were conducted for academic faculty in early 2013, initially with a focus on undergraduate programs; topics included Teamwork; Oral Communication; Ethical, Social and Cultural Responsibility; Written Communication; Critical Thinking; and Global Perspectives. Workshops on the different expectations for undergraduate and postgraduate students will also be conducted later in 2013. So far, these workshops have resulted in cross-disciplinary sharing of resources and good practice. The following extracts from the 'closing the loop' reflective reports by program directors illustrate plans for program level continuous improvement.

Example 1: Bachelor of Information Systems: The Program Coordinator has met with LICs of all core UG courses in order to discuss the findings of AOL Cycle 1. In terms of content, students' ability to communicate orally and in writing is now practised and assessed in more courses. In terms of assessment structure, we are assessing the (modified) program learning goals now in two central (capstone) courses.

Example 2: Master of Finance: Improvements to the Master of Finance include a widespread recognition of the critical need for teamwork and presentation skills for the finance profession, and the importance of teamwork and communication skills. Specifically, the LIC will work closely with the ASB Learning and Teaching Team to align assessment closely with PLGs and to integrate the relevant rubric into the conventional marking guide.

Example 3: Master of Marketing: In Semester 1, 2013 we introduced a day long induction, with a focus on course themes, written and verbal communication, teamwork and ethical and cultural issues. The codes of professional practice for the three professional associations are now explicit and used to reinforce ethical standards within the classroom, including citations and referencing. AOL is now viewed as continuous improvement.


The impact of AOL on the ASB: Cultural change

The workshops discussed above have built on the work of the Assessment Project to prioritise teaching. Learning and teaching committees (and more academic staff) have been reinvigorated through these system level changes and are now driving change within their schools. For example, the School of Marketing Learning and Teaching Committee has been reconstituted, and the school is providing small funding grants to encourage staff to conduct research into teaching. There has been greater sharing of practice among staff and across disciplines through the learning goal workshops and subsequent AOL activities. Another example of cultural change is seen in the revised AGSM EMBA, which will include three new elements to specifically address AOL requirements: individual management reports in Principles of Management, Innovations in Management and Strategic Management. Also, in the Bachelor of Economics, ethical issues are now being more formally assessed.

Curriculum change

School based workshops to map the undergraduate curriculum identified the need for some majors to improve the sequential development of skills across the major. In some cases, the prerequisite structure has been changed to enable more effective sequential development. The teaching of ethics, as one example of a program learning goal, is made explicit at the undergraduate and postgraduate levels in Managing Organisations and People (UG) and in Teams and Ethics for Competitive Advantage (PG). Assessments have been modified to clearly assess all elements of the program learning goals, including oral communication and teamwork (process). The AGSM Master of Business and Technology program has included a new Leadership program assignment to more clearly measure progress against the program learning goals. The Bachelor of Economics, while focusing on knowledge and critical thinking, now includes a critical mass of written, oral and group assignments early in the program.


Challenges

The Assessment Project was a change management intervention into components of academic staff workload. It surfaced entrenched beliefs about assessment as measurement of 'what' has been learned rather than measurement 'for' learning. In some cases, quality was confused with quantity, as seen in the length and number of assessment tasks; some disciplines associated the quantity and length of exams with 'rigour'. For many staff actively researching within a research active institution, there is minimal time for assessment training and development, including attendance at targeted assessment workshops. For others, there is a perceived lack of time for investing in innovations, or a potential lack of confidence in the use of innovative technologies. Some staff expressed concern about technology and the impact of assessment innovations on student feedback, for example in the university-wide CATEI data. In some areas there are perceptions of insufficient resources, notably concerns about infrastructure, with some lecture theatres and occasionally seminar rooms not being appropriately equipped for new technologies. This was addressed in 2012 and 2013 with targeted, university-wide investment in Echo 360 technology across all appropriate teaching spaces. Even with the interventions described in this chapter, summative assessment, rather than formative assessment, typically remains the key approach to assessing student learning across the disciplines in the ASB. Improving assessment is an ongoing challenge, as universities continue to contend with changing expectations of the nature and focus of higher education. As new technologies and modes of delivery emerge, both the university and the faculty need to monitor and learn from developments in learning, teaching and assessment, to ensure the best possible student learning journey.
This Assessment Project and the subsequent assurance of learning activities have engendered a range of opinions among academic colleagues, but these interventions have created a significant step forward in the ASB assessment regime, from both a course and a program perspective. They have also helped to create a culture of commitment rather than compliance in assessment policy and practice.

Conclusion: Sustainable achievements from these projects

The faculty-level assessment initiative was successful in achieving many of its aims. It sought to draw on pedagogical theory (Flood, Coleman and Marshall, 2013) to undertake a systematic audit of assessment practices in each school in the ASB, to provide insight into those practices and to generate a database of baseline activities and approaches. The findings were used to raise awareness of assessment practices and to improve measures of effectiveness (to improve the student experience) and efficiency (to reduce academic workload). The project established a clear set of principles and guidelines to ensure good practice in assessment. The initiative also encouraged innovations in assessment, including strategies for facilitating higher order thinking, developing graduate attributes and program learning goals, and incorporating the use of technology. The project provided a comprehensive program of training and development for academics to ensure implementation of good assessment practice within the disciplines and across the different levels, and to enhance the coherence of assessment across the faculty. Overall, the project served to strengthen the alignment between faculty assessment processes and practices and the university's assessment policies, procedures and aspirations.

From a strategic perspective, the Assessment Project set the context for the ASB's approach to assurance of learning for AACSB accreditation, which was achieved in June 2013. Collectively, these experiences have acted as a focus for continuous improvement of undergraduate and postgraduate programs. The initiatives strengthened staff confidence to experiment with innovative and creative approaches to assessment. These experiences have also taught the ASB that contemporary learning and teaching models require academic staff to take increasing responsibility for all aspects of the student learning journey, including curriculum mapping, in order to create an environment where students can achieve their program learning goals and, eventually, their career aspirations. Academic staff are required to be professional educators as well as professional researchers, and both roles require significant investment in ongoing training and development. These projects have helped the ASB to clarify assessment quality as a vital and measurable component of program quality. The ASB continues to seek efficiency, effectiveness and creativity in its assessment strategy. Teaching staff continue to develop creative assessment tasks as a component of curriculum mapping for AOL. The ASB relies less on examinations as a primary form of assessment, has reduced 'over-testing' and now encourages a more creative approach to the design and implementation of assessment. The AOL process is now embedded in the standard practice of academic marking. This journey continues.

References

AACSB, The Association to Advance Collegiate Schools of Business. Available from . [Accessed 21 April 2013.]

Carroll, D (2013) 'e-Assessment in Australian Higher Education', in K Coleman & A Flood (eds) Marking Time: Leading and managing the development of assessment in higher education, The Learner, Champaign, Illinois, 143–59.

Carson, E (2011) 'Online Formative and Summative Assessment in Masters Level Financial Accounting (ACCT5930)', Semester 2, 2010, School of Accounting, Australian School of Business, University of New South Wales, Report to Associate Dean (Education).

Flood, A, Coleman, L & Marshall, S (2013) 'Introduction: Assessment in higher education', in K Coleman & A Flood (eds) Marking Time: Leading and managing the development of assessment in higher education, The Learner, Champaign, Illinois.

Acknowledgments

The authors would like to acknowledge the ongoing dedication of our colleagues, all of whom made significant contributions to this chapter: Carolyn Cousins, Professor Elizabeth Carson, Professor Greg Whitwell, Professor Chris Adam, Dr Jennifer Harris, Chong Eng Tay, Amanda Lockett, Associate Professor Peter Roebuck, Professor Jan Orrell, Ann Wilson, Peter McGuinn, Danny Carroll and Ginette Farcell, and all the staff of the ASB who have contributed to the continuing success of these projects.


6

The Faculty of the Built Environment: Improving approaches to assessment through curriculum renewal

Nancy Marshall and Lisa Zamberlan

The Faculty of the Built Environment (BE) at UNSW has a renowned education and research profile that is driven by a focus on the 'design, construction and management of the 21st century city'. The faculty has a breadth of disciplines unique to the region, providing an ideal framework to support research and education on complex urban themes. The BE's 71 full-time academic staff and 10 research centre staff deliver 14 degrees within the disciplines of Architecture, Architectural Computing, Interior Architecture, Landscape Architecture, Industrial Design, Planning, Building/Construction, Property and Development, Urban Development and Design, and Sustainable Development. This suite of undergraduate and postgraduate degrees is offered to approximately 2300 undergraduate students, 550 postgraduate coursework students and 120 higher degree research students.

In the Faculty of the Built Environment, the UNSW Assessment Project formed part of a much broader curriculum renewal process, which involved a strategic repositioning of the faculty. The faculty had begun a major strategy to review and rethink its research and education profiles. The intention of the repositioning was expansive change through new and renewed curriculum initiatives, to ensure a more distinctive student learning experience with globally focused, research-led, interdisciplinary and professionally accredited curriculums. It was at this juncture that the central Assessment Project commenced; it was quickly aligned to the broader BE repositioning process and became an institutional driver of curriculum redevelopment. The Assessment Project also enabled improvements in approaches to assessment and practices across the faculty. It enabled the revision of student learning outcomes across the disciplines; aligned assessment with program and course learning outcomes; implemented assessment protocols to establish quality, parity and consistency in assessment practices across the faculty; and generated research on innovative assessment practices in the BE, which has informed the development of particular courses.

Overall, this chapter reflects on how the BE used the Assessment Project to further its curriculum renewal process and ensure that the faculty's unique assessment philosophy and approaches were implemented in each of its 14 degrees. The opportunity afforded by this institutional level project was the implementation of large-scale curriculum change across the entire faculty. The chapter begins by identifying the key drivers of curriculum change, followed by a description of the unique context for learning and teaching in the faculty. It then outlines the program review process that acted as the intersection between the much needed curriculum change and the Assessment Project. The Assessment Project had three goals in the BE; the outcomes and impact of each goal are outlined and then combined to demonstrate their role as enablers of the curriculum renewal process.
The chapter concludes with a discussion of the challenges experienced and some of the barriers to the project.


The drivers of curriculum renewal in the faculty

The arrival of a new dean in 2009 meant that the faculty's curriculum (and research) came under review and renewal. The faculty started with a comprehensive examination of its learning and teaching and research performance metrics for benchmarking purposes. This self-assessment identified broad curriculum, research, financial and operations issues, gaps and opportunities, all of which were drivers for change of some sort in the BE. This chapter focuses on the drivers of curriculum change and the results of initiatives to address them.

The faculty believed the establishment of a graduate school would be one way to help with its repositioning strategy. A graduate school meant that new postgraduate (PG) programs were established and all current programs underwent a major review process. Results from a postgraduate student survey suggested that the PG programs on offer could be greatly improved: PG students wanted better integration of teaching and research and wanted to be more market-ready by the time of graduation. The faculty's facilities were at capacity, so the redistribution of courses across three sessions was designed, especially at the postgraduate level, to alleviate space pressures. Facilities were overused and the curriculum structure did not allow the best use of studio space. Many of the BE's programs, not just those to be included in the new graduate school, were overdue for their regularly scheduled review as required by UNSW policy or by relevant professional accreditation agencies. Some programs had not been reviewed for five or more years. Additionally, many of the PG programs needed to be altered in structure and curriculum content to comply with the Australian Qualifications Framework. As mentioned above, very few of the BE's programs (undergraduate or postgraduate) had an articulated set of graduate attributes.
Most did not have program learning outcomes or course learning outcomes with aligned assessment tasks. Some programs did not have sufficient internal staff capacity to deliver the proposed curriculum and so were vulnerable to sessional workforce availability. The BE was offering some programs that did not have a reasonable prospect of a sustainable income stream. Entrance ATAR scores were holding steady or falling relative to competitors, and a couple of programs had substantial attrition rates after first year. Within BE programs, many courses needed to be redesigned and reconceptualised to address the lack of appropriate learning outcomes and assessment expectations, and their place in the sequence of course offerings and prerequisites. While the BE was offering courses that were quite popular with students, many did not align with the faculty's strategic direction, did not fit with the faculty's core values, or were very expensive and operating outside current budgets. Although many staff in the faculty knew the value of program coherence, few had pursued program, course and assessment task alignment, and many were not trained in this area.

In summary, the ideal pedagogical targets of the BE, identified from many sources – students, staff, industry and educational experts – were not being met. The research and teaching nexus needed improvement, facilities needed to be used differently and administrative acumen was needed to support the operations of the faculty. The faculty needed to address these issues and did so by initiating its major repositioning strategy via a curriculum renewal process. It saw great potential for leveraging its strengths, conducting improvements, creating change and enhancing its profile. In response to unfavourable metrics, internal studies and analyses of issues and opportunities, the repositioning strategy was articulated in a living document entitled 'Built Environment Program Directions 2011–2015'. This document represented the faculty's aspirational curriculum map and renewal process.
It represented a future vision for the faculty; articulated a set of faculty core values which clearly stated ‘who we are’ and hence ‘how we shape our students’; set out a process for program review and achievement of the

130 Part II

pedagogical targets noted above; and used the Assessment Project to ensure that each program had an articulated set of graduate attributes, program learning outcomes and course learning outcomes with aligned assessment tasks. Since the BE’s major strategic repositioning was already underway, the decision was taken to leverage the Assessment Project as an institutional enabler of the curriculum change and program review process.

The learning and teaching context in the faculty

To understand the unique context in which educational delivery occurs in the BE, one needs to understand the basic tenets of the faculty’s teaching platform and teaching practices, the nature of the learning spaces and the approaches to assessing learning across the disciplines. Education in the faculty is focused on professional degree programs, with learning designed to parallel activities in industry practice, thereby maximising student engagement with industry-relevant content and the skills and capacities expected in the professional environment. The majority of the core teaching and assessment carried out in BE disciplines involves research, collaboration, critical review, peer learning and reflection, and the application of knowledge to real world or simulated contexts.

In the faculty’s design-based programs (which constitute the majority of programs), assessment and feedback in the design studio is both formative and summative. Each week, students are offered formative feedback on the progression of their project work towards final submission. Staff and students engage in discussions about progress, issues and strategies in order to advance the work for assessment. Learning and feedback modes in the studio include one-on-one feedback, group discussion, hands-on prototyping workshops and both informal and formal presentations that mimic the professional practice environment. Summative feedback occurs in the form of a presentation of a particular stage of the design

The Faculty of the Built Environment 131

proposal to a group of peers, industry experts and studio staff. In both formative and summative assessments, feedback occurs in real time and in public. The close contact and the public nature of the learning are captured in Figure 6.1.

Figure 6.1  Students working on projects in the design studio at the Faculty of the Built Environment

This approach to learning and assessment, involving authentic practice-based learning activities, is considered key to the professional focus at the BE. However, there was concern in the faculty that such an entrenched learning and teaching culture often results in learning outcomes being implicitly understood by staff, but not always successfully articulated and communicated to students. Similarly, expectations about student learning are often associated with cultural codes of conduct related to professional disposition and are not always clearly articulated in the form of learning outcomes. Studio learning and teaching practices, in particular, have a significant Western European history and studio teaching

continues to be a closely guarded setting in design education. As such, these modes of learning and teaching are at risk of being impenetrable to review or renewal. Recognising the issues and challenges of the design studio and of the assessment that occurs in this context, the dean requested a strategic review of the design studio, conducted by senior professors in the faculty, signalling the importance of the review and the determination for change. This review became part of the broader curriculum renewal process.

As suggested above, the design studio contributes to the unique learning and teaching context in the BE. Issues identified in the review of studio teaching described above, as well as the problems associated with poorly articulated learning outcomes at both the program and course level, were contributing to a bigger set of challenges the faculty was facing. The following section provides a comprehensive faculty perspective on these challenges and the drivers of curriculum change. It provides the rationale for repositioning the faculty and taking a systemic approach to the Assessment Project as it renewed and revitalised its curriculum.

Curriculum change and the Assessment Project

The program review process was central to effecting change and overall curriculum renewal in the BE. Its overall objective was to achieve the faculty’s pedagogical targets and ameliorate the curriculum issues described above. The BE executive established a faculty-wide program review schedule which included each of the faculty’s 14 degree programs. With a remit to drive the educational change initiatives in the faculty and direct the Assessment Project, the Associate Dean (Education) (ADE) established a collaborative working team of experts to support BE staff. This education team, the equivalent of 4.5 full-time staff, was assigned to facilitate and support the program review process and the Assessment Project by working closely with program review teams. (The team comprised

learning and teaching fellow Lisa Zamberlan (sponsored by UNSW and the faculty) and a group of experts in higher education: Professor Janice Orrell, chief consultant on the UNSW Assessment Project; Dr Stephanie Wilson, with extensive experience in learning and teaching in higher education; educational change consultant Dr Larry Hulbert; and Dr Katina Dimoulias and Melissa Rowley, who provided project research support. High-level administrative support was provided by Monica McNamara and her colleagues in the BE’s Student Centre.)

Program review teams were established for each existing and proposed new degree, consisting of program staff and supported by the office of the ADE. Program review teams were required to be involved in ‘share points’ to provide opportunities to compare progress, share ideas, exchange innovations and report recommendations for curriculum renewal through the faculty and UNSW committee structures. Teams met regularly over a period of several months to articulate graduate attributes and program learning outcomes, and to review and (re)articulate course learning outcomes. Each team also investigated the relationships between course outcomes and assessment tasks. In particular, teams were encouraged to consider whether or not current assessment tasks were appropriately aligned with the intended course and program learning outcomes. The program review meetings became a vehicle for constructive discussion on assessment efficiencies, effectiveness and future innovation in the BE.

To assist with the program review process, a detailed set of guidelines was developed to outline the purpose, guiding principles, terms of reference, resources and support for the BE program review process. The guidelines, which were accompanied by a set of supporting resources, were intended to focus discussion on different aspects of curriculum development and provide support for an investigation into current assessment practices.
To support the review process, ‘data packs’ were created to provide benchmarking data specific to each degree program. These packs provided information on all

aspects of the BE’s current degrees, including: the challenges and aspirations documented by full-time staff as part of an analysis of program strengths, weaknesses, opportunities and threats (SWOT analysis); longitudinal student enrolment figures; staff to student ratios by program; graduate employment upon degree completion; relevant accreditation policies and guidelines; current entry requirements; longitudinal course and teaching evaluations; and, finally, summaries of program-specific student satisfaction data from the UNSW Postgraduate Coursework Student Experience Survey. The packs also included Australian industry competitor information, discipline-specific assessment trends and international best practice on similar degrees.

Curriculum renewal and the Assessment Project

The Assessment Project became an institutional driver of curriculum redevelopment for the purpose of repositioning the faculty. Institutional funding and the increased focus on quality assurance through assessment across the UNSW campus afforded the BE greater capacity to engage in a strategic review of its curriculum and its specific assessment approach and practices. The Assessment Project enabled program coherence (via assessment task alignment with course and program learning outcomes and graduate attributes), and the introduction of assessment efficiencies and effectiveness across the different programs in the faculty. The ADE and the Learning and Teaching Fellow identified three key goals for the Assessment Project:
1 to enable the BE’s strategic curriculum renewal process
2 to review assessment procedures and protocols for improvement across the BE
3 to support research into innovative assessment practices.
Each of these goals is discussed below.

Goal 1: Review of assessment to enable curriculum renewal

The opportunity to audit, examine and then redesign assessment practices and tasks for each course in each program was considered a major lever for curriculum renewal in the faculty. The audit documented and analysed assessment tasks across all years of each of the BE’s 14 degree programs. This involved the collation and analysis of the assessment information available in all 250 course outlines offered in the faculty in a one-year period, which included course aims, content and assessment tasks. These data informed two major reports by Professor Janice Orrell, which summarised assessment practices and trends across all courses and programs. Her final reports were distributed to the BE executive and all program directors for appraisal and for use as a formal part of the program review process. Consultations were then held between program staff, Professor Orrell, staff from the LTU and the BE’s education team. These reports and meetings offered program staff a platform to discuss their current assessment philosophy and practices, to debate what assessments should occur across the stages of a degree and to review the purpose of assessment in the BE disciplines.

As an example, the information in Figure 6.2 provides a summary of assessment tasks in all of the undergraduate core courses examined. Similarly, the information in Figure 6.3 provides a summary of assessment tasks in all of the postgraduate elective courses examined. It should be noted that additional information about assessment may have been made available to students in class, but the logistics of accessing this when conducting a whole-of-program review made it impracticable to include in this audit. As can be seen in Figures 6.2 and 6.3, the review indicates that a significant proportion of assessment tasks in both undergraduate and postgraduate courses comprised projects and studio projects incorporating a wide range of authentic tasks related to professional practice.
While these tasks were commended for being consistent with the integration of authentic learning for the

Figure 6.2  Faculty undergraduate programs core courses assessment task distribution (Orrell, 2011a):
• Individual tasks 21.7%
• Project 17.1%
• Studio project 15.9%
• Extended writing 8.3%
• End-of-semester exam 7.8%
• Research 7.2%
• Unspecified 6.4%
• Lab-based 4.7%
• In-class assessment 4.1%
• Participation/attendance 2.7%
• Mid-semester exam 2.2%
• Reflective journal 2.0%
• Group work 0.1%

Figure 6.3  Faculty postgraduate programs elective courses assessment task distribution (Orrell, 2011b):
• Project 35%
• Individual tasks 15%
• Extended writing 13%
• Lab-based 11%
• Exams 6%
• In-class assessment 6%
• Unspecified 5%
• Research 4%
• Studio project 4%
• Group work 1%

broader professional context, Professor Orrell recommended that:
• students be provided with more explicit criteria by which their performance is assessed
• the development of students’ capacity to work in teams be made assessable
• formative development of learning through feedback be provided.
Professor Orrell suggested that the criteria for assessing the creative product may not be easily understood by students. In addition, she stressed the need to develop more advanced levels of assessment to facilitate higher order learning, with an increasing emphasis on research, independent learning and teamwork as students progress from their first to final years of study. For all programs, she suggested that the faculty adopt program-specific graduate attributes and program learning outcomes, and demonstrate clear alignment between assessment tasks and course learning outcomes. She further suggested that the BE ensure that assessment was consistent and contributed to the incremental development of professional skills.

In support of the faculty curriculum renewal project, the review of all forms of assessment highlighted traditional and often tacit educational paradigms, while facilitating debate on appropriate teaching practices and assessment in professionally based degree programs. Redefining the student experience, articulating learning outcomes at a program level and supporting learning progression became fundamental requirements of the curriculum renewal process. In both her reports, Orrell (2011a; 2011b) highlighted a need for programs to align learning objectives at the course and program levels with descriptions of the student experience at the faculty level. In the program review meetings, staff reframed disciplinary assessment approaches based on program priorities and the assessment review reports. That is, program staff developed completely new assessment tasks to align with newly articulated course and program learning outcomes.
Reporting mechanisms within the

program reviews, together with the institutional emphasis on assessment through the relevant educational committees, ensured teams met the faculty targets and aligned explicit program and course learning outcomes with assessment practices. The major outputs achieved from the pursuit of this goal, beyond the completed program reviews, included: factual data packs containing metrics on assessment as well as other learning and teaching measures; reports and high-level analysis of assessment practices in all BE degrees; extensive meetings that provided the platform for discussing, defending and debating assessment approaches and practices; and a faculty-based education team able to lead informed discussion and advance changes to curricula and assessment.

At the time of writing, the program review process had resulted in major program and assessment revisions to 50 per cent of the undergraduate programs and to 45 per cent of the postgraduate programs. For completed program reviews, courses have been redesigned with each assessment task aligned with learning outcomes at the course and program levels. Graduate attributes have also been articulated for each program and align with the faculty’s core values and pedagogical targets described earlier. Reviews for all remaining degree programs are in progress and are due for implementation in 2014–15. A number of new programs are also in development. In each case, discussion throughout the development process has focused on the articulation of program learning outcomes and graduate attributes, and the alignment between course-based student learning outcomes and assessment tasks. Incorporating this level of alignment in course and curriculum mapping ensured that:

Where courses are co-taught or use tutors, the course information can be easily accessed to ensure shared understanding about course aims, intended learning outcomes and assessment tasks and marking criteria.
Program and course coordinators

can [also] use the information in ensuring that assessments are complementary, holistic and develop student capabilities at appropriate stages. (Orrell 2011a, p.2)

The information in Orrell’s two reports (2011a; 2011b) was comprehensive and rich in substance. The rhetoric inside and outside the faculty was that BE staff over-assess, but the Assessment Project and Orrell’s reports indicated that, in general, they do not. A closer look suggests that perhaps some staff ‘over-teach’, especially in the design studio, and that assessment tends to be split into many different types of tasks, as is clearly evident in Figure 6.2. As a result of analysing the design studios, some studios have had contact hours reduced, with no loss of teaching quality, student learning or assessment tasks. This shift will be standardised across all disciplines, producing significant teaching savings for the faculty over the long term, as discussed later in the chapter.

Additionally, Orrell’s reports highlighted the importance of interdisciplinary learning and collaboration and recommended a focus on the development of associated assessment practices. The integration of interdisciplinary learning into the curriculum has been undertaken as a faculty priority and represents an important component of the faculty’s future direction. The immediate result has been that all senior students are now required to enrol in a suite of authentic core courses: BE Design Competition; Engagement – Local or Global; BE Inquiry and Research; and Preparation for Practice. Learning and assessment tasks focus on professional collaboration, with reflection on discipline contribution in interdisciplinary contexts.

In summary, the Assessment Project was a timely enabler of the faculty’s curriculum renewal process, which saw 14 programs being redesigned. As a result, the BE delivers much better, more coherent programs, with progressive and authentic assessment tasks. The impact of reviewing all assessment and ensuring

curriculum renewal should not be underestimated. Students are receiving a higher quality learning experience, and staff have developed a better understanding of why and how they assess and, consequently, have improved their teaching. Ultimately the faculty’s reputation will improve as the marketplace recognises the higher quality degrees and market-ready professionals.

Goal 2: Review of assessment procedures and protocols for improvement

In addition to the faculty-wide review of assessment as a means of reviewing and redesigning its curricula, the faculty recognised the need for a systemic and consistent approach to assessment procedures and protocols across all courses. The education team led extensive discussions on BE assessment procedures and practices with the faculty’s research and curriculum forum (consisting of the associate deans, heads of discipline and all nine program directors). These discussions focused on faculty-wide opportunities for parity and efficiencies in assessment. In 2012, an assessment protocol document was adopted by the faculty’s Education Committee to ensure consistent approaches were taken to class attendance, assessment submissions, late submission penalties and special consideration procedures. These protocols are now included in all BE course outlines. Further, a sessional handbook, which highlights UNSW assessment policy and procedures and BE protocols regarding teaching and assessment practices, is now distributed annually to casual and new teaching staff. Workshops and forums for sessional and full-time staff continue to address topics such as ‘assessing creativity’ and ‘interdisciplinary group assessments’, further contributing to the goal of having consistent and fair assessment protocols in the BE.

Although this was a relatively modest goal, the BE now has a fair and consistent protocol for assessment across the faculty and continues to offer opportunities for discussion and training among its teaching staff – all as a result of the university’s Assessment

Project. The impact is increased clarity and understanding among BE students. Staff are clearer in articulating their expectations to students and have a formal faculty protocol to support their stance on assessment practices. An unintended consequence of this goal is that full- and part-time staff are now able and willing to discuss and debate assessment, and have a legitimate platform from which to do so. They understand the BE’s current practices, especially as a result of Professor Orrell’s work, have applied that practical knowledge and the normative concepts to their programs, and are now willing to innovate.

Furthermore, there has been clear evidence of improvements brought about by the introduction of the new assessment protocols. For example, the Interior Architecture program has focused on creating clear and transparent course and program learning objectives that are sequenced through the stages of the degree. With a particular focus on articulating objectives and performance criteria in the design studio, this program has framed a new pedagogy for learning that now benchmarks the structures of engagement and assessment in design studio courses. The Interior Architecture undergraduate degree program has committed to creating assessment tasks that support research, creative exploration, experimentation, peer learning and self-reflection in design studio courses. Course objectives in the program now clearly articulate these values, with tasks designed to promote learning through explorative processes. Criteria for each assessment task have been rewritten to reflect these objectives. Traditionally, assessment in the design studio has been weighted significantly toward the production of a single final design proposal (worth approximately 70 per cent of the entire course).
Now, following the curriculum renewal process, the Assessment Project and the new assessment protocols, design studio assessment in the formative years of the four-year degree program is split to reflect evidence of developmental learning through exploration, experimentation and reflection. Emphasis on the final product has been redirected to

a portfolio-type assessment of formative learning development. Reflective statements are also now included as a core assessment component in most studios across the program. Peer feedback in assessment has also been formalised across the degree, with assistance given in developing and receiving feedback among peers. In 2013, following a scholarship of learning and teaching (SOTL) action research grant on feedback in the design studio, academic staff in the Interior Architecture program have been trialling peer video feedback and assessment systems in the graduating design studio course to further develop professional approaches to presentation and reflective critique. Assessment parity sessions have been formalised across the program at the end of each studio course, in which academic staff review assessment standards both for equity and as a benchmark against stated course and program objectives. These sessions enable a continuous review cycle in which tasks and learning structures are tested and assessed for merit. Finally, research as it applies to design has been included as an assessable component of each studio course and mapped across the degree to sequentially advance expectations of performance and create pathways to postgraduate study.

Goal 3: Assessment innovation

The final goal of the Assessment Project in the faculty was to increase SOTL research in the area of built environment assessment. Supported by the office of the Associate Dean (Education) and funded by the university Assessment Project, assessment-focused action research grants promoted innovation in assessment practices in BE disciplines. These seed grants provided funds for staff to examine assessment within a defined project area. Sample topics included:
• assessing student online design blogs
• supporting self-directed learning in undergraduate courses to improve student satisfaction
• student-centred game-based learning for construction management and property teaching

• signposting feedback and critical reflection in the design studio
• cultural, disciplinary and instrumental knowledge in interdisciplinary and cross-cultural design studio learning and teaching.
The grant recipients presented their findings to the research and curriculum forum as a mechanism to share information, and have made changes to their own assessment practices. They have also propelled assessment debates and best practice benchmarking within the faculty. To extend the reach of this research support, a scholarship of learning and teaching e-repository, including over 250 publications and an EndNote bibliography, was established and is available to all BE staff. The repository is designed to enable sharing and further research on issues pertinent to BE assessment practices and other teaching and learning scholarship.

In summary, as a result of the Assessment Project, the faculty is still offering action research grants to its staff and is conducting research into its own assessment practices, looking for innovative assessment solutions for the BE’s design-based programs. This goal and its outputs have increased interest in assessment as an area of SOTL research, added modestly to the BE’s research quantum and should sustain a cycle of innovation in assessment practice.

Challenges of the Assessment Project

The central UNSW Assessment Project commenced at a time when the faculty had just started a repositioning exercise, so it was possible to introduce the project into the BE as an enabler rather than an imposition. A systemic approach was taken; as a result, the curriculum renewal process and the Assessment Project became very intertwined and, at times, complex. Big-picture, strategic thinking and change were difficult to explain and rationalise in holistic ways. Furthermore, these changes constitute long-term investments whose impacts will not be seen in the short term.

The systemic approach to the program review process and the Assessment Project required all staff in the faculty to engage in discussions about teaching and assessment practices from a scholarly perspective. This level of engagement was unprecedented in the faculty and represented an important step towards building a more active learning and teaching culture that involves the ongoing sharing of issues and practice. A limitation of this approach was that some staff members questioned the need to review their own assessment practices, critique their courses and make changes to their programs accordingly. Some staff simply questioned the need for change! The education team worked diligently to allay fears relating to the changes occurring in the faculty and sought to promote understanding of the time and energy needed for curriculum change. Some staff were more receptive to this approach than others. Ongoing budgetary pressures and the cessation of funding from the Assessment Project have limited the speed at which the remaining programs are being reviewed. Small-scale projects with potentially significant impacts may be curtailed in the future due to the funding cutback.

Final reflections

The Assessment Project was an integral driver in the process of curriculum review and renewal in the faculty. It provided funding and a platform that legitimised detailed data-gathering, which acted as a stimulus for productive discussion and subsequent change. Overall, the Assessment Project, in tandem with the program review process, contributed to improvements in curriculum development and assessment practices across the faculty. Some of the specific improvements included clearer alignment between learning outcomes at the program and course levels; appropriate assessment tasks aligned to learning outcomes; the review and implementation of assessment policies and student protocols to establish quality, parity and

consistent assessment practices across the faculty; and research into innovative assessment practices in the BE. Changes such as these not only serve to improve the student experience but also have the potential to alter the educational landscape within the faculty. While the faculty is still undergoing an intense period of program review, the future focus will be to sustain best practice in learning, teaching and assessment within programs. The cumulative impact of the Assessment Project in the Faculty of the Built Environment will only be fully evident when the new and revised programs and courses have been fully implemented and evaluated.

References

Orrell, J. (2011a) Built Environment Undergraduate Assessment Report, UNSW, Sydney.
Orrell, J. (2011b) Built Environment Postgraduate Assessment Report, UNSW, Sydney.


7

The COFA experience: Assessment reform and review

Graham Forsyth

This chapter explores the impact of a central university-wide innovation on a creative arts and design faculty, the College of Fine Arts at UNSW. It highlights the diversity in responses to the centrally initiated Assessment Project across the faculty, the impact of disciplinary perspectives and perceptions of quality in assessment. In doing so, it demonstrates the challenges faced in seeking to achieve consistency in implementation of the Assessment Project, including shifting entrenched assessment practices, addressing specific issues arising from the faculty’s disciplinary mix, and ensuring appropriate leadership and capacity for change. The chapter highlights some of the improvements resulting from the Assessment Project in relation to effectiveness and efficiency and identifies some of the ongoing change management challenges.

Background

The College of Fine Arts (COFA) is a medium-sized faculty relative to most other faculties at UNSW, with four disciplinary schools, two research centres, 3850 students, 85 academics and researchers and 55 professional and technical staff. At the national level, it is one of the largest art and design faculties in Australia, one of the few


that is a stand-alone entity, and the leading art and design faculty in terms of research output. In comparison to peer institutions, COFA is characterised by a diversity of disciplines, diversity in approaches to teaching and assessment and a higher level of research output by academic staff. This diversity is clearly evidenced by the range of undergraduate and postgraduate degrees in the traditional fine arts, as well as in design, digital media, art and design education, art history and theory, museum studies and art administration. While the diversity has brought richness to teaching across the disciplines, it has also meant that COFA has had difficulty in establishing a broad and overarching set of guiding principles to ensure consistency in quality and approaches to teaching and assessment across its programs, around which faculty or central initiatives could be framed. The very high proportion of teaching undertaken by casual staff has exacerbated the impact of this diversity. Approximately 75 per cent of teaching is undertaken by casual staff, ranging from regular tutorials to the convening of entire courses, making it considerably more challenging to ensure that new policies, procedures, training and support engage all teaching staff. In addition to this mix of issues, COFA’s international students, although only around 15 per cent and below the university average, have had more impact than might be expected, with their diversity of backgrounds and lack of preparedness for academic study in the arts related disciplines. It is well known that Australian schools of art and design attract a very broad range of students, both domestic and international, some of whom have less (or different) experience of academic study, including required levels of competence in academic reading and writing. 
For some in our disciplines, the characteristics of an applicant that matter most are visual and aesthetic sensibilities, rather than high-level academic skills and capabilities.

It was in this context that the university’s decision to launch an institution-wide Assessment Project, to improve both the effectiveness and efficiency of assessment, crystallised a number of key issues around learning and teaching at COFA which had remained largely unaddressed. It also provided an institutional frame around the increasingly significant issue of assessment in the creative and studio disciplines (Ellmers, 2006), and created the opportunity for faculty-wide conversations on the role of assessment in learning and teaching in art and design. Within COFA as well, there had been an increasing focus on learning and teaching issues and a genuine desire for change and improvement. Perhaps the strongest manifestation of this focus was the faculty’s central engagement with the Australian Learning and Teaching Council (ALTC) funded Studio Teaching Project, which was jointly led by the Associate Deans (Education) from COFA and the Faculty of the Built Environment at UNSW, with investigators from the University of Queensland, RMIT University and the University of Tasmania. The four-volume report (Zehner et al., 2009) addressed assessment in studio teaching and, in part, drew upon case studies and research undertaken at COFA and UNSW.

The COFA Assessment Project is a story told in two parts. From 2010 to 2011, COFA, along with many of the other faculties, embarked on a process of gaining an overview of what could be called ‘the landscape of assessment’ in the faculty, to clarify the actual goals and tasks needed to address the generic goals of effectiveness and efficiency. From 2011 to 2012 (and ongoing), this process intersected with the most significant curriculum renewal project ever undertaken in the faculty. The challenge and opportunity to rebuild our programs in art, design and media from the ground up, to engage staff in reimagining education in the creative arts, and to develop an entire suite of new courses and new approaches to learning, meant that the Assessment Project had a serious competitor for time and resources but, at the same time, an extraordinary opportunity to be woven into the very fabric of our degrees and teaching.
The next section of this chapter provides more detail on each of the parts to the COFA story of change, development and quality improvement.


Auditing the assessment landscape

Fundamental to implementing the assessment initiative at the faculty level was understanding the ways in which assessment in the faculty was deemed to be ineffective: in particular, the ways in which assessment in the art and design studio could negatively affect learning and morale in the studio, or fail to provide adequate opportunities for students to respond to assessment as feedback. We also needed to understand the ways in which assessment in the faculty could be deemed to be inefficient (e.g. whether there were too many assessment tasks or too few high-risk assessments). This would require a finer-grained analysis of the actual assessment practices and culture in the faculty, and would allow the development of goals for the project that directly engaged with COFA’s assessment context.

As a consequence, COFA undertook a systematic audit of assessment practices in its undergraduate core courses. The concept of undertaking an audit, and the overall design, had been the initiative of Professor Jan Orrell, the external consultant to the UNSW project. Clearly, there were many ways of capturing data around assessment, including reviewing alignment to learning outcomes, or surveying staff and student perceptions. Professor Orrell’s model was restricted to capturing data on the types of assessment tasks undertaken, the timing of those tasks, their weightings and the number per course (volume of tasks). In addition, it asked for data on the ‘authenticity’ of assessment tasks and the use of formative assessment. COFA saw this model of auditing courses as an opportunity to explore and present an overview that was often difficult to discern, as assessment tended to be viewed as a ‘private’ activity (often hidden in individual course outlines or class handouts), rather than a ‘public’ activity with consequences for the institution and for students.
The Associate Dean’s Office employed a casual staff member to audit the course outlines of all undergraduate core courses available through the online handbook and to track key features of each course’s assessment profile. This process of auditing course outlines (some of which were unclear) was the first hurdle to be overcome, and it also clarified some of the first aims of the project. Although COFA had actively developed and distributed a faculty course outline template, it became clear, in attempting to capture the audit data, that some courses had missing, confusing or even contradictory information on assessment. The project needed to address the issue of increasing the clarity and quality of this information. It was found that assessment details were often distributed only in class, making public or even broader faculty access very difficult.

The assessment categories employed were also not necessarily straightforward. In the COFA context, the concept of ‘authentic assessment’ is less distinctive, as, overwhelmingly, students have always been required to develop outcomes that are closely aligned to real-world, professional contexts, while exams and multiple-choice quizzes are almost unknown. In this context, authentic assessment is defined as ‘… requiring students to use the same competencies, or combinations of knowledge, skills and attitudes that they need to apply in the criterion situation in professional life …’ (Gulikers, Bastiaens and Kirschner, 2005, p. 69). COFA also added the category of ‘holistic tasks’, to note whether an assessment task was integrated into a whole-of-course project.

The data was eventually collated into spreadsheets for every undergraduate program, with graphs for the key parameters of:
•	the proportion of each assessment category across the whole program
•	the proportion of each assessment category by year of study
•	the number of assessment tasks by year of study and averaged across the whole program.

These particular parameters were chosen on the basis of their capacity to provide the most useful overview of assessment in a program, in terms of simple measures of effectiveness and efficiency. Effectiveness was examined in terms of whether assessment used an appropriate range of tasks that can be shown to measure the attributes and outcomes that we value in our students, and whether there is a developmental structuring in assessment to reflect learning stages. Efficiency was examined in terms of the number of tasks used to generate a grade. These were necessarily crude indicators and were not used on their own to provide an overview of assessment quality.

Professor Orrell, together with a staff member from the UNSW Learning and Teaching Unit, analysed the data in terms of the following questions, and the findings were used to generate reports on the efficiency and effectiveness of each program’s assessment practices:
1.	What is the range of assessment tasks in the College of Fine Arts courses and how many assessment tasks are there in each course?
2.	What does this range best achieve in terms of learning outcomes and are there gaps in opportunities for developing particular skills and capabilities?
3.	When is the first assignment handed in? Does this give students the opportunity for feedback so that they can improve future performances?
4.	Does the complexity of assessment demands progressively increase across the (year) levels of the program?
5.	Are there sufficient authentic assessment tasks and is there evidence of over assessment?
6.	Are formative assessment activities represented?
7.	Does assessment appear to be holistically integrated?
8.	Does the design give some groups of students an advantage at the expense of others?
9.	What seem to be exemplary practices that might provide a model for other courses?
10.	What areas require further investigation?
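The three spreadsheet parameters described above are simple aggregations over the audit rows. As a minimal illustrative sketch only (the row layout and field names here are invented for illustration, not the audit’s actual format), they can be computed as follows:

```python
from collections import Counter, defaultdict

def summarise_audit(rows):
    """Compute the three overview parameters used in the audit reports
    from rows of the form {"course": ..., "year": ..., "category": ...},
    one row per assessment task."""
    # 1. Proportion of each assessment category across the whole program.
    category_counts = Counter(r["category"] for r in rows)
    total_tasks = sum(category_counts.values())
    overall = {c: n / total_tasks for c, n in category_counts.items()}

    # 2. Proportion of each assessment category by year of study.
    year_counts = defaultdict(Counter)
    for r in rows:
        year_counts[r["year"]][r["category"]] += 1
    by_year = {
        year: {c: n / sum(counts.values()) for c, n in counts.items()}
        for year, counts in year_counts.items()
    }

    # 3. Average number of assessment tasks per course across the program.
    tasks_per_course = Counter(r["course"] for r in rows)
    average_tasks = total_tasks / len(tasks_per_course)

    return overall, by_year, average_tasks
```

Even this crude tabulation makes visible the ‘landscape’ that individual course outlines hide: category proportions overall and by year, and the volume of tasks per course.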


The reports were revealing. They showed that assessment tasks for courses in most COFA programs included a diverse range of formats, which offered the potential for developing a range of graduate attributes appropriate for a faculty such as COFA. Not surprisingly, the reports revealed that assessment focused on the production of studio work, whether art, design or media. What was defined as ‘studio activity’ or ‘final studio project’ made up more than a third (36 per cent) of all assessment in the faculty, while ‘exams’ made up less than 3 per cent. In many ways, the challenge of working within a project framed by a broader university notion of assessment, often heavily weighted towards exams and essays, was encapsulated here. When combined with ‘project proposal’, 50 per cent of all assessment in COFA dealt with the development and execution of creative work, with its inherent complexity, the importance of process and the personal, and the lack of templates or models of ‘correct’ answers. These figures highlighted that the challenge was always going to be to remain authentic to a model of assessment embedded in creative practice, and thus without the familiar templates for what ‘effective’ and ‘efficient’ university assessment could be.

The other assessment types most frequently used were: ‘extended writing’ (14 per cent), ‘short writing’ (14 per cent), ‘class presentation’ (9 per cent) and ‘journals’ (5 per cent). Perhaps somewhat surprisingly, however, ‘group work’ and ‘online discussion’ each made up less than 3 per cent. Figure 7.1 provides insights into the distribution of assessment types across the undergraduate programs at COFA.

In relation to the range of tasks, the reports gave rise to several concerns. They highlighted a lack of clarity about just how the larger studio tasks (often due in week 13) were broken down into smaller tasks, and how they were connected or constructed to build progressively on previous levels or types of assessment.
Figure 7.1  All COFA undergraduate programs – assessment type distribution: final studio project (portfolio) 24%; extended writing 14%; shorter writing 14%; project development/proposal 13%; studio activity 12%; class presentation 9%; journals 5%; class preparation and participation 3%; exam 3%; group work 2%; regular quiz/test 1%; online discussion 0%

Later year courses often relied heavily on assessing a single portfolio of major creative work, in many ways paralleling the ‘art prize’ model, where the final product alone is assessed. Yet this model not only truncates the scope of assessment (de la Harpe et al., 2009), it can also fail to provide effective and actionable feedback to students. The reports highlighted, in general, a lack of clarity about strategies for incorporating formative feedback, or about how students could act on feedback to enhance their learning. It was also unclear just how the tasks contributed to the development of graduate attributes (i.e. the synergy between learning goals, assessment and learning outcomes). The reports identified limitations to the spread of tasks, with a relatively small proportion of writing tasks, in both short and long formats, in the studio-based disciplines, along with a small proportion of group work and online collaboration, as already noted. They also acknowledged that there may be embedded activities providing richness of experience and feedback that were not apparent in course outlines.

Figure 7.2  All COFA undergraduate programs – assessment number by year of study (average number of assessment tasks for Years 1 to 4, and across all years)

Another concern that emerged from the reports flowed from the relatively low utilisation of current educational and social technologies, with very few courses indicating any use of online discussion or other technologies. While there is likely a tacit expectation that students are familiar with, and informally use, such technologies, the reports drew attention to the need for explicit inclusion and assessment of technology-mediated activities and emphasised the importance of developing digital literacy. The reports went on to note that, to enable an increasingly diverse body of students to develop the graduate capabilities required for an effective professional life, staff must also be enabled to effectively use, and model the use of, emerging technologies. The reports noted that the issue of engagement with technologies was becoming more urgent as new hardware and software was distributed to students in secondary schools and new online tools became embedded in social contexts.


On the question of how many pieces of work were required, and whether there was over assessment, the reports noted that there was a wide range in the number of assessments in individual courses across programs. In some courses there were very few tasks and the availability of formative assessment tasks was not apparent. In others, there were large numbers of tasks, which brought into question just how the assessment tasks were integrated, and the efficiency of the process for both teachers and students. The audit showed an appropriate reduction in the number of tasks from years 1 to 3, albeit from a high starting point; but it also showed that, across the two four-year programs, there was a substantial jump in the number of tasks in the final year, when a smaller number of integrated, holistic tasks would be expected, as indicated in Figure 7.2.

The assessment reports were widely disseminated among the Heads of School and Program Directors, to raise awareness of the reports’ analysis and recommendations, start the process of building collaboration across the faculty, change existing practices that did not constitute good practice, and introduce a culture of quality in assessment. Professor Orrell, together with a staff member from the Learning and Teaching Unit and the COFA team, held a number of workshops for all teaching staff to discuss the reports and the recommendations for the way forward.

Audit outcomes: Clarifying goals

The goals for improvement in assessment, developed in the workshops and drawing on the assessment audit reports, addressed the following areas:
•	number of tasks:
	–	reduce assessment activities in courses with too many tasks (e.g. more than three or four) to manage over assessment
	–	design fewer, more comprehensive assessment tasks aligned to the course’s purpose within the degree
	–	incorporate atomistic or procedural steps as assessment criteria or as elements in more holistic tasks
•	feedback:
	–	devise meaningful and explicit formative activities throughout the semester, requiring active student and staff response and feedback – particularly where major studio tasks occur late in the semester
	–	develop timely and relevant learning and teaching strategies to ensure students read and act on the feedback provided to them
•	group work:
	–	include assessable group work at all levels
	–	provide clear guidelines for group work to ensure that students know what is expected, develop (constructive) group learning (and evaluation) skills, and understand how individual contribution and performance are graded within group projects
•	task design:
	–	specify clear assessment criteria per task, to govern participation and marking, or delete the task
	–	focus assessment on higher order reasoning (cognate) tasks with real-world application
	–	review and adjust whole-of-program and course assessment requirements to ensure consistent and incremental development of professional knowledge
	–	consider portfolio assessment for final year capstone courses
	–	be more selective in aligning assessment tasks with specific graduate attributes.

The audit had proved invaluable in highlighting the configurations of assessment in COFA, which had previously been effectively hidden at the course level, preventing a broader understanding or intervention. The initial response from the Associate Dean’s office, which had carriage of the project, was to focus on apparently small but ultimately significant changes that would build a new culture of assessment in the faculty. The first step was to shift the approach to learning, teaching and assessment from a private to a more public activity, especially for those academics who deemed assessment to be an area for their professional judgment alone. Rather than immediately leaping into broader questions around the adequacy of assessment practices in the creative arts, our judgment was that an enhancement of processes would be an important first step in supporting teaching staff to see their assessment practice as part of a public process of quality enhancement. Existing processes were reviewed, or new processes introduced, in the following areas:
1.	A redeveloped and revamped course outline template was introduced, with an increased focus on assessment, including criteria and rubrics. It became a major tool in supporting reform in processes and ensured that teaching staff had a clear opportunity to unpack the information that was often ‘unsaid’ and assumed in practice. This was supported by workshops with staff, visits to school meetings by staff from the Associate Dean’s office, and email communication on the role of the course outline template in structuring information on all aspects of a course, including assessment. For each summative task, the overwhelming majority of course outlines now include information on assessment criteria, a marking rubric, a link to the learning outcomes and graduate attributes assessed, and a feedback strategy.
2.	A limit on the number of summative assessment tasks was also chosen as a first step, to achieve an initial and unambiguous enhancement of efficiency. The faculty agreed that, from Semester 1, 2011, the number of assessment tasks would be limited to three to four per course, with staff encouraged to increase the use of integrated tasks that link a number of course and graduate outcomes, and to make more effective use of formative tasks.


	–	As the assessment report had noted, courses with a larger number of summative tasks often required lengthy feedback to be provided, which was over-burdening the teaching staff and failing to require students to take on a broader range of self-regulatory responsibilities. Staff were supported by workshops and one-on-one consultations to gain skills in integrating smaller tasks into larger projects with appropriate learning outcomes, and in providing timely feedback for formative assessment purposes in a way that did not overburden them. The national Studio Teaching Project found that a holistic approach to assessment, where tasks are connected to a larger whole and a range of dimensions are incorporated (product, process, person), is important for studio-based disciplines in the creative arts.
	–	The follow-up audit undertaken in 2011 showed that the average number of tasks had fallen from 4.6 in 2010 to 3.4 in 2011, a 26 per cent reduction in assessment tasks. Although the faculty is not in a position to quantify the reduction in staff (and student) workloads, anecdotal evidence suggests that task workloads have not increased substantially, and thus the reduction in the number of tasks has decreased total assessment workloads across the faculty.
3.	A trial of ReView, an online criteria-based assessment tool, was begun. ReView was initially developed in the context of design education and had features that could benefit assessment processes in the COFA programs, especially since a number of the recommendations from the audit report aligned very closely with features in ReView. These included increasing the use of ICT in assessment, supporting the reliability and timeliness of staff feedback, engaging students in self assessment, explicitly linking assessment to course and program outcomes, and reducing assessment workload for staff.


	–	ReView offers a fairly simple online interface that can be used on a laptop or iPad. It is used to set up online marking of assessments and provides clear delineation of the ways in which each task addresses course and program goals, providing for the weighting of those goals and the use of assessment rubrics. Students are able to log in and see the assessment criteria for tasks and how they are weighted, enabling them to better understand the criteria governing the task they are undertaking. Students are also able to self-assess, the results of which can be used by staff to provide more targeted feedback on student perceptions and performance. The ReView system automatically generates aggregate marks and grades on the basis of the previously determined weightings, thereby making administrative calculations easier and more efficient.
	–	ReView has shown remarkable promise in terms of its potential to transform assessment across the faculty. In particular, it provides a clear mechanism for linking assessment tasks to criteria and criteria to outcomes, clarifying for both teachers and students the connections between assessment and overall goals, and the weighting and relative significance of each criterion. Using ReView to set up tasks has meant that teaching staff are clear on criteria and goals from the inception of the task. Moreover, marking in ReView is by weighted criteria, thereby limiting the easy recourse to ‘overall impression’ marking; markers are required to clearly distinguish performance across the stated criteria instead. Feedback has thus significantly improved, simply because students now receive clear feedback on their performance for each criterion and against overall program goals. There are exciting future opportunities for students to gain longitudinal data on their performance across a degree on key parameters. In a 2012 survey, 67 per cent of students who used the self-assessment tool said it clarified their expectations of assessment.
	–	More than 80 per cent of staff using ReView in the trial indicated that they would like to use it again and, significantly for the project, 60 per cent said that turnaround time for assessment was faster, with only 10 per cent indicating neither a change in turnaround time nor an increase in efficiency. Where ReView had less of an impact was in its use as a moderation tool (with only 30 per cent indicating it was of value in moderation, and 40 per cent saying it could have been but was not used) and in the actual use by students of the self-assessment tool (with 60 per cent of students not self-assessing).
4.	Support for the development of academic literacy was provided, to help reduce the costly burden of plagiarism and increase marking efficiency in written assignments. Although less often seen in the context of assessment reform, poor student academic literacy skills can have consequences in terms of marking time, student failure and complex plagiarism procedures. Students with high capacity in art and design can sometimes be weaker in areas of academic reading and writing. As a consequence, the Associate Dean’s office instituted the testing of all incoming students on their academic literacy skills using the MASUS protocol (Bonanno and Jones, 2007). The outcome has been the provision of detailed feedback to new students in their first few weeks of study, with recommendations as to where they can improve their skills. Furthermore, the Associate Dean’s office developed two new courses, specifically designed to support the academic literacy of students in the creative arts at both undergraduate and postgraduate levels. These strategies (both the testing and the new courses) are being followed up to assess their longer term impact, in relation to a drop in the incidence of plagiarism and improvements in the perceptions of staff and students as to students’ reading and writing confidence and capacity.
5.	Staff development through workshops and seminars was used to support sessional and continuing staff in deepening their understanding of assessment as a key driver of learning. More than 75 staff attended workshops focusing on how to structurally align assessment in course outlines, how to link assessment criteria and use marking rubrics, and how to provide feedback involving self and peer activities and assessment.
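The weighted-criteria aggregation described above can be sketched in a few lines. This is an illustrative reconstruction only, not ReView’s actual code: the criterion names, weights and grade cut-offs below are assumptions made for the example.

```python
def aggregate_mark(criterion_scores, criterion_weights):
    """Combine per-criterion scores (each out of 100) into a single
    weighted mark. Weights must sum to 1.0."""
    assert abs(sum(criterion_weights.values()) - 1.0) < 1e-9
    return sum(criterion_scores[c] * w for c, w in criterion_weights.items())

def to_grade(mark):
    """Map a mark to a grade band (UNSW-style cut-offs, shown here
    purely for illustration)."""
    bands = [(85, "HD"), (75, "DN"), (65, "CR"), (50, "PS")]
    return next((grade for cutoff, grade in bands if mark >= cutoff), "FL")

# Example: three weighted criteria for a hypothetical studio task.
weights = {"concept": 0.4, "technique": 0.4, "documentation": 0.2}
scores = {"concept": 80, "technique": 70, "documentation": 60}
mark = aggregate_mark(scores, weights)  # 0.4*80 + 0.4*70 + 0.2*60 = 72
```

Marking by weighted criteria in this way is what limits recourse to ‘overall impression’ marking: the aggregate is fully determined by the per-criterion judgments and the published weightings.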

The Assessment Project and program renewal

Discussion thus far has concentrated on the processes and outcomes of the COFA Assessment Project as it developed from 2010. However, it is impossible to tell the story of this project without considering its intersection with another university project, Program Simplification. The one broad recommendation of the assessment report yet to be completed was the development of a whole-of-program assessment plan for each degree, to support the integration of assessment into curriculum development. Whole-of-program assessment plans promised the opportunity to align assessment strategies with the scaffolding of learning outcomes across each stage of every program. This, in turn, would support the consistent and incremental development of professional skills across degrees, the use of capstone portfolios to support integrated professional outcomes, and the embedding of graduate capabilities through alignment with tasks.

The opportunity to move to whole-of-program plans, allowing this critical embedding of assessment, came about through the coincidence of the Assessment Project with the Program Simplification Project, also sponsored by the university. COFA had undertaken a number of program reviews over the period 2009–11, and used the opportunity of the broader project to simplify program structures to undertake the most significant renewal in its history of its degrees in art, design and media. The process of renewing its studio degrees (recasting program goals, reviewing each year of study and rewriting each course) required the faculty to also review its assessment practices, given that assessment is central to learning, with a profound impact on what and how students learn. The consequence was to re-energise the Assessment Project, but also to extend it beyond its initial three years to the longer rollout of program renewal, which will be completed in 2016. This context provided an ideal opportunity to introduce strategies for developing effectiveness in assessment (with a focus on meaningful assessment as learning) and efficiency in assessment (with a focus on quality rather than quantity, to reduce staff and student workloads).

In 2012, the Associate Dean’s office organised two workshops (of two days each) for all staff involved in first year, as well as a lengthy series of meetings and workshops to undertake the writing of the new first year courses in the Bachelor of Fine Arts, Bachelor of Media Arts and Bachelor of Design. The goal was to drive substantial change in these critical courses. Teams of staff in each school, and at the faculty level through the First Year Coordinator’s Group, undertook the course development process, supported by the DLT and ADE. Best practice learning and teaching strategies have been included in the course outlines, including streamlined assessment with three summative tasks, pertinent assessment criteria, marking rubrics, and briefs aligned to the learning outcomes, using a standards-based assessment matrix for evaluating student achievement of graduate capabilities. Staff also addressed the need for better correlation between assessment timing and content across concurrent courses, and an integrated approach to grading using a standards-based assessment matrix.
This tool references a differentiated range of graded and ungraded descriptors of student performance, reflecting the ‘developing’, ‘functional’, ‘proficient’, ‘advanced’ (and ‘higher order’) levels of achievement recommended by the Assessment Project consultant.

In 2013 COFA is developing the major streams that will make up the core of the new programs in Stages 2 and 3. More than 120 new courses have been developed, using a team approach and templates, each of which has addressed the same model of high quality and efficient assessment as was modelled in the first year courses. The process will be completed in 2014–15, with the development of the fourth year courses. One of the most significant issues for this final component will be addressing assessment in an honours context.

Conclusion: Sustainable achievements

The UNSW Assessment Project played an important role in setting the context for COFA’s approach to program renewal and simplification. It also provided critical content, ensuring that individual courses engaged with assessment as central to their design, and that overall programs are now structured around the staged development of key learning outcomes and graduate attributes. This will undoubtedly be the major and lasting outcome of the Assessment Project at COFA.

The more process-oriented reforms have also had a lasting impact in terms of driving further changes in course design and teaching. Improved standards for course design, development and delivery are continually being incorporated into COFA course outline templates and associated guidelines. These tools are circulated to academic staff and reinforced in collegiate faculty meetings and professional development workshops run throughout the year. As a consequence, there has been a process of continuous improvement in the quality of course outlines for individual courses, and a more systematic approach to updating them and making them available on the COFA website.

Beyond the achievements in assessment practices and in curriculum design, there has been a lesson learnt in terms of the ways in which academic staff can be engaged and empowered to work with contemporary learning and teaching approaches. More clearly articulated goals, student-focused learning, holistically integrated assessment and well-formulated learning outcomes have all gradually become features that have improved quality in the COFA programs, moving from theory to practice, from words to actions. By transforming processes and scaffolding course and program development, COFA’s committed academic teaching staff are transforming assessment in the creative arts and modelling the engaged practitioner in a continuous and iterative process of transformation and improvement.

References

Bonanno, H & Jones, J (2007) The MASUS Procedure: Measuring the academic skills of university students, a diagnostic assessment, University of Sydney, Sydney.
de la Harpe, B, Peterson, JF, Frankham, N, Zehner, R, Neale, D, Musgrave, E & McDermott, R (2009) ‘Assessment Focus in Studio: What is most prominent in architecture, art and design?’, The International Journal of Art and Design Education, 28(1): 37–51.
Ellmers, G (2006) Assessment Practice in the Creative Arts: Developing a standardised assessment framework, Teaching and Learning Scholars Report, Faculty of Creative Arts, University of Wollongong, Wollongong.
Gulikers, J, Bastiaens, T & Kirschner, P (2005) ‘Perceptions of Authentic Assessment and the Impact on Student Learning’, paper presented at The First International Conference on Enhancing Teaching and Learning Through Assessment, Hong Kong Polytechnic University, Hong Kong, June.
Studio Teaching Project: see . [Accessed September 2013.]
Zehner, R, Forsyth, G, de la Harpe, B, Peterson, JF, Musgrave, E, Neale, D & Frankham, N (2009) Curriculum Development in Studio Teaching, Australian Learning and Teaching Council (ALTC), Sydney, 1–4.


8

The Faculty of Engineering: Beyond professional accreditation

David Clements

The Faculty of Engineering was one of the three founding UNSW faculties; the other two were the Faculty of Science and the Faculty of Technology, later renamed the Faculty of Applied Science. In 1997 the Faculty of Applied Science was disestablished and each of its schools chose whether to transfer to Science or to Engineering. Chemical Engineering moved across to Engineering but Materials Science and Engineering went to Science, so that one engineering school still sits anomalously within the Faculty of Science. Food Science and Technology decided on Science initially but has since crossed to Engineering. Some of the other schools, like Wool and Textile Technology, disappeared altogether. Engineering initially had three schools: Electrical Engineering, Mining Engineering and Chemical Engineering. Now there are ten: Biomedical Engineering, Chemical Engineering, Civil and Environmental Engineering, Computer Science and Engineering, Electrical Engineering and Telecommunications, Mechanical and Manufacturing Engineering, Mining Engineering, Petroleum Engineering, Photovoltaic and Renewable
Energy Engineering, and Surveying and Geospatial Engineering. Engineering is the second-largest faculty in the university, with 691 staff, including 247 teaching and research staff and 177 research-only staff. Nearly 10,000 students are enrolled in our programs: 6850 undergraduate students, 2020 postgraduate coursework students and 880 PhD students. Only 20 per cent of the students in engineering are female, including 1200 undergraduates. A recent study of Civil Engineering students could not identify any significant difference, on any dimension, between the female and the male engineering students in terms of the way they learnt or preferred to learn. The challenge for engineering is to broaden its image, broaden the styles of teaching that students perceive to be offered, and broaden the kinds of jobs people perceive to be engineering jobs. If we can do this, we are likely to attract more women, and indeed men, with a broader range of skill sets and views of the world than our students currently have. The faculty offers the broadest variety of engineering programs of any Australian university: 22 different versions of engineering, as well as two science degrees (Computer Science and Food Science and Technology).

Faculty governance and management

Engineering is still the most devolved faculty in the university, with light-handed oversight of the individual schools. Students identify very strongly with their school and most of their interests relate to the school rather than to the faculty. There is a strong correlation between staff and student interests at the school level. Staff in the individual schools have tended to reinforce the school identity and to emphasise the differences rather than the similarities between the engineering disciplines. In recent years UNSW engineering has begun to move away from the devolved model. For example, students can enrol in a
flexible first year program rather than commit from the start to a particular discipline. We have needed to decide what should be centralised and what should not. In the engineering faculty we have three Associate Deans:
• the Associate Dean (International), who is concerned mostly with marketing the university to international students
• the Associate Dean (Research), a half-time position responsible for staff research, research grants and higher degree research students
• the Associate Dean (Academic): everything academic that is not taken up by the other Associate Deans falls to this position.
The ADA in engineering is not equivalent to the ADEs in other faculties; the role in engineering is much broader. The current ADA serves as Acting Dean when the Dean is away. The portfolio ranges from admissions processes, through program proposals, to graduation ceremonies. The ADA is the program authority for all cross-faculty programs and courses, as well as for the Engineering and Technical Management specialisation within the Master of Engineering Science. Faculty programs such as combined engineering and commerce are developed by the schools but are owned by the faculty, so the ADA is also responsible for them, as well as for faculty programs like the university preparation programs and the bridging program in Science and Engineering Technology. The ADA is also responsible for the flexible first year program, a faculty-owned course that typically has 150–180 students enrolled. The ADA is the academic adviser for all the flexible first year students, as well as for academic advising and support across all the faculty programs, conducting in total up to 300 advising sessions a year. The ADA also looks after the faculty's admissions scheme, which involves interviewing 400 prospective students each year. The centralisation of governance that has evolved with programs such as
the flexible first year has not been accompanied by a movement of resources from schools to the office of the ADA. Schools continue to have major roles in the organisational support for learning and teaching. Each school has its own learning and teaching committee, which approves courses and programs before they are submitted to the faculty's Education Committee. Each school also has an assessment review group, which is responsible for overseeing assessment. In practice, schools tend to see their accountability as lying with Engineers Australia. The faculty is accredited every five years. Accreditation is by individual program, not by faculty. Because programs are owned by schools, academics tend to see the process as school based. For example, the School of Civil Engineering has three base programs: Civil Engineering, Environmental Engineering and Civil Engineering with Architecture. Academics in the faculty respect the accreditation process but they tend to presume that it constitutes sufficient learning and teaching quality assurance for the whole faculty, and it can be difficult to convince them that any further measures need to be taken. This can be problematic when it comes to initiating change in learning and teaching. Many staff do not see why the quality assurance processes Engineers Australia suggests should be instituted beyond the context of the accreditation process. Engineers can be conservative when it comes to learning and teaching. They tend to assume that the way they were themselves taught is the only way to teach. Their teaching is very methodical, focused, narrow, predictable, even inflexible. There are positives and negatives to this.

The predominant teaching model

In the Faculty of Engineering, the standard teaching components are three lectures, one hour-long tutorial and one-and-a-half hours in the lab per week for each of the four courses that comprise the
full semester load. The teaching model is stand-and-deliver in a lecture theatre; in the tutorials, it is teaching by example or demonstration: 'This is a standard problem, this is the standard way of thinking, this is how you solve the problem'. Consistency of outcome in engineering projects is very important. The first rule of building a bridge is that it does not fall down, so if this is achieved then 80 per cent of intended learning outcomes are satisfied. This kind of thinking can make a fairly prescriptive way of doing things seem very attractive. The degree of prescriptiveness varies throughout the faculty. In Civil Engineering the teaching is very prescriptive, but some of the newer engineering disciplines like Software or Photovoltaics are more open to ideas because, to take Photovoltaics as the example, a different type of person is attracted to sustainability than is drawn to building bridges. To be interested in sustainability, students have to be interested in people, in policy, in politics; they have technical interests too, and usually an altruistic side to their personality. On the other hand, the perception is that someone can be a great structural engineer if their maths and technical knowledge are very strong. The environment they work in will be much more limited but they will be very effective within that environment.

Feedback from engineering teachers to students

Historically, engineering teachers used to provide detailed feedback. When I was at university in the late sixties and early seventies, only 100 people were doing engineering. In the very first week we were given a written assignment to complete – something like 1000 words on the crystalline structure of materials, using ten references from the library. When we handed it in, there must have been four or five people marking all weekend to return it to us on the Monday. The returned assignment was absolutely covered in red ink; it must have taken at least an hour and a half to mark. We received intensive, useful feedback and what seemed to be a universal message: 'You need to resubmit by next Friday'. Our
resubmitted assignments came back the following Monday, still with copious red ink, but this time with a mark. From that exercise we worked out what was required, what counted and what did not count. These days diminishing resources make that kind of approach impossible. Resources are a big issue at multiple levels. Engineering has not re-thought teaching, or incorporated new ways of teaching, sufficiently to obtain the effects that were once so easily obtained. Resources have diminished at a certain rate, and people's views of teaching and their ability to cope with change in their teaching have not expanded to fill the gap at quite the same rate. They have expanded to some degree, but not as much as they need to. It is always hard to persuade people to change, and academics are notoriously resistant to such pressure, but doing so is rapidly emerging as a priority. Smaller-scale resourcing issues affect us too. Lecture theatres, for one: new timetabling software that was used successfully in other institutions was almost defeated by the complexity of the venues on this campus. Some of our own decisions have led to resourcing problems, like our flexibility: if we offered less flexibility, we would have larger classes but fewer courses to manage.

Assessment practices

Most assessment practices in engineering began as responses to accreditation visits, rather than from university initiatives. One of the first concerted actions in the faculty that had anything to do with assessment came about a decade ago, when UNSW started to audit the quality of course outlines. Engineering has never had a culture of reviewing assessment practice in isolation. Assessment has generally only been considered in the assessment section of a course proposal, and even there it has rarely been dealt with in depth. The implied logic for the faculty has been that assessment has never been a fixed component of a course design, because every time we taught a course we could
vary the balance between this and that, or the style of assignment. Traditionally there has never been any expectation of consistency across the faculty. Engineers Australia introduced the notions of curriculum mapping and assessment mapping to their accreditation process. Curriculum mapping (they called it program design) appeared first, about ten years ago. By the time of the last accreditation round in 2011, Engineers Australia had brought the notion of assessment mapping to the fore. I should mention that prior to the UNSW Assessment Project, we gathered together the engineering faculties from 12 universities around Australia and held a one-day workshop on curriculum mapping and its implementation. So curriculum mapping was already on our radar, and responding to Engineers Australia's new focus seemed like a natural direction for us to go in.

Program assessment mapping

Synergies and drivers

Because the UNSW Assessment Project was occurring at around the time that Engineers Australia were starting to show an interest in assessment mapping, the first thing we focused on, in response to the VC’s announcement, was creating assessment maps for all our engineering programs. We saw that the assessment exercise would help us with the accreditation process, so it was the obvious thing for us to look at first. We thought we had a lot of work to do. But when the university advisor, Professor Jan Orrell, came and looked at Mechanical Engineering in detail, she was pleasantly surprised at the level and spread of assessment in the School of Engineering. There were some gaps, of course, but I think she was expecting to get a shock and she did not. That was good news for us. At that time, one of my jobs as ADA was to manage the
accreditation process for the faculty, which took a year. I had some support but I had to drive the Assessment Project as well. I had one person helping me, Meredith Lowe, but only with part of her time, allocated from her full-time role. A lot of the work was done in the schools but Meredith and I managed and scheduled the whole project.

Goals and process

The stated goals of the program assessment mapping project were to assess the range of robust and reliable assessment methods used within courses and across programs, and to identify improvements that might be made to assessment activities, both within and across programs. A related goal was to identify courses being over-assessed. How did the faculty decide what to do? Well, I got to know the faculty really well and worked out what was feasible, what I could impose. I know the people in the faculty who are innovators and technologically savvy. A few things were happening at the time: AQF and TEQSA were coming on the scene, as was the notion of external scrutiny of programs. Our Dean had come from the English system of external examination; he was well embedded in it, understood it well and saw value in it. The Go8 Quality Verification System was also happening. Our own history of involvement was important, as was our existing working party on Fourth-Year Thesis (see later). The Assessment Project was driven from the faculty level in such a way that we could meet Engineers Australia's expectation that we would engage with their assessment focus. I integrated this focus into the document that every school had to prepare for the accreditation of their programs. Engineers Australia accreditation requires you to detail the environment within the school in which the programs are taught and also the quality assurance system under which they are taught. The report must cover the program itself, the school's ability to teach it and the processes around it. This
was done for all ten schools. On top of that, the faculty is required to write a document that situates the schools within the context of the faculty, describes what the faculty adds, and situates the faculty within the context of the university. So we ended up with 11 documents, each quite substantial. Each school had to create a curriculum map and an assessment map for each of their programs. Like most other UNSW faculties, we started by building assessment maps for most of the undergraduate bachelor programs. This exercise varied in its rigour from program to program, but mostly it showed that engineering programs had a reasonable range of assessment tasks. There was some over-assessment, but there were also gaps in the assessment maps of some programs. After some consideration, we decided to direct our efforts at assessment activities that might be improved at little or no cost by leveraging existing resources such as Moodle LMS tools and added plug-ins. Some lecturers were using these resources already, but we saw that substantial gains could be made from their wider adoption. The first year of the Assessment Project was tightly integrated in the faculty with the way I was managing the accreditation process. Because of this, there was no initial resistance to the project. We did not at that stage see the assessment mapping exercises as a means of collecting data on the schools' focuses in relation to the Assessment Project goals. For the Engineers Australia documents, the academics were asked to write a commentary on their assessment map. Some of them were reasonable commentaries; some were quite minimalist and did not show much insight. At the time the university was discussing an IT system for curriculum mapping, and I was thinking that the faculty might use the program maps, curriculum maps and spreadsheets generated by the project to build on the UNSW system. That system did not eventuate.
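The two questions an assessment map answers – which program outcomes are never assessed (gaps) and which courses carry an unusually heavy assessment load (over-assessment) – can be sketched in a few lines of code. The sketch below is purely illustrative: the course codes, learning outcome labels and the over-assessment threshold are invented, not drawn from the faculty's actual maps.

```python
# Hypothetical assessment map: each course lists its assessment tasks
# and the learning outcomes each task addresses. All names are invented.
assessment_map = {
    "ENGG1000": [("design report", {"LO1", "LO2"}), ("peer review", {"LO3"})],
    "ENGG2500": [("lab quiz", {"LO1"}), ("exam", {"LO1", "LO4"}),
                 ("assignment", {"LO1"}), ("presentation", {"LO3"}),
                 ("portfolio", {"LO2"})],
    "ENGG3100": [("thesis proposal", {"LO4", "LO5"})],
}

program_outcomes = {"LO1", "LO2", "LO3", "LO4", "LO5", "LO6"}

def find_gaps(amap, outcomes):
    """Outcomes that no assessment task in the program addresses."""
    covered = set()
    for tasks in amap.values():
        for _, los in tasks:
            covered |= los
    return outcomes - covered

def flag_over_assessment(amap, max_tasks=4):
    """Courses whose number of assessment tasks exceeds a chosen threshold."""
    return [course for course, tasks in amap.items() if len(tasks) > max_tasks]

print(sorted(find_gaps(assessment_map, program_outcomes)))  # ['LO6']
print(flag_over_assessment(assessment_map))                 # ['ENGG2500']
```

Even a spreadsheet holding the maps in this shape would have supported the systematic review the faculty found difficult to do on paper.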


Response and outcomes

Assessment was not mapped uniformly well across the faculty. Without a data collection and management tool to manage the process (we thought about writing our own tool but in the end decided not to), what was done was a bit of a wasted effort, involving lots of work and lots of paper. I was reluctant to take the process any further without more support. The lack of such tools made it very difficult for staff to conduct any kind of systematic review and analysis of our position. Even a fairly basic tool would be better than nothing; then at least we would have the data stored in electronic form and could upload it into another tool to use in different ways. As to the outcomes of the assessment mapping, we felt that the schools we examined thoroughly were better than we had thought they would be. The reports ultimately did not inform our decisions about how to respond to Assessment Project expectations. By the time we obtained them we had become uncertain about what we were going to do about assessment. We could not see a way to make progress that we would not have to redo later; it looked like a lot of work, and it would not be easy to bring the results together. In order to make real progress, we needed staff to engage in assessment projects that they thought were worth addressing. We identified two at faculty level: the fourth year thesis and a Moodle assessment tool. In addition, eight of the ten schools identified an assessment project that they would like to undertake. Each of these is discussed below.

The Thesis Assessment Project

All our students in engineering have to complete a fourth year individual thesis, and often at least one other Year 4 course is related to their research project. Fourth-Year Thesis is a major task and there has always been an issue with managing it, with consistency of marking and with consistency of outcomes. We
participated in a major quality assurance project, the Quality Verification System (QVS), with the other seven Australian universities in the Group of Eight (Go8). All the engineering faculties participated. Fourth year theses, which had been marked internally, were sent out for external marking. Academic staff from other universities marked theses of UNSW students and UNSW staff marked theses from other universities. There was a high correlation between the marks awarded by UNSW staff and those awarded by the external markers. The QVS project stimulated further interest in the thesis. For example, it would be natural to allow students to undertake projects across different schools, or to complete group projects with different individual components within a larger project. However, because of different requirements in different schools, some of these things are difficult to implement. Every school thought it had special circumstances – they could not see, for example, how a Chemical Engineering thesis could have anything to do with a Software Engineering thesis. Each of the ten schools in the faculty independently managed their thesis and project courses, supported by varying degrees of software and administrative support. There were many processes in place. Staff posted lists of projects, they bid for students and students signed up for projects. There were also processes for conducting presentations, for supervisors and assessors marking and submitting exam results, for signing off on occupational health and safety, for doing lab work, and for any number of student milestones that have to be signed off during the course of the thesis year.

Resources

In terms of the resources we had at our disposal, I managed to persuade the faculty to fund both an Educational Developer and an Educational Technologist. Unfortunately budgetary constraints were imposed. We managed to retain John Paul Posada as the technologist but lost the Educational Developer position. That meant
we focused our project efforts more on the technology and what it enabled. At the same time we had received a grant from the Department of Energy and Climate Change to develop energy efficiency courses, and as part of that we decided to develop them in blended mode. We hired Russell Waldron as an Educational Developer and two other people to assist. That was a project running separately, but we were able to use some of their time on the thesis project.

Goals and process

A faculty working party on the Fourth-Year Thesis was set up, chaired by Dr Graeme Bushell from Chemical Engineering. Its goal was to create a Year 4 thesis assessment process across the faculty to improve assessment reliability and convergence throughout a population of assessors, both subject matter experts and external assessors. We expected this process to underpin the creation of larger, more complex, cross-disciplinary honours activities. We also planned to be prepared for some form of external examination process, either through TEQSA or some joint Go8 scheme. The working party gathered data on course outlines and course rules for all thesis courses in the faculty, identified the intended learning outcomes, identified what consistency there was, and then drafted a proposal for a unified set of rules and outcomes that would satisfy all schools.

Outcomes: The report

The working party produced a report on the feasibility of unifying the Year 4 thesis across the faculty in terms of an 80–90 per cent consistent set of intended learning outcomes, consistent and reliable marking schemes, and assurance of standards. Part of our motive was that these were good things to do. Partly we were trying to protect ourselves from pervasive external forces such as the TEQSA regime. Chemical Engineering identified several issues with the existing Fourth-Year Thesis system. These included:
• Lack of transparency in allocating supervisors. It was neither easy to see who was supervising which students, nor even to establish whether there were students who had not been allocated a supervisor.
• Student allocation could take place only once a year, in S1. Allocation was carried out entirely by email, with students registering their enrolment and giving several preferences in an extremely time-consuming and tedious process. Students who missed this allocation had to knock on academics' doors to obtain an S2 or summer start to their project. With an increasing intake of mid-year students, who needed to start their thesis in S2, this fall-back position was unacceptable.
• It had become increasingly difficult to manage coursework students undertaking minor or major projects, in terms of both workloads and allocations. For some academics, most of their students missed the S1 allocation and it became unmanageable to match them all with their preferences.
• The Fourth-Year Thesis marking scheme also required a review, which had been discussed at both Teaching and Learning Committee and academic staff meetings. The need for a solution to the administration of marking had been identified.

Outcomes: The Moodle Fourth-Year Thesis course

Another positive that emerged from the working party was the Fourth-Year Thesis online course in Moodle. Dr Graeme Bushell, in leading the working party, saw many opportunities for his own school. Being Moodle-savvy, he wondered whether a ‘course’, run using the Moodle LMS, could support all of the functionality needed to manage the Year 4 thesis and even other project courses. As a personal project he decided to try implementing the recommendations and he used Moodle to create a quick prototype to manage Fourth-Year Thesis. I saw that it could become a key component of our Assessment Project across the faculty and encouraged him to continue with it and promote it as such. I talked to
the Dean and the Heads of Schools and we all agreed that it would be a good thing to try. Dr Bushell's prototype, implemented in Moodle, uses a number of Moodle activities, including the workshop and the quiz. It has dramatically changed the way the school deals with the administrative side of Fourth-Year Thesis enrolment, as well as the marking. The course enables the following:
• Academics each upload their projects and research areas into the system, so that students can make an informed choice of preferences. Students choose a supervisor, rather than a project. This leaves the supervisor some discretion in allocating projects according to students' capabilities.
• The Head of School and the Director of Teaching can enter upper limits on student numbers for each academic, based on the school's workload model.
• Fourth-year and postgraduate students can self-enrol in the Moodle course when it is made available. There, using a Moodle quiz, they directly nominate a supervisor, up to the limit determined by the school, and indicate their first, second and third preferences. An academic's name is greyed out if they have reached their intake limit.
• Information entered by the student in the Moodle quiz is easily downloaded for allocation. Once allocations are made and confirmed, students are given the opportunity to change their choices in a second quiz.
• Students upload an electronic copy of their thesis. Before Moodle will allow them to do this, the student must have handed in to the school office two copies of their thesis, risk assessment information, OH&S training documentation and a clean-up certificate. Administrative staff tick off the lodgement of these items in Moodle. This is vastly easier and more efficient than the previous paper-based system.
• The policies of the school have been incorporated into a marking scheme within the course.
• End-of-semester processing has also become simpler, with a student checklist provided for completion in Moodle.
Chemical Engineering tried the course first, in 2011. If teachers wanted fourth-year students and wanted to be involved in Fourth-Year Thesis, they could only recruit students and manage the thesis through the Moodle course – and, of course, students had to operate through the Moodle course too. This meant that every Chemical Engineering student and staff member involved in Fourth-Year Thesis had to become familiar with Moodle to be involved – and they all wanted to be involved. The result was that take-up of Moodle across the whole school was significant and prompt. Chemical Engineering continues to use the course for managing Fourth-Year Thesis and has reported major efficiency improvements. Their administrative staff also strongly support it. Mechanical Engineering became aware of the Moodle thesis course two years ago at our fortnightly meeting of all Heads of Schools, the three Associate Deans, the Dean and the Faculty Manager. The Head of the School of Chemical Engineering proposed putting this project up for a faculty teaching award because it had been such a great success, and the other Heads of School assented. Then I mentioned that we could make this part of the Assessment Project, to which they also agreed. Because one school had had a good experience with it, the other schools took notice, so when we approached them to try it themselves they were quite receptive. Mechanical Engineering was the next to volunteer to use it and, after implementing it slightly differently from Chemical Engineering, they have reported that they are happy with it. Previously they had had a bespoke, hand-coded system. It was a very good system, but when they started to look at changing their requirements, hiring someone to re-program it had begun to look very expensive.
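The allocation step the Moodle quiz supports can be sketched in code. This is a hypothetical illustration, not the actual Moodle implementation: the supervisor names, intake limits and the simple first-come greedy rule are invented for the example. It does, however, capture the behaviour described above, where each supervisor's intake is capped by the school's workload model and a full supervisor is no longer available to later students.

```python
# Hypothetical sketch of supervisor allocation with intake limits.
# Names and limits are invented; the real limits come from the school's
# workload model, entered by the Head of School and Director of Teaching.
from collections import defaultdict

intake_limits = {"Dr A": 2, "Dr B": 1, "Dr C": 3}

def allocate(preferences, limits):
    """Greedy first-come allocation of students to supervisors by preference."""
    load = defaultdict(int)
    allocation = {}
    for student, prefs in preferences.items():
        for supervisor in prefs:
            if load[supervisor] < limits[supervisor]:   # else: "greyed out"
                allocation[student] = supervisor
                load[supervisor] += 1
                break
        else:
            allocation[student] = None  # missed out; needs manual follow-up
    return allocation

prefs = {
    "s1": ["Dr A", "Dr B"],
    "s2": ["Dr A", "Dr C"],
    "s3": ["Dr A", "Dr B", "Dr C"],
    "s4": ["Dr B"],
}
print(allocate(prefs, intake_limits))
```

Run on these invented preferences, s1 and s2 fill Dr A, s3 falls through to Dr B once Dr A is full, and s4 is left unallocated for manual follow-up, mirroring the greying-out behaviour of the quiz.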
To see that the thesis assessment course is integrated into the different schools, I generally talk to at least one person within the
school and get them onside. Then I can approach the Head of School and point to someone in the school who is happy to try the new system. When the course's success and its value to that first academic are demonstrated, other academics start to find it attractive. I do my best to get people to think it was their idea, which takes time. Once I feel I have got them to a certain level, I say, 'Well, that sounds like a good idea. Let's do that.' That sense that they partly initiated it locks them in a bit more firmly. The schools of Mechanical and Manufacturing Engineering, Photovoltaics and Renewable Energy Engineering, and Electrical Engineering and Telecommunications are now in the process of adapting the course to their needs. I have given them flexibility in the way they implement it, because each school is a bit different and has a different history. Mining and Petroleum Engineering have also flagged their intention to adopt the system. Photovoltaics and Mining are both smaller schools with much less IT support, and they are working their way into it slowly. By the end of 2013 we will have every school using the course except Civil Engineering and Computer Science and Engineering (CSE). CSE are a special case; they have a hand-built system to which they are very attached. It works well and they already do a really good job of managing the thesis, so they have little incentive to change. If Civil Engineering were to adopt the system, more or less the whole faculty would have taken it on. Civil Engineering is the biggest school and they are dealing with other priorities at present. At the end of 2013 we will get Chemical, Electrical and Mechanical Engineering together, assess what each school has done and see if we can develop a common framework that all three are happy with. That is the framework with which I will approach Civil Engineering: one that has been refined and tested.

Issues

A controversial issue arising from the thesis project was the design of the rubric. In Chemical Engineering we had many meetings
about it, with a group of academic staff in charge of it. There was a lot of resistance to the introduction of a rubric; staff tended to claim that they knew a good thesis when they saw one. We settled on a compromise: a rubric rolled out into a linear scale, with a description at each level of what each letter grade meant. The academics felt happier with that. Mechanical Engineering are using a properly designed five-level rubric, which is the model we will try to persuade Chemical Engineering to implement.
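The compromise described here, a rubric rolled out into a single linear scale with one descriptor per letter grade, can be sketched as a simple lookup. The grade boundaries, letter codes and wording below are invented for illustration; they are not the faculty's actual descriptors.

```python
# Illustrative sketch of a rubric "rolled out" into a linear scale:
# one descriptor per letter grade rather than a multi-criterion grid.
# Cut-offs, letters and wording are assumptions, not the real rubric.
THESIS_SCALE = [
    (85, "HD", "Near-publishable analysis; argument and presentation close to flawless."),
    (75, "DN", "Strong, well-structured thesis with only minor gaps in analysis."),
    (65, "CR", "Sound work that meets all core requirements but shows limited depth."),
    (50, "PS", "Adequate thesis with significant weaknesses in method or writing."),
    (0,  "FL", "Does not meet the minimum standard for an engineering thesis."),
]

def grade_thesis(mark: float) -> tuple:
    """Map a percentage mark to its (letter grade, level descriptor)."""
    for cutoff, letter, descriptor in THESIS_SCALE:
        if mark >= cutoff:
            return letter, descriptor
    raise ValueError("mark must be between 0 and 100")
```

A marker working to this scale simply locates the band the mark falls into, for example `grade_thesis(78)` yields the `DN` band; the descriptor then doubles as the feedback statement for that level.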

Workshop tool assessment package

The strongest component of the thesis project was the development of the Moodle workshop activity, extending its functionality to support assessment. The development came about as follows. Engineering Design (ENG1000) has been quite an innovative course over the years, and in that course students have used two tools, WebPA (an open-source package used for group assessment) and CPR (Calibrated Peer Review). They have also learnt to use Blackboard, and we have always thought that it would be good if these systems could be put together for our Year 1 students, so that they had only one new system to deal with instead of three.

Project initiation

John Paul Posada’s group and our casual programmer Morgan Harris looked at the situation. They decided that if we could get some CPR and some of the group functionality into the Moodle workshop activity, the value added to the tool would far outweigh the work required. In addition we would have a consistent tool that could do more than any of the three tools could accomplish by itself. We took the idea to Associate Professor Julian Cox, Associate Dean (Education) in the Faculty of Science. He was a former academic in the School of Chemical Engineering and has a strong Moodle background and a similar interest in both increasing the


take-up of Moodle and developing such a tool. He was regularly teaching a science course where it was required. We combined resources, with an agreement that Engineering would develop the tool and Science would test it and provide feedback, followed by an iterative process of reprogramming and retesting. Professor Cox had ideas about how to use different peer review methods, combinations and calibrations. But the main thing he had was a critical mass of students, medium to large classes of 100 to 200 students. Before this, none of the classes described in papers about the workshop tool and Moodle group work had more than about 15 or 20 students. We needed to test whether the tool would be efficient for large classes, because if it were not, it would be of low utility for us.

Package development

The largest project was to incorporate extra features into the workshop activity, namely:
• group functionality
• reviewer calibration
• a reviewer assignment comma separated value (CSV) upload option.
We have integrated these features in the Moodle workshop activity and written a plug-in for Moodle giving access to WebPA. This set of additions integrates and substantially increases the range of online support within Moodle for self- and peer assessment and for group-related activities.

We have also been working on a rubric-based marking application to simplify the marking of poster sessions and design presentations by a large group of markers. The output is presented in a form, and we are working on automating its upload to Moodle. Our educational technologist John Paul Posada organised iRubric for us. All the products on the market worked to an extent but none of them were quite what we wanted. iRubric worked well for us because it is rubric based, which helped reinforce that idea –


now people in the faculty are quite happy to mark to a rubric when they are doing presentations and such; we have had quite good take-up of that idea.

Outcomes

Professor Cox has been using the assessment package with his large Science cohorts with great success. The package is also serving as the basis for conference, workshop and scholarship of teaching and learning publications. It turned out to have many synergies with the work of people using Moodle in other faculties and disciplines globally. It is interesting from the points of view of collaboration and impact on student learning, but it also demonstrates the academic staff’s capacity for scholarship in the area. At the recent learning and teaching forum, we ran some sessions on the assessment package. We also presented it at a recent Science and Engineering Learning and Teaching Workshop, and we are presenting a workshop at Moodlemoot. Moodle headquarters have accepted the package for integration into the next version of Moodle, so that it will be available to everybody worldwide. It is giving UNSW quite strong Moodle visibility across the sector.

Individual school projects

I felt that my implementation of the Assessment Project had perhaps been a little high-level, and that not enough people were individually involved in the project. So I invited every school to propose one project. I had almost no funding, but I asked every school to do something in relation to assessment, whatever they thought was interesting. They could fit in with what we were doing at faculty level, or not, as they wished. Eight of the schools put forward proposals and seven have so far begun implementing their projects. Much of this is work in progress and it is too early to know what outcomes will be achieved. The projects are:


• Graduate School of Biomedical Engineering (Socrates Dokos): Online and sequential quizzes to monitor student learning. A series of short quizzes containing some multiple choice questions that test and retest the same overall concept, as well as some open-ended questions. The quizzes will be completed online and an accompanying rubric developed. The questions will be slightly modified and grow with the class. This will allow the students to gauge where they are in the learning process and self-assess their learning throughout the course.
• Civil and Environmental Engineering, CVEN2101 – Engineering Construction (Leonhard Bernholdt): Using sketching as a learning tool. CVEN2101 focuses on key principles related to the safe and effective use of construction equipment such as trucks, excavators, cranes and temporary structures. With this project, students learn through a sketching and mapping activity. A sketching model that covers a variety of basic construction-related topics such as cost, power, safety and technologies has been piloted and reviewed in the course.
• Surveying and Geoinformation Engineering (Craig Roberts): Formative assessment through an online quiz. Online questions offer students answer options to choose from, and feedback is provided on their choice of answer. This feedback aims to remediate the typical errors students make before the student proceeds to the next step of the problem; it catches problems as the student makes the mistake. This addresses an issue with the old paper-based method, where errors would persist through to the final (incorrect) answer, leading to marking difficulties and feedback that was of little use after the fact.
• Photovoltaics and Renewable Energy Engineering, Life Cycle Analysis course (Emily Mitchell): Peer marking of team presentation of a written report.


• Mechanical and Manufacturing Engineering, MECH3110 (Kana Kanapathipillai): Team-based assignments. This is an intensive engineering design course in which students articulate a fully worked design across a series of assignments (T1–T7 inclusive) within the semester. With the exception of T2 and T7, all assignments are presently completed individually. T2 and T7 are team-based assignments, emulating how design projects are often undertaken in the vocational setting.
• Mining Engineering, MINE4602 (Rudra Mitra): Provision of feedback milestones. The lack of timely feedback milestones can result in students developing considerable resentment towards other team members as they complete the project at hand, without any formal opportunity to voice their concerns and remediate the problem, as they would have in an authentic working environment.
• Chemical Engineering, CEIC4001 (Dr Greg Leslie): Individual student assessment weighting within team-based assessment. This course is the capstone design course in the School of Chemical Engineering. It provides relevant vocational training and is a key component of the program’s accreditation with the Institution of Chemical Engineers and Engineers Australia. Yet a recent course review identified a lack of effective individual student assessment weighting within the team-based project assessment activities. The review also showed a need to create course activities that are fully in line with the workplace situation; all course activities need to be vocationally authentic, mirroring the role of graduates and their working environments as closely as possible.
• Computer Science and Engineering, Game Design (Malcolm Ryan): Authentic game design projects and formative assessment feedback. Although this project is aimed at achieving more than assessment improvement, it will address assessment improvement by providing a range of authentic projects for game design students and PhD


candidates to work on, and providing games to UNSW clients that, when played by students, will naturally provide formative assessment feedback to the student player, either in the form of the game scoring or through a debriefing or reflective process focusing on what was learnt in playing the game. To date, the project has generated game design briefs from UNSW School of Optometry & Vision Science and UNSW School of Humanities (Philosophy). Presentation of reports about these individual school projects is pending.

The Assessment Project: Conclusions

Things that worked best

The Faculty of Engineering is generally happy with the way things have developed, with what the Assessment Project has achieved and with the outcomes as measured against expectations. From the point of view of having led the Assessment Project process within the faculty, what worked best was the fact that when funding was provided we were given a good deal of freedom as to how we spent it. That was a big factor in our successes. Being given $100,000 for three years allowed me to employ people and initiate projects, and the lack of constraints on that money meant that as my view of what was going to work changed, for pedagogical or practical reasons, or to obtain buy-in, I could change the focus of the activities along with that view, and that was key. It is risky for an institution to fund projects with any ambiguity in the outcomes, but every faculty is in a different state of development; we all have different priorities, and built-in flexibility reflects this awareness.

In terms of challenges, no project of this type and scale will work if you cannot get other people to help you. You spend a lot of time working out how to reach people on a motivational level, so


that what you want them to do becomes what they want to do. If you come to them with a directive: ‘The University wants you to do this’, half the staff will resist. I spend a lot of time talking to people. I visit the schools once or twice a week and walk around; I know everyone. I work at getting to know people’s interests and motivations. I would know 90 per cent of the academic staff well enough to greet and have a chat with; it is only a few of the newer staff members whom I do not know very well. I manage this by walking around. It looks inefficient but it is effective; it works, if slowly.

The next stage: Towards institutionalisation

In terms of assessment, we need to work on rubrics and standards-based marking, which will be hard until more staff are familiar with that way of designing courses. Even if I went through the faculty and set up those things for every single course, in three years’ time every one of those courses would be taught by a different person from the one teaching it now. The new person either would not understand the point of the course description and the assessment and so on, or they would dislike it, or they would take exception to it because it was not their idea to begin with. I suspect that one would look up in three years’ time and everything would have disappeared. The only way assessment practices will stay embedded is if each staff member is convinced that those practices are a good way to teach, and automatically continues to assess that way because the benefits are obvious to them.


9

The Law School and the Assessment Project

Alex Steel

UNSW Law (a single-school faculty, so the terms Law School and Faculty of Law are interchangeable) was founded in the early 1970s with a deliberate rejection of prevailing teaching norms. Rather than lectures and tutorials, all courses are taught purely in seminar format, most commonly two 120-minute seminars each week with 44 students in each class. Most staff teach two such classes a semester. For a considerable time after the Law School’s founding, individual teachers were permitted to structure the content and assessment of their classes in each course as they saw fit – the idea being to encourage both innovation and academic freedom. While content and assessment in courses are now standardised, there is a legacy of a light-touch approach to this standardisation. As the Law School has grown in size it has worked to maintain this ‘small group’ approach to teaching. One result has been a large number of staff teaching in each offering of a compulsory course, not always with a clear understanding of the place of their course in the program as a whole. Also, with the growth of the faculty, a generational turnover of staff and the increased emphasis on research outputs in recent years, the broader discussion of teaching practice and assessment had declined. In addition, the faculty had not had the opportunity to reflect broadly on its own


practices and to consider how it is situated in the broader educational environment. Both the Bachelor of Laws (LLB) and the Juris Doctor (JD) degrees are accredited as qualifying graduates for legal practice. In order to meet the content requirements of accreditation, two-thirds of the degrees are compulsory courses. Consequently, the primary focus of the Assessment Project was on these courses. Legal practice is overwhelmingly a life of words. Laws, legal judgments, contracts and letters all involve complex concepts reduced to written English. Much of the verbal communication lawyers have with clients, each other and the courts also involves a process of translating life and environments to words. It is therefore not surprising that assessment in law has overwhelmingly been based around writing. Extended academic writing has also long been seen as an efficient way to assess a student’s ability to grapple with complex ideas, apply judgment and place implications in context – while at the same time being able to reduce this to written form. Assessment in law schools has thus been traditionally based around final examinations and research essays. UNSW Law follows this tradition in the main, but with an increasing range of different assessment methods. However, assessment has generally not been seen by staff as something that specifically builds towards a set of agreed outcomes in a formative way other than through the gaining of expertise via repetition. There is an emphasis on assessment for summative purposes. Importantly, the Assessment Project was also undertaken concurrently with the faculty undergoing a fundamental rethinking of its law degree curriculums (Bachelor of Laws and Juris Doctor degrees). This meant that it was possible to consider assessment and content in a more fluid environment and both processes were able to influence each other. 
Realistically, it was not possible to engage staff with both a fundamental rethinking of content and course ordering in the degree and also a complete re-examination


of assessment practices. Consequently, the Assessment Project aimed to effect a number of pivotal innovations in assessment, and also to build an underlying basis for ongoing development.

Aims of the Assessment Project in the Law School

The overriding aim of the Assessment Project was to begin a cultural change in the way the Law School saw assessment: both raising the level of intellectual engagement with the purposes and effects of assessment in learning, and making the Law School’s practice of assessment more responsive to both student and professional needs. To do this the project aimed to provide an authoritative evidence base on the nature of assessment in the Law School, both in terms of the forms of assessment set and in terms of student understandings of and attitudes to assessment. It also sought to create a programmatic understanding of the role of assessment in the degree and its contribution to program learning outcomes. This would make staff and students more aware of how each item of assessment built on earlier assessment and how items are inter-related. Instilling a programmatic sense of assessment among staff aimed to increase the sense of particular skills and values being developed throughout the degree rather than within individual courses. It aimed to help to encourage the development of alternative forms of assessment that could be seen to complement other assessment in different courses, particularly those related to developing professional values. The opportunity to do this was increased by the development of new courses in the revised curriculum. The project also sought to develop a rigorous intellectual basis for considering the continued use or introduction of key forms of assessment, through both extensive literature reviews and analysis of survey findings.


The process followed by the Law School

Due to the size of the curriculum review process in the faculty, no staff were practically available to give substantial commitment to the Assessment Project. As a result, funding provided by the university was used to employ two staff. Both had strong connections to the issues under consideration. One was a law graduate who had returned as a sessional teacher and was completing a research degree in student wellbeing; the other was a current JD student who had a degree in education and significant work experience. Together with the Associate Dean (Education), the Director of Learning and Teaching and the faculty’s Learning and Teaching Fellow (who was also an experienced student academic adviser), the team had a good balance of teacher and student perspectives on assessment practices. Much of what was undertaken and achieved in the Assessment Project was due to the efforts of this team.

One significant factor in the success of the project was the ability to undertake the project’s tasks – such as curriculum mapping – in ways that were seen to be complementary to the assessment review process. In fact many staff may not initially have realised that two projects were running concurrently. Overall this meant that the initiatives of the project did not run into any real opposition, but on the downside the impact of the initiatives may have been more hidden because of the spotlight on curriculum review. Time will tell whether acceptance remains high as the policy changes roll out. Employing dedicated staff was an efficient way to achieve the project’s goals rather than attempting to second already busy faculty staff, but one downside was a siloing effect on much of the knowledge gained. With the end of funding, those staff are likely to be lost to the faculty.
While reports and updates were regularly produced for faculty information, in hindsight, establishing a steering group of academic staff who could have reviewed progress regularly might have been a way of ensuring a broader and deeper awareness of the project’s work.


Building an evidence base: Discovering the current approaches to assessment

A first objective for the Law School was to become aware of the current landscape of assessment and how certain assessment types are privileged over others. This was intended to provoke a discussion over whether some forms of assessment were overly repeated and whether this led to over-assessment of some learning outcomes rather than others. To enable this discussion, a comprehensive analysis of all summative assessment in 2010 was undertaken, a process similar to that undertaken across the rest of UNSW. The audit involved analysing every course outline for courses taught in both semesters, compulsory and elective. Set assessment was extracted and grouped according to standard assessment types. However, the results demonstrated that this level of analysis of the assessment types was limited in significant ways. For example, the audit put together a range of writing tasks into one category of ‘extended writing’ but failed to identify more specifically what forms of writing this amounted to. Given that much of legal practice involves extended writing, more granularity was needed to determine whether the forms of writing were appropriately balanced. This led to three further audits designed to expose the degree of diversity in assessment practices.

The first audit involved retrieving every examination paper set in the courses audited in 2010 and further disaggregating those exams by type of question asked and degree of scaffolding provided. This led to a number of key findings. Overwhelmingly, examinations involved either a problem question alone or paired with an essay-style question. Examinations were largely in the compulsory core rather than the electives. The degree of scaffolding varied across the stages of the degree and was often higher in later courses.
On closer inspection, this was often because students were being directed not to analyse certain issues and instead to concentrate on particular aspects. As such,


the scaffolding appeared to increase the complexity. These findings provided important background to the parallel curriculum review process and informed discussion on what should be the appropriate balance of assessment. The second audit involved collecting all the data from course outlines that related to the assessment of class participation and following that up with a call for staff to supply any additional information that they handed out separately. The aim was to assess the range of forms of assessment of class participation and the degree of use of feedback against criteria (often through rubrics). The findings of this audit then led to a report on the range of approaches to class participation in the Law School, the distillation of these practices into four main typologies of class participation and some criterion-based rubrics based on the best practice examples collated, further discussed below. The third audit involved asking staff to outline the nature of any forms of assessment that did not fall within the assignment, exam, participation triad. This information was then collated and presented to the faculty as a report on innovative assessment practices, in order to encourage further consideration of assessment development. Many of the assessment types were also collated in the typologies of assessment document, discussed below.
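The audit step described above (extracting set assessment from each course outline and grouping it by standard assessment type) amounts to a simple tally. A minimal sketch follows; the course codes and category labels are invented for illustration, not drawn from the actual 2010 audit.

```python
from collections import Counter

# Hypothetical (course, assessment type) pairs extracted from course
# outlines; the real audit worked from the full 2010 outline set.
audit_records = [
    ("LAWS1001", "final exam"),
    ("LAWS1001", "class participation"),
    ("LAWS1052", "extended writing"),
    ("LAWS1052", "final exam"),
    ("LAWS2311", "extended writing"),
    ("LAWS2311", "class participation"),
    ("LAWS3401", "group work"),
]

# Group set assessment by standard type and report the distribution.
type_counts = Counter(kind for _, kind in audit_records)
for kind, n in type_counts.most_common():
    print(f"{kind}: {n}")
```

The limitation the chapter identifies follows directly from this design: a single label such as "extended writing" swallows every variant of writing task, so a second pass with finer-grained categories is needed before the balance of forms can be judged.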

Building an evidence base: Discovering the student experiences of assessment

Complementing the audit of the nature of assessment set, two comprehensive surveys of students were undertaken. One focused on attitudes to assessment types, the other on broader issues of student engagement with learning.

The Law School Assessment Survey

In mid-2012 all LLB and JD students were sent a comprehensive survey on the key forms of assessment set in the Law School: class


participation, group work, scenario-based problem assignments, essays and exams. They were asked about their experiences of, and attitudes towards, those forms of assessment. The response rate was 10–15 per cent, and many responses included extensive text comments. Key findings included that students were extremely happy with problem assignments as the fundamental assessment tool and were reasonably happy with exams and essays. Two-thirds considered that class participation enhanced their learning and improved their oral skills, and were in favour of its retention as a marked form of assessment, but a significant number found it stressful and were uncertain of the criteria on which it was marked. Students generally accepted that group work was a valid form of assessment but were strongly resistant to any increase in its use. In terms of feedback, significant numbers of students reported that written feedback on essays was a significant method for self-improvement. These findings, particularly when broken down by cohorts, will provide important empirical evidence to back up discussions about the appropriate mix and use of assessment. They provide a balance to the assumptions of staff and extrapolations from literature in other fields. The survey of law student attitudes to class participation appears to be particularly significant. As far as we know it is the only survey of its kind since a similar survey of UNSW law students in the 1970s.

The Law School Survey of Student Engagement

In November 2012, after a year of planning and negotiation, all law students were invited to participate in the first Australian version of the Law School Survey of Student Engagement (LSSSE). LSSSE is a US-based survey of law student engagement conducted since 2002, and to date 178 law schools in the USA and Canada have taken part. It is based on the NSSE survey of US college students, with the questions tailored to issues relevant to law students. The UNSW survey was further tailored for Australian law students. UNSW Law School is the first law school outside of North


America to be involved, and it is hoped that the success of the survey as part of the Assessment Project will lead to broader involvement by other Australian law schools. The survey covers a range of issues associated with student engagement, with a focus on engagement with assessment, including how much time students devote to assessment and preparing for class, what critical reasoning skills they think they develop through their degree, and their career intentions. These attitudes are critical to an understanding of what motivates students to do their best in assessment and also provide indications as to what aspects of their study they most value. Four hundred and forty-eight UNSW students completed the survey, which had 111 items. The results could also be compared with 25,000 North American responses, giving for the first time a direct international comparison of the Australian and North American law student experiences.

Combining the surveys

Data from the audits and both surveys now provides a detailed insight into the student experience of assessment, and UNSW Law will be able to develop strong evidence-based responses and justifications for the assessment regimes in its degree programs. This degree of empirical data on assessment is unparalleled in the faculty’s history. It will provide a firm basis for ongoing discussions about efficient and effective assessment strategies and for benchmarking the efficacy of changes made.

Building an evidence base: The research database and rationales for assessment

In parallel with the audits and surveys, a comprehensive review of the literature on the key forms of assessment in law schools was undertaken. This included both publications by legal education scholars and education scholarship more broadly. This research led to three outcomes.


First, a database of informative and useful articles was compiled, containing publication details, abstracts and links to online databases. This was organised into topic headings and placed on the Law School’s internal website as an ongoing resource for staff. The intention is that the database can be used by staff wishing to write up their teaching practices for publication, to provide easy access to key articles as part of professional development, and as an easy introduction to the scholarship of teaching and learning for new staff. To that end a ‘Top 5’ list has also been created in key areas. It is intended that staff will add to the database over time.

Second, the knowledge generated by this literature review was collated into a number of discussion papers for law staff that theorise on and critique key assessment methods. It is intended that these discussion papers will then be compared with the survey results to inform further use and development of these forms of assessment, and the resulting analysis will be made publicly available.

Third, the literature review informed the development of assessment typologies and feedback criteria.

Building clarity: Assessment typologies and criteria-based feedback

We hypothesised that one significant source of stress for students, and of communication difficulty for staff, was the variation in format and grading given to forms of assessment that have the same name. Thus class participation can involve free-form contributions, prepared answers, formal presentations, and so on. This was confirmed both by the results of the student assessment survey and by the variety of approaches to class participation we found in the 2010 audits, and underpinned by the range of practices described in the academic literature review. Consequently, an aim of the Assessment Project was to increase the efficiency of explanation of assessment formats and to link learning outcomes more explicitly to various forms of assessment, by describing typical variants of the main forms of assessment and producing associated default feedback criteria


(rubrics). These could then be made available to students as signposts to the expected assessment in the program and also as default explanations to be adopted by staff in setting assessment. The goal is to approach a degree of standardisation of assessment forms from the bottom up but in a way that respects the professional ability of teachers to develop their own adapted forms of assessment. This creation of typologies has significant merit and efficiencies in itself. It also contributes to the broader goal of a more programmatic understanding of assessment by unpacking the different learning outcomes of different variants of assessment types. This is necessary to allow a discussion on what learning outcomes are to be assessed in the degree program. Initially it had been thought that by separating out the types of assessment tasks, distinct rubrics would become clear for each type. However, it became clear that while the format of the tasks might differ, the underlying learning outcomes remained largely similar. Because of this, it was decided to develop generic criterion rubrics for the main assessment types and leave staff to demonstrate the different emphasis given to the criteria in different forms of an overall type of assessment. The rubrics were discussed by the faculty at its 2012 retreat and endorsed in 2013 as appropriate generic statements of how those tasks would be assessed. Where appropriate, all courses will now use these rubrics or alternatives in describing and providing feedback for assessment tasks. Feedback from staff and students in 2013 will determine their final forms.
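The design decision described above, a single generic criterion rubric per assessment type with staff varying only the emphasis given to each criterion across variants, can be sketched as shared criteria plus per-variant weights. The criterion names and weights below are illustrative assumptions, not the Law School's actual rubrics.

```python
# Generic criteria shared by all variants of one assessment type; each
# variant re-weights the same criteria rather than redefining them.
# Criteria and weights are invented for illustration.
CRITERIA = ("legal reasoning", "use of authority", "written expression")

VARIANT_WEIGHTS = {
    "research essay":     {"legal reasoning": 0.3, "use of authority": 0.5, "written expression": 0.2},
    "problem assignment": {"legal reasoning": 0.6, "use of authority": 0.2, "written expression": 0.2},
}

def overall_mark(variant: str, criterion_marks: dict) -> float:
    """Combine per-criterion marks (each 0-100) using the variant's weights."""
    weights = VARIANT_WEIGHTS[variant]
    return sum(weights[c] * criterion_marks[c] for c in CRITERIA)
```

Under these hypothetical weights, a piece marked 60 for legal reasoning, 80 for use of authority and 70 for written expression would score 72 as a research essay but 66 as a problem assignment: the criteria (and the feedback vocabulary students see) stay constant while the emphasis shifts with the task.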

Building clarity and purpose: Revised learning outcomes, graduate attributes and comprehensive course mapping

The development of a new compulsory law curriculum at the same time as the Assessment Project provided an opportunity to embed alignment of assessment in the development of the new courses. This was undertaken in a multi-layered approach. The aim was to produce clear explanations of how assessment built towards the learning outcomes of the degree – not only for students but also for staff.

Program learning outcomes

The first stage was the development of a discipline-wide set of six graduate attributes, known as threshold learning outcomes (TLOs). The Law School played a significant role in the articulation of these standards in 2011 (LLB) and 2012 (JD), and they provided a generic basis for the Law School's curriculum review. Staff were asked to articulate what they considered to be the key qualities graduates should have – and be guaranteed by assessment. These suggestions were then matched to the TLOs, and an expanded set of learning outcomes, known as the program learning outcomes (PLOs), was drafted. What was important in this process was the development of the PLOs from the ground up. It became clear that certain graduate qualities which could have been folded into the general TLOs were so important to the Law School's identity and the character of its graduates that they were best expressed as separate PLOs. It also became clear that the PLOs could be grouped into three key pillars of the degree – knowledge, analytical skills and professional skills:

• Knowledge: Graduates of UNSW Law will understand and appreciate:
  – legal knowledge in its broader contexts
  – Indigenous legal issues
  – principles of justice and the rule of law.
• Analytical skills: Graduates of UNSW Law will have developed the skills of:
  – statutory interpretation and analysis
  – legal reasoning
  – legal research and writing
  – reform-oriented analysis of law and policy
  – application of interdisciplinary perspectives to legal issues.


• Professional skills: Graduates of UNSW Law are professionals with:
  – communication skills
  – interpersonal skills
  – professional and ethical dispositions and values
  – capacities for self-management.

This list of learning outcomes has significantly clarified the intent of assessment and learning in the law degrees, and it does so in a concise manner helpful for those external to the faculty. However, what it cannot do is provide detailed guidance to staff and students on how these outcomes might be achieved. As a result, a third tier was developed underneath the PLOs: the course learning aims. This much longer list constitutes a series of possible routes towards achieving each PLO, representing approaches taken in various courses. The list is not intended to be closed or prescriptive. Instead it aims to flesh out the PLOs for students and staff and encourage understanding of what could form part of the generally worded PLOs. For example, PLO 1.3 (principles of justice and the rule of law) currently contains the following course learning aims:

• principles of justice and the rule of law
• tensions between law and justice/morality
• distributive justice, including social justice
• corrective justice
• justice as desert and retributive justice
• restorative justice and ADR
• legal protection of rights, including by the Constitution
• legal practitioners' duties to clients, the court and society
• due process and natural justice
• the maintenance and operation of the rule of law
• constraints upon government power and the review and correction of government decisions
• the rule of law in context.


Mapping courses to the program learning outcomes

Once the tiered outcomes had been developed, all learning outcomes and assessment in the newly revised curriculum were mapped to them. As part of the mapping process, all compulsory course descriptions were revised into a standard format and all learning outcomes recast as active, student-centred activities. Every learning outcome was required to be warranted by an item of assessment. Staff were then asked to nominate the PLO to which each learning outcome mapped and, in conjunction with this, the top three PLOs associated with each item of assessment. To avoid the exercise being seen as a negative compliance process, a member of the Assessment Project team sat with each course convenor and discussed the vision of the course, translating that vision into learning outcomes tied to assessment. The overwhelming feedback was that staff found this an inspiring process of reflection on the aims of their courses, with benefits independent of the mapping itself. The mapping was compiled in an Excel spreadsheet developed by the Learning Unit at the University of Technology Sydney. This process led to three different mapping outcomes, presented at the 2012 faculty retreat. The first chart set out the quantity of assessment items set for students in the compulsory part of the degree, to illustrate the degree of repetition and provide a contrast with the results of the 2010 exercise. It demonstrated that there had been a significant shift towards more professionally oriented forms of assessment, but that more could be done to provide variety and increasing levels of complexity in final examinations. The second chart mapped the degree to which the PLOs were 'hit' by the learning outcomes. Given the restriction of only one PLO for each learning outcome, this produced a heuristic sense of the main emphases in the degree and provoked significant discussion. The third document (Table 9.1) mapped the top three PLOs for each assessment item.
Table 9.1  Mapping of program learning objectives against assessment items

[The table lists the compulsory courses by year and semester – from Introducing Law and Justice (Year 1, Semester 1) through Torts, Principles of Public Law, Criminal Justice, Defining Crime, Principles of Private Law, Contracts, Administrative Law, Lawyers, Ethics and Justice, Equity and Trusts, Land Law, Federal Constitutional Law, Resolving Civil Disputes, Court Process, Evidence and Proof, Law and Practice in a Global Context and the theory courses, to Business Associations (Year 5, Semester 1). For each course it records how many assessment items nominated each PLO (PLO 1.1 Legal knowledge in context; PLO 1.2 Indigenous legal issues; PLO 1.3 Principles of justice and the rule of law; PLO 2.1 Statutory interpretation and analysis; PLO 2.2 Legal reasoning; PLO 2.3 Research and writing skills; PLO 2.4 Law reform and policy analysis; PLO 2.5 Interdisciplinary perspectives; PLO 3.1 Communication skills; PLO 3.2 Interpersonal skills; PLO 3.3 Professional and ethical disposition and values; PLO 3.4 Self-management), with a final row giving the percentage of assessment tasks mapped to each PLO (62, 5, 13, 33, 56, 44, 21, 3, 28, 16, 10 and 10 per cent); the alignment of individual cell values did not survive extraction.]

This provided a more nuanced sense of what skills or knowledge was being emphasised to students in the choice of assessment. Importantly, it highlighted very clearly which PLOs two pivotal new innovations in the degree – a new course on Lawyers, Ethics and Justice and an integrated legal skills component running vertically through the degree – would need to assess: an important aid to the development of those courses. It also demonstrated that some of the PLOs considered fundamental to the character of the degree were far less heavily assessed than those that developed professional skills. Discussion of this finding led to the recognition that these PLOs were in fact repeatedly assessed as part of legal reasoning and writing tasks, but that more could be done to make students aware of the importance of these aspects of the tasks. To take one example, PLO 1.2 (Indigenous legal issues) was central to much of the assessment in the criminal law courses, but this had not been captured in the mapping exercise. Additionally, staff were asked to consider their courses within degree-wide themes. This was both to overcome this limitation of the mapping process and to further emphasise the cultural shift from assessment being relevant only to the content of the course in which it is set, to a broader understanding of how assessment develops student capacity throughout a degree. An initial set of themes – designed to complement the emphases in the courses and overlap the PLOs – included Indigenous Legal Issues; Human Rights; Justice and the Rule of Law; Environment, Class, Gender, Race and Disability Issues; Experiential Learning; and Personal and Professional Development. These themes were written up into documents that provide a narrative of how the themes run throughout the degree, linking courses and explaining their role in assessment. The theme documents will develop over time as the new curriculum is taught, and more themes are likely to be added.
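The chapter describes tallying, for each assessment item, the top three PLOs nominated by its convenor, and charting how often each PLO is 'hit' across the compulsory curriculum. As a minimal sketch of that tally, the snippet below counts PLO nominations from a mapping of (course, assessment item) pairs to PLO lists. The PLO labels follow the chapter, but the item-level data and the function name `plo_hits` are invented for illustration; the faculty's actual mapping lived in an Excel spreadsheet.

```python
# Hedged sketch of the tally behind a Table 9.1-style chart.
# The mapping data below are illustrative, not the Law School's real data.
from collections import Counter

# (course, assessment item) -> top three PLOs nominated by the convenor
mapping = {
    ("Introducing Law and Justice", "research essay"): ["PLO 1.1", "PLO 2.3", "PLO 1.3"],
    ("Torts", "final exam"): ["PLO 2.1", "PLO 2.2", "PLO 1.1"],
    ("Land Law", "mid-semester problem"): ["PLO 2.2", "PLO 2.1", "PLO 3.1"],
}

def plo_hits(mapping):
    """Count how many assessment items nominate each PLO."""
    return Counter(plo for plos in mapping.values() for plo in plos)

if __name__ == "__main__":
    for plo, n in sorted(plo_hits(mapping).items()):
        print(plo, n)
```

A tally like this makes under-assessed outcomes visible at a glance, which is how the faculty spotted PLOs that were 'fundamental to the character of the degree' yet rarely nominated.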


Building innovation: New forms of assessment

One important aspect of the curriculum review was a belief that the way the faculty teaches law should shift towards a greater emphasis on practical skills and professional values, a move also currently occurring in the United States. Part of the underlying aim of the assessment audits and mapping was to demonstrate the extent to which existing assessment was capable of developing those broader learning outcomes. In doing so it became clear that the program would benefit from the introduction of new forms of assessment and a stronger emphasis on formative elements. Key developments over the life of the project have been:

• experiential learning
• group assessment
• online formative assessment
• values-based assessment.

Experiential learning

The faculty has made a commitment to placing experiential learning at the centre of the degree and to further consolidating its position as the leader in providing such learning to most, if not all, of its students. In doing so, the faculty has committed itself to growing the internship program to the point where every student who wishes to take an internship can do so. This was achieved in 2012, with a majority of the relevant cohort undertaking internships – and more internship places offered than were taken up. As part of this growth, the faculty has sought to make the assessment of internships more rigorous and reflective. Consequently, the faculty has:

• articulated an experiential pathway through the degree, with mandatory court visits in the early years, role-playing in the middle years and an opportunity to take internships towards the end of the degree
• developed an assessment strategy for this stream that enhances students' reflective learning without increasing staff costs, through the introduction of small practice group classes
• placed a stronger emphasis on the centrality of workplace experience for interpreting legal materials and classroom discussions
• developed a clear statement of the role of experiential learning in the Law programs for dissemination to students and staff.

From 2014, course outlines will highlight assessment related to experiential learning. Experiential learning in these assessments is clearly linked to learning outcomes and graduate attributes.

Group assessment

Group work has not been widely adopted as a form of assessment in Law, yet the ability to work and communicate as a member of a team is a learning outcome now sought by the profession. The student survey revealed ambivalence about the assessment of group tasks, but also recognition that teamwork is an important professional skill. As part of the Assessment Project, the faculty has developed a theoretical justification for group work in law, collated examples of best practice and developed a generic rubric for student self-assessment of the process of group work. That theoretical framework involves progression from cooperative, through collaborative, to team-based small group activities. The faculty can now articulate its approach to the development of group work competency in the curriculum as follows:

• Formative cooperative group work will be encouraged as part of in-class teaching in earlier year courses.
• Collaborative group work assessment tasks will be used in Equity and Trusts; Resolving Civil Disputes; and Court Process, Evidence and Proof, all of which are compulsory core courses in the new LLB and JD degrees. In Equity and Trusts, a group-based trust deed drafting exercise will constitute the mid-semester assignment; in Resolving Civil Disputes, small groups of students will place themselves in the position of legal practitioners to advise on dispute resolution options and undertake a drafting and advocacy assessment; and in Court Process, Evidence and Proof, the mid-semester group assessment task will provide an opportunity for students to observe and reflect upon professional dispositions witnessed during court observations.
• Later year electives can then build team-based group assessment into their teaching strategies. The review of elective teaching and assessment is a major project for 2013.

Online formative assessment

The combination of program simplification and curriculum review led the Law Faculty to the decision to no longer teach Legal Research and Writing as a stand-alone course. UNSW Law had been a leader in Australia in creating such a course to give appropriate emphasis to these skills. However, the utility of online forms of formative assessment, and the advantages of adopting a just-in-time approach to skills development in association with other assessment tasks, led the faculty to integrate the development of these skills throughout the entire degree and to support them with online modules and formative assessment. Work on this development will be a major project for 2013.

Values-based assessment

A fundamental concern in US legal education is an acknowledged lack of ethical awareness and professional values in law graduates. UNSW Law has long had a reputation for producing graduates who possess such values, but this was largely based on the inculcation of values by osmosis. With a larger student body and increased use of sessional staff, this approach has been recognised as insufficient and difficult to measure. Concerns in the US, and moves there to mandate such aspects of legal education, have been a catalyst for the faculty to reaffirm the importance of those values and to incorporate them more explicitly into the degree program. One significant way of doing this is to design assessment tasks that require awareness of professional values and an emerging ability to apply them. This will be achieved in 2013 in three ways:

• Existing assessment will be adjusted to emphasise the real-world professional implications of the analysis students undertake.
• A new foundational compulsory course on Lawyers, Ethics and Justice will be designed around professional values and explicitly linked to other compulsory courses. Assessment in that course will be developed with this in mind.
• New, explicitly reflective assessment around professional values will be developed to complement the new Lawyers course.

Reference

Law School Survey of Student Engagement (LSSSE).


10

The Faculty of Medicine: Diversity, validity and efficiency of assessment

Philip Jones, Lois Meyer, Rachel Thompson and Glenda Lawrence

A broad range of programs and courses is taught in the Faculty of Medicine, so its assessment needs and practices are diverse, ranging from the evaluation of scientific and clinical graduate capabilities to interpersonal and interactional abilities. The faculty had reviewed and enhanced assessment practices in several programs over the previous decade. The Assessment Project provided the impetus to strategically review, evaluate and further enhance assessment practices where gaps were identified. The targeted approach taken by the faculty led to multiple projects whose common outcome was a focus on scholarly activities to promote a greater understanding of assessment principles and practice. The project resulted in significant improvements in assessment methods in both undergraduate and postgraduate programs, and a greater appreciation of the principles of, and good practices in, assessment. New developments, including greater use of technology, are expected to lead to further improvements in the quality and efficiency of assessment across the faculty.


Faculty background and structure

The Faculty of Medicine was formally established in 1960 to deliver an undergraduate medicine program. Although the primary focus of teaching remains medicine, the faculty's teaching activities now include a second undergraduate program in exercise physiology, a range of postgraduate coursework programs in eight health-related disciplines, and multiple courses in biomedical sciences delivered within Faculty of Science programs. In 2013, over 3000 students are enrolled in Faculty of Medicine coursework programs, including 1633 medical students, 355 exercise physiology students and 1027 postgraduate coursework students. Overall, approximately 20 per cent of undergraduate students are international, mainly from the Asian region. Postgraduate international students come from many different countries, with the Asia-Pacific, African and Middle Eastern regions strongly represented.

The faculty comprises nine separate schools. Two, the School of Medical Sciences and the School of Public Health and Community Medicine (SPHCM), are located on the UNSW campus. The other seven schools, including five clinical schools, are located within the 12 teaching hospitals associated with the medicine program. There are also 20 separate research institutes and centres affiliated with the faculty. The Faculty of Medicine employs over 1200 academic, professional and technical staff, while over 2000 clinical conjoint academic staff provide an invaluable contribution to the Medicine program. Most academic staff are actively undertaking research, and the faculty fosters strong linkages between teaching and research roles so that teaching is informed by research. There is also a focus on professional development in both research and teaching.

The authors all have significant roles in learning and teaching. There is a Learning and Teaching Fellow appointed at faculty level (Rachel Thompson), as well as dedicated learning and teaching staff in the largest schools, including the SPHCM (Lois Meyer). The ADE (Philip Jones) has overarching responsibility for the delivery of quality programs in the faculty, while the Associate Dean (Postgraduate Coursework), Glenda Lawrence, is responsible for the range of postgraduate programs taught by the Faculty of Medicine.

Faculty teaching and assessment practices

The Faculty of Medicine embraces the curriculum model of constructive alignment (Biggs and Tang, 2011) in program design, delivery and assessment, whereby students are assessed on course-level learning outcomes that contribute to the achievement of defined program-level graduate capabilities (see Figure 10.1). This model has been progressively implemented in Medicine programs over the past decade.

Figure 10.1  Model of constructive alignment (adapted from Biggs and Tang, 2011). [The figure shows program graduate capabilities informing course learning outcomes, which are addressed by assessment tasks; each task has assessment criteria that feed into marking and feedback.]

There is a broad range of assessment needs within the Faculty of Medicine, owing to the diversity of courses within the programs taught by the faculty, differences in the duration and structure of programs, and the range of professional capabilities students need to demonstrate prior to graduation. These span science-based and clinically focused capabilities as well as interpersonal and interactional abilities. Several postgraduate qualifications have a strong focus on the social sciences, which further diversifies assessment needs within the faculty.

An entirely new Medicine program was introduced in 2004, following an extensive review of the curriculum. The program is outcomes-based, with all learning and assessment activities aligned to eight professional capabilities. Students move through three phases across a six-year program. In Phase 1 (years 1 and 2) the focus is primarily on basic and clinical science, learning key skills and introducing students to medical practice (clinical skills, ethics and evidence-based practice). In Phase 2 (years 3 and 4) students move on to learning within a more clinical environment and to more applied knowledge; in Phase 3 (years 5 and 6) students consolidate their knowledge and skills from earlier years and focus on gaining clinical experience and skills within the clinical practice setting. In each phase students must meet requirements for each of the eight capabilities in order to progress and, ultimately, graduate.

Multiple methods of assessment are employed across the three phases of the program to collectively monitor and document students' progress in each capability against predetermined standards. These include knowledge-based, competency-based and workplace performance–based assessments. Assessment items (e.g. written exam questions, assignments, clinical skills stations) are mapped to the graduate capabilities to ensure the alignment of assessments to the outcomes. Hence, course and end-of-phase assessments not only address specific course and phase objectives but also provide evidence of achievement in the graduate capabilities. Students are also required to provide evidence of their achievement in each of the capabilities in an e-portfolio, which they submit together with a self-assessment of their progress, in the form of a reflective essay, for review at the end of each phase. Using a programmatic approach to assessment ensures that the graduate capabilities represent authentic outcomes, which provide the focus for students' learning.

The emergence of exercise physiology as a health discipline led to the establishment of a new program in Exercise Physiology in 2010. The program evolved from a science program focused on exercise science to adopt a greater focus on the clinical practice of exercise physiology. Assessment practices within the new Exercise Physiology program were essentially a continuation of those in the science program, but the need for different methods of assessing clinical competency was recognised.

The range of postgraduate coursework masters programs, in eight different disciplines, focuses on professional extension for medical, nursing, allied health, behavioural sciences and medical sciences graduates in areas such as public health, health services management, reproductive medicine, drug development and forensic mental health. Assessments are designed to draw on students' existing health professional knowledge and support them to develop new advanced capabilities within their area of professional health specialisation, by focusing on authentic practice-based tasks.

The Faculty of Medicine and the Assessment Project

The university Assessment Project provided the opportunity for Medicine to review and evaluate assessment practices within its teaching programs, recognising that these were at different stages of maturity and that assessment needs across the faculty were diverse. The faculty adopted a targeted approach to improving the quality and efficiency of assessment practices, rather than comprehensively reviewing all assessment practices in the faculty. The Assessment Project was coordinated by the ADE and the Associate Dean (Postgraduate Coursework) (AD(PGC)). The ADE was primarily responsible, in consultation with the relevant program authorities, for determining the most important areas needing attention in the undergraduate programs. The timing of the project coincided with the accreditation of the Exercise Physiology program, which also required a review of assessment practices in the program. This review was conducted with the assistance of the external consultant, Professor Jan Orrell, appointed by the university to support the project. The AD(PGC) identified the Masters of Public Health, Health Management and International Public Health, taught by the SPHCM, as the focus of the Assessment Project: over 70 per cent of postgraduate coursework students in the faculty were enrolled in one of these programs, student needs were diverse, with both international and domestic students studying on campus and by distance, and the L&T Fellow had worked closely with academic staff on assessment practices over several years.

Prior to the Assessment Project, the faculty L&T Fellow had implemented a number of measures to enhance the scholarship of L&T in the faculty. These included establishing a special interest group of faculty academic staff with an interest in medical education, supported by regular seminars, workshops, newsletters, a website and an annual forum showcasing research in medical education. During the period of the Assessment Project, the activities of the group focused on assessment. The faculty's support for focusing on the scholarship of L&T and assessment was reflected in a series of curriculum projects undertaken within the SPHCM prior to the Assessment Project and led by the school's Postgraduate L&T Fellow. Through a series of UNSW strategic learning and teaching grants, the school developed a range of new approaches to embedding online scenario-based assessment strategies within its courses, as part of its commitment to professionally focused practice-based assessment.

Below, we describe targeted assessment projects and their outcomes in three areas:
1 Undergraduate programs in Medicine and Exercise Physiology.
2 Postgraduate programs in Health Management, Public Health and International Public Health.
3 Promotion of scholarship across the faculty.

The undergraduate assessment project

The Medicine program is managed at faculty level, with almost all courses 'owned' by the faculty rather than by schools. This approach has ensured that the delivery and assessment of all courses is standardised across schools and clinical sites. The specific questions the project raised for the Medicine program were whether the assessment approach was achieving quality outcomes, and how the efficiency of assessments across multiple sites could be improved. In response, the faculty sought to evaluate the assessment practices in the Medicine program and to develop new technologies to improve the efficiency of examinations.

The Exercise Physiology program is also managed at faculty level, although the School of Medical Sciences delivers the majority of the courses. The successful implementation of the Medicine program using a centralised approach provided a model for the new Exercise Physiology program. The university Assessment Project raised different challenges for Exercise Physiology, as there had been limited integration of assessments in the previous Science program and the introduction of clinical practicums required different assessment methods.


Assessment practices in the Medicine program

The project focusing on the Medicine program analysed student outcomes from all assessments performed over the first five years of the new program. A comprehensive analysis of all assessment data, using traditional and modern psychometric models, was performed to confirm the reliability and validity of the assessment methods. The assessments include course-based assessments and summative assessments at the end of each two-year phase in the six-year program. The principal findings and subsequent actions are noted below.

Phase 1:
• The end-of-course examinations for all courses in Phase 1 (years 1 and 2) provided a reliable measure of a student's performance at the level of the course as well as of overall performance in the phase. There were no recommendations to change the format of the Phase 1 course assessments.
• The end-of-Phase 1 written examinations were reliable and also correlated with student performance throughout the phase. However, the summative practical examination, which primarily assessed students' learning from practical classes, was less reliable and was replaced with more frequent continuous assessments during the phase.
• The end-of-Phase 1 clinical skills examination was reliable, although the grading criteria for students' performance in different domains showed some internal inconsistency and were modified.

Phase 2:
• The end-of-Phase 2 integrated clinical examination, which consisted of two parts – an assessment of clinical skills and an assessment of knowledge in a viva format – was reliable overall, although the viva component was less so. Accordingly, the viva component was omitted from this examination and a new written examination introduced. This also significantly improved the efficiency of this component of assessment, moving away from staff-intensive viva assessments to a computer-based multiple choice question format.

Phase 3:
• Workplace-based assessments in the clinical clerkships in Phase 3 were intrinsically less reliable, as they depend more on subjective judgments made by supervisors. However, multiple course assessments across the two years of Phase 3 provided a reliable summation of student performance.
• The end-of-Phase 3 integrated clinical examination was reliable and correlated with students' overall performance throughout Phase 3. There was less correlation between discipline-specific course results and the corresponding discipline-specific results in the final examination. To improve the reliability of course assessments, more detailed rubrics were developed for clinical supervisors.

Whole program:
• The portfolio examination, which is completed progressively across the whole program, monitors and assesses students' achievements against the graduate capabilities. Overall, the portfolio examination was reliable and showed significant correlations with other assessments. However, assessment of specific graduate capabilities was less reliable, particularly for those relying mostly on student reflections. The assessment criteria for several of the capabilities were modified to improve reliability.

Overall, the programmatic approach to assessment in the Medicine program provides a valid and reliable means of assessing the graduate capabilities.

216 Part II


Assessment of clinical competency in the Exercise Physiology program

The faculty identified assessment in the Exercise Physiology program as a priority for review given the incremental changes to the program in preceding years, which included an increasing emphasis on clinical learning. The review recommended that the assessment of clinical competency needed further development and that workplace-based assessments for students on clinical practicums should be introduced.

In response to these recommendations, the faculty investigated the use of assessment methods used in the Medicine program for the Exercise Physiology program. An objective structured clinical examination (OSCE) was introduced as part of the assessment of the clinical practicum course in the final year of the program. The OSCE is a standardised method for assessing clinical competence in a simulated clinical setting. The use of an OSCE to assess clinical competence in Exercise Physiology had not been previously reported. The examination was designed to assess clinical skills, including communication skills and students’ skills in the use of various technical apparatuses in exercise physiology.

The faculty ran three pilot exams to design and modify an Exercise Physiology OSCE. Following each pilot, staff and student feedback resulted in progressive refinements to several elements in the design and implementation of the OSCE. The grading system was refined so that students’ skills were assessed in three domains: communication skills, clinical skills and technical skills (i.e. use of exercise equipment). These changes were associated with an improvement in reliability as estimated by Cronbach alpha, which increased from 0.52 in the first pilot to 0.86 in the third pilot. Generalisability studies demonstrated that the major sources of variance were student ability and station content.
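As a side note, the Cronbach alpha statistic quoted here has a simple closed form and can be computed directly from a students × stations score matrix. The sketch below is a generic illustration in Python with NumPy, not the faculty’s actual analysis code; the function name and data are illustrative only.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_students x n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores),
    where k is the number of items (e.g. OSCE stations or domains).
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item across students
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of students' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly correlated items yield an alpha of 1.0, while items that vary independently of one another drive alpha towards 0, which is why refining the grading domains can raise the estimate.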
It was shown that the combination of seven stations and three domains achieved satisfactory generalisability and that increasing the number of stations or domains would not improve the reliability of the


examination significantly. The final design of the OSCE was subsequently included as a summative assessment in the final year clinical practicum course.

Following the successful development of the final year OSCE, competency-based assessments have also been included in earlier courses. Structured formative and summative assessments of exercise testing skills were added to two third-year courses, and a clinical viva and skills test was also developed and introduced for another two third-year clinical courses to provide more authentic assessments aligned to the course objectives. Workplace-based assessments, including a supervisor’s assessment based on observation of the student’s performance and a case-based audit of clinical cases assessed by the students, were also introduced in the clinical practicum courses.

Development of innovative assessment practices in undergraduate programs

The faculty used funding from the university Assessment Project to support the development of innovations in assessment in undergraduate programs and courses. These projects aimed to develop technology to improve the efficiency of assessments and several novel approaches to assessing graduate outcomes not readily assessed by conventional tests. These approaches are described below.

Online assessment item bank

The faculty initiated a project to develop an assessment item bank (AIB) as part of its bespoke curriculum management system. The purposes of the AIB were:
• to streamline the development, classification, review and storage of assessment items, including multiple choice questions, short-answer questions, simulated stations for clinical skills exams, and stations and items used in viva examinations; the process would ensure that all assessment


items were aligned with graduate capabilities and related to learning activities
• to allow the construction and delivery of knowledge-based examinations by traditional paper-based formats and computers.

The AIB was released for use in 2011. The functionality of the AIB includes:
• online authoring of items by staff, with submission for review and approval
• a defined workflow that provides a quality assurance process for items and exams before approval and use
• storage of assessment items, with a record of their use in exams and item analysis data following their use in exams
• the ability to store and incorporate images in items
• import and export features, including delivery of written examinations online using Question Mark Perception software.

Overall, the AIB is expected to improve the quality of assessments by ensuring item quality and efficient delivery of exams. By tagging all assessment items and delivering examinations using digital technology, the system can also provide more reliable data on students’ performance in exams, allowing for improved feedback and tracking. The AIB is being introduced for the Exercise Physiology program in 2013, and staff in medical science courses are also interested in exploring its capabilities.
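An item bank of this kind is essentially a state machine: each item moves from draft through review to approval, and accumulates item-analysis data each time it is used in an exam. The following Python sketch is purely illustrative; the class, states and fields are assumptions for explanation, not the faculty’s actual system.

```python
from dataclasses import dataclass, field
from enum import Enum

class ItemStatus(Enum):
    DRAFT = "draft"
    UNDER_REVIEW = "under_review"
    APPROVED = "approved"
    RETIRED = "retired"

@dataclass
class AssessmentItem:
    item_id: str
    item_type: str                                     # e.g. "MCQ", "short answer", "viva station"
    capabilities: list = field(default_factory=list)   # tagged graduate capabilities
    status: ItemStatus = ItemStatus.DRAFT
    usage_history: list = field(default_factory=list)  # (exam_id, difficulty, discrimination)

    def submit_for_review(self):
        if self.status is not ItemStatus.DRAFT:
            raise ValueError("only draft items can be submitted for review")
        self.status = ItemStatus.UNDER_REVIEW

    def approve(self):
        if self.status is not ItemStatus.UNDER_REVIEW:
            raise ValueError("only items under review can be approved")
        self.status = ItemStatus.APPROVED

    def record_use(self, exam_id, difficulty, discrimination):
        # Store item-analysis statistics after each exam administration,
        # so item quality can be tracked across years.
        if self.status is not ItemStatus.APPROVED:
            raise ValueError("only approved items may be used in exams")
        self.usage_history.append((exam_id, difficulty, discrimination))
```

The point of the defined workflow is that an item cannot reach an exam without passing through review, and every administration leaves an audit trail that feeds back into quality assurance.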

Use of iPads in observable assessments

Within the Medicine and Exercise Physiology programs, there is a series of assessments that require examiners to directly question or observe students completing various tasks. These examinations are held in various settings, ranging from facilities on the university campus to clinical examination rooms in hospitals. A student’s


performance in these examinations is judged by an examiner and recorded on an appropriate grading sheet. This project sought to develop application software for iPads, which could be used to administer these exams and record and report students’ results. The use of iPads should provide:
• More efficient and reliable delivery of information about the examination. The specific requirements of the examination will be imported by the iPad from the assessment item bank. This would include all essential information for the administrative staff, examiners and students, and any resources required for the examination (e.g. images).
• Improvement in the validity of the examinations by ensuring that all sites use the same information relating to the examination, that all examiners have standardised grading sheets which include guidance on grading, and that results entered by examiners are complete and validated, eliminating the risk of transcription errors.
• Improved efficiency in reporting results to students, as grades entered into the iPad are immediately uploaded to the faculty’s curriculum management system for reporting to students.
• More efficient and useful feedback to students. Examiners can select from predetermined comments or enter free text. Reports from the examinations, including specific feedback from examiners, can be readily generated for the students.

The application software has been developed and tested in several clinical skills examinations. Examiner feedback was very favourable, with immediate improvement in the feedback provided to students.

Assessment of the graduate capability of teamwork

Teamwork is an important graduate attribute of the Medicine program and is assessed throughout the program. This project sought


to review the learning and assessment activities relating to teamwork for each phase of the program. The project was undertaken with input from current students, who were provided with a framework to help them understand the relevance of teamwork skills for their current learning and future practice.

The primary focus of teamwork in Phase 1 (years 1 and 2) is the students’ engagement in small group teaching sessions and the process of working together for group project assessments. A tool to assess teamwork skills in collaborative learning in small groups was developed and is being trialled. The tool will provide a mechanism for peers to provide feedback and for students to self-assess.

Teamwork is one of the principal graduate capabilities assessed during clinical clerkships in Phase 3 (years 5 and 6). Currently, supervisors assess students based on feedback from members of the clinical team. A new instrument (T-MEX) for assessing students’ teamwork skills in clinical clerkships was developed, which is based on observing students in common interactions with other members of the clinical team. The T-MEX instrument is currently being trialled at some clinical sites. Students will be encouraged to use this instrument, reflect on feedback, develop relevant skills and repeat the cycle as many times as possible.

Following completion of the Assessment Project, the focus on assessment practices in the undergraduate programs is being maintained, with attention focused on benchmarking our practices with other Australian and international medical schools. The development of standards-based assessment for all graduate capabilities remains a major goal.

The postgraduate assessment project

Approximately 70 per cent of postgraduate coursework students at the Faculty of Medicine are enrolled in three programs delivered by the School of Public Health and Community Medicine (SPHCM): the Master of Public Health (MPH), the Master of Health


Management (MHM) and the Master of International Public Health (MIPH). Due to the diversity of students’ locations and employment patterns, many of the postgraduate courses are offered in internal and external mode with strong integration of online learning and assessment. Local students usually study part-time, including by distance, while international students study full-time on campus.

In 2007, in response to emerging global and local challenges in health workforce needs, and as part of the school’s regular curriculum quality-assurance processes, the MPH and MHM programs were extensively reviewed in consultation with industry stakeholders and academic staff. The programs were refined as outcomes-based qualifications to meet newly agreed specific graduate professional capability statements. The two masters programs were re-designed as parallel programs, underpinned by a shared approach with a common core course and commitment to professionally focused practice-based authentic learning and assessment (Herrington and Herrington, 2005). Subsequently, the two programs were offered as a dual degree. In 2010 the MIPH was introduced and offered as a dual masters program with either the MHM or the MPH.

These changes attracted significant growth in postgraduate enrolments. Although welcome, this growth created challenges: courses now contained larger and more diverse cohorts, while the overall coherence of the curriculum across the single and dual masters programs still had to be ensured. The university Assessment Project provided an opportunity for the school to review the consistency of assessment practices across its masters programs, with a particular focus on the principles of parity, diversity and sustainability, underpinned by constructive alignment.
The project also provided an opportunity to determine the extent to which the school’s commitment to authentic assessments, grounded in the dynamic complexities of real-world professional dilemmas and practice, was sufficiently evident within and across the postgraduate programs.


Review of assessment in postgraduate coursework programs in the SPHCM

The specific objectives of the project were:
1 To determine whether current assessment methods and criteria within and across courses taught in the MPH, MHM and MIPH programs were constructively aligned to graduate capabilities and course learning outcomes; were equitable in terms of weighting and difficulty; and were sufficiently diverse and efficiently designed and implemented.
2 To identify whether the assessment methods were equitable for internal and external students, and to determine whether and how online components supported or hindered assessment processes.
3 To evaluate the assessment methods to ensure that they were underpinned by principles of authentic and sustainable assessment and, collectively within and across the programs, were weighted to authentic professional practice-based assessments.
4 To identify opportunities to enhance assessment practices that would ensure more effective and efficient assessment methods while considering the needs of both postgraduate students and academic staff.

To address these objectives, the project team established an agreed set of evaluation criteria against which to determine the effectiveness of the assessment methods and processes in all courses within the masters programs. This was conducted through a desk review in consultation with academic staff to verify data accuracy. An online student survey (n=72) was also conducted which focused on student perceptions of and levels of satisfaction with assessment strategies and feedback across the postgraduate programs in both internal and external delivery modes. This method complemented data already distilled from focus groups that are conducted with students at the end of each semester. The findings of the desk


review, student survey and summary focus group data were used to inform future assessment practices across programs. The key findings are summarised below.

Constructive alignment
• All postgraduate SPHCM courses had a high level of constructive alignment between their learning outcomes and assessment methods.
• All course assessments included assessment criteria that were well aligned to the purpose and level of the assessment.

Diversity, parity and flexibility
• There is a high level of equitable assessment weighting and difficulty across courses, and similar weighting for similar types of assessment tasks across all courses in the MHM, MPH and MIPH programs.
• A diverse range of assessment methods was found within and across the masters programs. Most assessments are focused on professional practice tasks that are grounded in real-world issues and contexts rather than assessments that are theoretically focused or that foster underpinning cognitive understanding and skills (see examples in box). Most assessments are at an appropriate level for a masters qualification and are relevant to supporting current and/or future practice, given the strong weighting to professional practice-based assessment (see Figure 10.2, which is a compilation of figures 10.2.1–10.2.4).
• There is a large degree of flexibility of assessment tasks to meet the specific needs of public health and health management students in shared core courses in the MPH and the MHM programs. Core courses that are common to the newer MIPH program and the MPH require some further consideration to ensure that the specific needs of MIPH students are met.


Figure 10.2  Mapping assessments in Master of Public Health (MPH), Master of Health Management (MHM) and Master of International Public Health (MIPH) programs. Distribution of assessment tasks by program and course type (10.2.1 MPH; 10.2.2 electives; 10.2.3 MIPH; 10.2.4 MHM):

Assessment type                          MPH    Electives   MIPH   MHM
Professional practice-based assessment   67%    75%         60%    79%
Exam                                     16%    –           17%    –
Test/quiz                                 7%    1%           7%    –
Essay/extended writing                   –      5%          –      5%
Participation                            3%     9%          10%    5%
Group assessment                         7%     10%         6%     8%
Reflective journal                       –      –           –      3%

• There is strong parity in the design of assessments for internal and external students, both within the one course and across courses. However, for some courses, external students require greater feedback and increased online support from academic staff to achieve a sense of parity and a level of satisfaction comparable with those of internal students.


Some examples of professional practice-based assessments within the SPHCM masters programs

Example 1: Policy Studies*
Major assignment: Policy appraisal report (2000 words)
Students prepare a critical appraisal report of a specific health policy that has been enacted or a policy issue that is currently under debate. The report is written from the perspective of a policy analyst presenting a detailed critical analysis to a policy review committee. To address the professional focus of the task, the report includes the scope of the policy problem, the policy context and dynamics in its development, the evidence base drawing on scholarly and grey literature, the key strengths and limitations in the policy’s design, and a clear set of recommendations framed succinctly to inform the policy review committee.

Example 2: Health Leadership and Workforce Management*
Major assignment: Workforce management action plan (2500 words)
Students undertake a detailed analysis of a workforce or staff issue affecting the productivity and outcomes in a health service organisation (real or case study) and develop a proposed health management action plan to address the issue. The action plan includes the background organisational context, critical analysis of the issue and its impact, and proposed options and recommended strategies for addressing the issue, clearly integrating current scholarly health management literature to ensure an evidence-informed argument and a well-reasoned proposed action plan.

Example 3: Applied Research Methods in Public Health*
Three assignments: Literature review, research statement and research proposal
Students complete three staged, interlinked assessment tasks, with feedback provided at each stage, where the final objective is to develop a research proposal that could be included in a research grant funding application. The first assignment requires students to conduct a


literature search on a public health issue of their choice, select and review 10 refereed papers, and write a two-page narrative summarising the major themes and identifying a gap in the literature. In the second assignment, students write a one-page research statement that re-frames the research gap identified in their first assignment into research questions or objectives with supporting arguments. This forms the bridge to the final assignment, which is to write a detailed (3000 word) research proposal that includes an introduction to the topic, research aims, brief literature review, methodology, timeline, stakeholder participation, ethics approval requirements and a statement on the importance of the research.

* Courtesy of Rohan Jayasuriya, Joanne Travaglia and Lois Meyer, and Heather Worth, respectively.

Authentic and sustainable assessment

The weighting of assessments across all core courses and electives within the masters programs demonstrated a strong emphasis on authentic and sustainable assessment, with the majority of assessments in each program being professional practice-based. Core courses in the three programs each demonstrated a high emphasis on professional practice-based assessments. Eighteen electives across the MPH, MHM and MIPH programs were mapped and analysed, and 75 per cent of their assessment tasks were identified as professional practice-based, again affirming the strong emphasis on authentic real-world assessments across the programs (see Figure 10.2).

Efficiency
• Assessment tasks are designed for efficiency and sufficiency to meet student and academic needs for valid and reliable assessment; however, a small number of courses require further improvement to ensure more efficient design and


implementation.
• There are significant differences in class sizes for core and elective courses, and this requires different approaches to assessment in terms of efficiency of design and student support, particularly in the courses that teach quantitative methods. The core courses with high enrolments demonstrated a range of innovative approaches in seeking to address the challenge of a high number of both internal and external students, with cohorts enrolled in different masters programs.

The online survey was completed by 72 students, who were representative of students enrolled across all programs. A strong level of overall satisfaction was identified, and assessments were perceived as fostering critical thinking, conceptual understanding and reflective practice. However, feedback to students about their performance in assessments was identified as an area for improvement to enhance the quality of the student experience across internal and external delivery modes (see Figure 10.3).

Figure 10.3  Student survey (n=72) on assessment practices and feedback, School of Public Health and Community Medicine. The percentage of students who strongly or moderately agreed is charted for four statements: clear requirements and expectations; demonstrate critical thinking and reflect on learning; consistent approach to feedback across courses; feedback constructive, timely and meaningful.


Review recommendations and outcomes

Recommendations and outcomes from the review were:
1 Integration of assessment rubrics to assist in providing more efficient and effective assessment feedback to students. This was qualified by the requirement that rubrics be appropriately designed to provide meaningful feedback, with sufficient flexibility and depth to let students know how well they had performed against the required standards for achievement, and to give clear guidance on areas for improvement to support ongoing learning. In response to the assessment recommendations, the software package Turnitin has been implemented for all SPHCM postgraduate courses, allowing the use of GradeMark and online rubrics for improved efficiency and effectiveness in assessment processes and provision of feedback. These were trialled successfully in S1, 2012 in a large non-quantitative core course of approximately 300 students where assessment tasks needed to be marked for written responses. GradeMark and online rubrics are now being implemented across a number of postgraduate courses, with support from the educational design team within the school. Processes are being developed to evaluate the impact of these assessment practices and feedback mechanisms on staff and student experiences.
2 Continued use of student evaluation focus groups for each course at the end of each semester as an important adjunct to the UNSW-wide CATEI course evaluation process. This ensures that issues of concern and course quality can be monitored using a range of course evaluation methods that provide meaningful data on assessment needs and strategies.
3 Ongoing use of formal school forums where academic staff can share issues, approaches and understandings on assessment was seen as important. A new postgraduate learning and teaching discussion forum for academic staff


has been implemented and is well attended, with positive responses from staff through formal and informal feedback processes.
4 Sustained innovative and effective learning, teaching and assessment practices by academic staff within the school through informal sharing of experiences, support from the educational design team, access to online professional development resources and encouragement of open dialogue.

Although the formal Assessment Project focused on the three largest postgraduate programs in the faculty, there has been a flow-on effect of the project to assessment practices across all postgraduate programs. There is now a greater awareness of the need for authentic, sustainable and efficient assessment among academic staff.

Promotion of scholarship in assessment

Over the period of the Assessment Project a range of scholarly activities were held to improve the understanding of assessment and to provide a means for sharing good practices. Prior to the commencement of the project, the university had funded a Learning and Teaching Fellow in each faculty. The roles of the Fellows were to promote the scholarship of teaching and learning (SOTL) and to provide practical support to teaching staff seeking to improve their teaching quality. The L&T Fellows provided a means for inter-faculty collaboration and liaison with the university Learning and Teaching Unit. The Medicine L&T Fellow (Rachel Thompson) played an essential role in identifying key issues to be addressed in the project and in liaising with the other faculties to identify and overcome issues or barriers.

Early in the period of the project, the Medicine L&T Fellow instigated staff development activities to improve understanding of assessment, to provide a means for sharing good practice, and to


encourage an increased scholarly approach to assessment practice in the faculty. To support current teaching staff, the L&T Fellow organised seminars on curriculum development, good practice in assessment and the provision of effective feedback. These included seminars on key issues in assessment, the assessment of professionalism in medical students, how to provide effective feedback in the clinical setting, peer assessment, and item response theory in assessment.

Various forums on assessment were organised within the faculty, including a joint meeting with the Faculty of Science in 2011. The joint forum showcased best and innovative assessment practice and was evaluated very positively by attendees. The faculty also held, and has maintained, regular workshops on writing multiple-choice questions and interpreting item analyses, as well as workshops on using the assessment item bank following its release. An induction program for new teaching staff provides a basic introduction to student learning and introduces the concepts of reflective practice, stressing the importance of providing effective feedback to students.

Following the completion of the project, the L&T Fellow has continued to support staff in assessment through the development of an online portal providing evidence-based advice and support on assessment, specifically targeting new teaching staff and clinical assessors. The faculty has also sought to improve its services to staff in the evaluation of assessment practices. Expertise within the faculty has been increased through the appointment of a senior academic with extensive experience in quantitative analyses of assessment, and through external professional development activities for current staff.


Conclusions

The university Assessment Project provided the impetus and opportunity to review, evaluate and develop assessment practices within the Faculty of Medicine. The project was undertaken in an environment in which there was already a strong focus on assessment quality, although the quality of assessment methods was uneven across the faculty’s programs. The faculty’s approach focused on the needs of specific teaching programs, though all of the faculty’s schools are engaged in the undergraduate programs. The approach resulted in multiple projects whose common outcome for the faculty was a focus on scholarly activities to promote a greater understanding of assessment principles and practice.

Faculty staff now have a clearer understanding of assessment practices in all programs than they did before the project. The Assessment Project resulted in significant improvements in the assessment methods employed in both undergraduate and postgraduate programs, and an overall greater appreciation of the principles of, and good practices in, assessment. New developments, including greater use of technology in assessment, are also expected to lead to further improvements in the quality and efficiency of assessment.

References

Biggs, J & Tang, C (2011) Teaching for Quality Learning at University, McGraw-Hill and Open University Press, Maidenhead, UK.
Herrington, T & Herrington, J (eds) (2005) Authentic Learning Environments in Higher Education, Information Science Publishing, London.

Acknowledgments

Thank you to the academic, professional and technical staff who engaged and assisted with the Assessment Project. For the postgraduate Assessment Project, we acknowledge the invaluable contributions of Alan Hodgkinson, Sonal Bhalla and Jan Orrell.


11

The Faculty of Science: The challenge of diverse disciplines Julian M Cox, Louise Lutze-Mann, Jane Paton and Iona Reid

The Faculty of Science is a moderately sized faculty within UNSW; nationally, it is similar in size to many of its peers. It is a diverse faculty, with nine schools encompassing fundamental sciences such as mathematics, chemistry and physics, through the life sciences (biology, microbiology and biochemistry), human sciences (psychology) and environmental sciences (geology and geography), to professional areas such as optometry, aviation and materials science and engineering. Thus, it is almost self-evident that the portfolio of approaches to assessment within the faculty is extremely diverse: from quantitative to highly descriptive, from esoteric to professionally relevant and constrained.

Science is the research engine of UNSW, with faculty inputs and outputs (notwithstanding contributions by affiliates) being higher than those of any other faculty. Attaining and maintaining this position drives research as the primary focus among academic staff, through individual, school and faculty imperatives. However, despite the research focus, students rate teaching within the faculty very highly.


The faculty’s response and approach to the Assessment Project

The faculty’s approach to the Assessment Project developed somewhat organically, given a general perception of a lack of clarity in the institution’s desired outcomes for the project. Initially, the faculty’s efforts were driven by the project’s broad terms of reference as articulated by Professor Jan Orrell, the external assessment consultant. Based on Professor Orrell’s initial input, the faculty developed the belief that the focus of the project was primarily on improving the quality of assessment and to a lesser extent on efficiency.

However, very early on in the project, based on statements made by, or attributed to, senior university management, the faculty developed a sense that the true goal and desired outcome of the project was gains in efficiency in assessment. It was understood that senior university management wanted to reduce the workload associated with teaching and learning in order to provide academic staff with more time for research activities. There was also a varying level of discomfort among the small team responsible for the project, as not only the desired outcome, but also the target associated with that outcome, was unclear. That is, there were concerns that a disjunction existed between the perceived and real priority outcomes of the project (quality versus efficiency). Further, if efficiency was the major driver, what was to be the desired extent of efficiency gains?

A series of discussions in various governance forums, such as the faculty education and standing committees, ultimately resulted in the Dean articulating a highly aspirational target of a 25 per cent reduction in assessment workload while remaining cognisant of the need to maintain integrity in assessment. It is interesting that, as will be evident later in this chapter, the 25 per cent reduction was taken quite literally by some parts of the faculty and as a very broad guideline by others.
At the same time, it was agreed that efficiency could not be bought at the expense of quality.

The Faculty of Science 235

Out of this lack of clarity it became clear that the shape of the project was to be determined by each faculty, in keeping with the quite devolved governance model that continues to prevail at UNSW. Academic staff within the Faculty of Science appear to operate more positively under an open, bottom-up management structure, with support – rather than heavily imposed governance – from above. However, it is highly unlikely that a project driven purely from the bottom up would ever have gained traction; the broadest of imperatives from the top, combined with the freedom to shape the project locally, afforded the opportunity to identify and develop good practice in assessment and to formalise and socialise such practice. The balance between top-down imperatives and bottom-up shaping was reinforced within the faculty throughout the life of the project. In particular, the need to engage in the project, regardless of its ultimate outcomes, was reiterated in numerous formal governance forums (the faculty’s Education, Standing and Executive committees). It was made clear in those forums that improving the efficiency and effectiveness of assessment in Science was one of the Dean’s key performance targets, thereby highlighting the significance afforded the project by senior institutional management, including the Vice-Chancellor.

Beyond the broad aims and goals of the project, additional specific objectives or focuses were included, such as assessment in laboratory classes – in particular, determining the extent to which explicit and direct assessment of actual practical skills (as opposed to assessment of knowledge of, or reports on, practical exercises) was currently being undertaken. These additional objectives reflected the interests of the core project team, in particular the Learning and Teaching Fellow (LTF), who was assigned primary responsibility for oversight of the project, and the Associate Dean (Education) (ADE).
As the project proceeded, its focuses evolved; this evolution was especially notable with the loss of the LTF position. In fact, with the appointment of a Project Officer and the loss of the LTF, the project became broader, and focuses emerged through developments in the assessment portfolios of the various schools rather than being driven by the project team, as will be seen later in the chapter.

The team agreed that, regardless of the ultimate direction and focuses within the project, the existing assessment landscape had to be determined. To obtain baseline data on the assessment portfolio in the faculty, outlines for all active undergraduate and postgraduate courses were collected from each school in the faculty (currently nine, with the inclusion of some centres that offer courses). This stage of the process ultimately engaged the majority of academic staff members across the faculty, with Heads of School asked to promote the project to their staff, and those involved in governance (e.g. members of the Faculty of Science Education Committee) encouraged to drive the supply of information to the project team. By definition, the best sources of information on the assessment tasks within a given course are the individual staff members responsible for them. This requires something of a shift in thinking: a willingness to make public what is traditionally seen as a very private process. Even in multi-lecturer courses, conversations around the cohesion of teaching and learning activities, not to mention assessment tasks, appear to happen less often than is desirable, if not necessary.

The assessment tasks from each course outline were collated into spreadsheets showing the breakdown, by school, of aspects of assessment such as group work, tutorial or weekly tasks, mid-session exams, final exams, written tasks, presentations, practicals (including pre-work, lab books, reports, field trips and practical exams), other assessment, week of first submission, total number of assessments and number of assessments worth at least 10 per cent.
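The collation described above is straightforward to script. The following Python sketch shows one way a per-school summary of this kind might be built from task records extracted from course outlines; the field names and example values are purely illustrative, not the project’s actual schema.

```python
from collections import defaultdict

# Hypothetical records extracted from course outlines: one row per
# assessment task. Field names and values are illustrative only.
tasks = [
    {"school": "Chemistry", "course": "CHEM1011", "type": "final exam", "weight": 50},
    {"school": "Chemistry", "course": "CHEM1011", "type": "practical", "weight": 25},
    {"school": "Chemistry", "course": "CHEM1011", "type": "weekly quiz", "weight": 5},
    {"school": "Physics",   "course": "PHYS1121", "type": "final exam", "weight": 60},
    {"school": "Physics",   "course": "PHYS1121", "type": "mid-session exam", "weight": 20},
]

def summarise(tasks):
    """Per-school breakdown: task counts by type, tasks per course,
    and number of tasks worth at least 10 per cent."""
    summary = defaultdict(lambda: {"by_type": defaultdict(int),
                                   "per_course": defaultdict(int),
                                   "worth_10_plus": 0})
    for t in tasks:
        s = summary[t["school"]]
        s["by_type"][t["type"]] += 1
        s["per_course"][t["course"]] += 1
        if t["weight"] >= 10:
            s["worth_10_plus"] += 1
    return summary

summary = summarise(tasks)
```

For the illustrative rows above, `summary["Chemistry"]["worth_10_plus"]` counts two tasks, since the 5 per cent weekly quiz falls below the 10 per cent threshold.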
Based on data from one school, a report was drafted in consultation with the external consultant, describing and critiquing the profile of assessment tasks. This report and the subsequent process served as a pilot for this stage of the project for all schools.


The report was forwarded to the school for circulation, verification of the assessment portfolio and comment. At this point, the project team sensed a certain resistance to the process, with errors in the report seen as the fault of the team rather than as products of a lack of clarity or simple misunderstanding. While this negativity could have proved problematic, the process was also seen as affording individual academic staff the opportunity (whether taken or not) to reflect on what they actually did in assessment, which the project team viewed as a positive. It also highlighted the need for very clear and carefully crafted communication, to ensure not only that the information gathered accurately represented the assessment profile of each school but also that staff engaged strongly and positively with the project team in each school.

After correction of the report, based on feedback from the school, the project team met with staff members from the school. The meeting with the first school involved the Deputy Head and a good range of key teaching personnel, Professor Orrell and all members of the Science Learning and Teaching Unit (SLTU): the ADE, the Manager, the Learning and Teaching Fellow and the Project Officer. This first meeting was quite instructive, as it appeared that only those staff members with either a governance role (e.g. Head or Director of Learning and Teaching) or a strong interest or role in learning and teaching were in attendance. While formal communication included the Head of School, the responsiveness of heads to processes such as this can be variable. The team therefore also engaged known key staff members in each school who could in turn engage appropriate staff in the process. In some cases this staff member was a true change agent or champion for learning and teaching, while in others the staff member served simply as a conduit to other staff.
Despite the presence of the staff most likely to engage with the process, there was a sense of tension at the meeting, with concerns about the work required to bring about change and probably a certain amount of resentment: the existence of the project itself suggested that the faculty’s practice in assessment was somehow wrong and needed to be fixed. Further, there was still, at least at the outset, a sense of the project being a top-down process. However, staff members were eventually convinced that the true purpose of the team was to maintain quality while reducing workload, particularly for staff, and affording students a rich assessment experience. There followed an enhanced capacity to engage an increasing proportion of staff in conversations around assessment.

Having a neutral and expert third party in the form of the external consultant proved very positive throughout the early stages of the project. Professor Orrell lent authenticity to the purpose of the project not only at the institutional level when engaging with the ADEs, but also at the faculty level, and not only with the project team but ultimately with the whole faculty, helping us understand the connections between quality, efficiency and the maintenance of integrity in assessment. She also provided useful input to the team’s view of what the project should be and then supported that view at faculty level. Perhaps most importantly, she lent additional credibility to the school audit and reporting process, not only through the knowledge she had contributed to the development of the school report but also through her critique during meetings, given that she came with recognised high-level expertise but no vested interest.

That said, once the project team felt comfortable with the audit–report–meeting process through its development with one school, the team had the confidence to proceed with the remaining schools without Professor Orrell’s direct input. This comfort came from having accomplished and active teachers as the major part of the team – teachers with a particular interest in assessment – combined with the capacity to leverage that expertise in analysing the assessment portfolios of the other schools.
Further, the benefit of distance afforded by Professor Orrell’s initial engagement was offset by the familiarity with the faculty context that the members of the project team brought to the task. Each school was encouraged to use the data and reports to drive any changes that could be accommodated in terms of quality and/or efficiency.

Throughout 2012, to follow up on any changes, a spreadsheet was sent to all undergraduate course coordinators, with a request to record changes in assessment practices from 2009 to 2012, aligning any changes with the efficiency and quality metrics articulated by Professor Orrell. While strictly speaking the project commenced only in 2010, and in earnest in 2011, it was clear from the school meetings that a few staff had made quite substantial changes to assessment practices prior to 2010, so the 2009–12 window was used to capture all recent changes to assessment. There was some tension around this updating process, primarily because it increased workload and felt repetitive. While some academic staff members were happy to furnish the information, others were quite reluctant and resentful, wondering why they had to keep providing information (even though the reasons had been made clear in formal communications). In addition, some staff saw (re)evaluating their assessment tasks against quality and efficiency metrics as arduous, despite the provision and clear definition of those metrics. Overall, the identification and engagement of key staff in the earlier phase of the project made this update process much easier than the initial data collection had been. It is likely that the staff who engaged best came to realise that there was merit and value in the project, in that it gave them a better understanding of their own practice, as well as that of others.

To provide a broad perspective on the data, and a narrative on learning and teaching as well as assessment, the Project Officer, Dr Jane Paton, met with each Head of School in 2012. While it was clear that the project captured the staff perspective on assessment, it was considered critical that the student voice also be captured. In 2012, this was done using both focus groups and an online survey.
Focus group sessions were designed by SLTU staff (the ADE, the Manager and the Project Officer) and conducted by the SLTU Manager and the Project Officer. The sessions engaged students across schools, programs and stages or years of study: Year 1 students; Years 2, 3 and 4 students; students from lab-based courses; international students; postgraduate coursework students; and higher degree research students. We were interested in how students from specific groups viewed assessment in their learning, particularly when transition was involved. This included how Year 1 students viewed assessment against the background of recent secondary school assessment, and how assessment in the undergraduate curriculum had equipped students for honours and for research. Following a preliminary analysis of the focus group outcomes, an online survey was developed and administered to students across the Faculty of Science. This covered a broad range of concepts around assessment, unpacking in finer detail the major issues identified through the focus groups.

Through his role, the ADE used the Assessment Project, and presentations and discussions in various forums, as a driver to innovate and collaborate with Engineering and Medicine on technology-enabled assessment. Using faculty-level Science Foundation (SCIF) courses, among others, the aims were to enhance the use of peer review through the Moodle LMS’s Workshop tool, to introduce or enhance reflective practice and a greater sense of professional ‘self’ through use of an e-portfolio, and to incorporate early internationalisation of the curriculum through classroom and ‘homework’ activities relating to global perspectives on science. These served more generally, at faculty level, as examples of innovation in assessment, particularly innovation exploiting technology.

Finally, the network of ADEs across UNSW, meeting alone or with the DLT and the Deputy Vice-Chancellor (Academic) (DVCA), contributed at least indirectly to the outcomes of the project. Informal monthly meetings of the ADEs provided a degree of support throughout the project. Indeed, the ADEs as a group possess a spirit of collegiality perhaps unmatched within UNSW. There is never a sense of competition but one of sharing.
Healthy academic debate is always conducted in an atmosphere of mutual respect, despite differences among individuals and faculties. Such conditions engender deeper levels of engagement and industry in tasks set for or by the group than might be expected, given the total workload managed by each ADE. The regular meetings with the DLT and the DVCA also proved supportive, in that the first meeting set a very positive tone, not only in acknowledging the progress being made in all faculties but also in celebrating the diversity of the faculties’ achievements. While some issues seem peculiar to specific faculties, it became clear that many are shared, and that the operation of this group will provide opportunities to share both problems and solutions. Whether the freedom afforded the ADEs, by the DVCA in particular, was deliberate is interesting to ponder, but regardless the outcome was very positive.

Outputs and outcomes of the Assessment Project

Databases and reports

As both an output and an outcome, the faculty now holds a comprehensive database of science assessment practices for both undergraduate and postgraduate programs. Information on assessment collected early in the project was used to generate a poster, ‘Assessment in the Faculty of Science’, which was presented at the Learning and Teaching Forum on 16 September 2011. The poster highlighted the rich diversity of assessment tasks across the faculty. The database now incorporates data sets beyond the original collection, including the 2012 data collection. The faculty now has a spreadsheet describing the assessment tasks in the undergraduate courses offered by all schools, with a wide range of quality and efficiency metrics. Further, course-by-course changes in assessment have been collected and collated from surveys of practice in 2009–10 and 2012. While these data show that some schools made considerable changes to their assessment practices just prior to and during the project, other schools still have the potential to enhance theirs.

From the initial report crafted to commence the conversation on assessment, a final report has been compiled for each school outlining the processes and initiatives taken by the school prior to and during the Assessment Project, and those proposed for the future. Each report includes examples of innovation in assessment and teaching. A draft executive summary of the project provides insights into the status and highlights of assessment in each of the schools in the Faculty of Science, as noted below.
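The ‘Of N courses, assessment has changed in M (x per cent)’ figures quoted for each school below are simple to derive from a change-tracking spreadsheet of this kind. A minimal Python sketch, with hypothetical field names and invented rows:

```python
# Hypothetical rows from a 2009-12 change-tracking spreadsheet:
# one row per course, flagging whether any assessment change was recorded.
rows = [
    {"school": "Aviation", "course": "AVIA1111", "changed": True},
    {"school": "Aviation", "course": "AVIA1201", "changed": False},
    {"school": "BABS",     "course": "BABS1201", "changed": True},
    {"school": "BABS",     "course": "BABS2202", "changed": True},
    {"school": "BABS",     "course": "BABS3281", "changed": False},
]

def change_rates(rows):
    """Return {school: (courses changed, total courses, per cent changed)}."""
    totals, changed = {}, {}
    for r in rows:
        totals[r["school"]] = totals.get(r["school"], 0) + 1
        changed[r["school"]] = changed.get(r["school"], 0) + int(r["changed"])
    return {s: (changed[s], totals[s], round(100 * changed[s] / totals[s]))
            for s in totals}

rates = change_rates(rows)
# rates["BABS"] -> (2, 3, 67)
```

The course codes above are invented for illustration; the real spreadsheet covered all active courses in each school.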

Aviation

While there is no formal accreditation process in Aviation, assessment in a number of courses is dictated by legislation under the Civil Aviation Safety Authority, so there are limited options for changing assessment practices in those courses. Much of the assessment in the professional programs is informed by industry and is quite rich and authentic. Recent discussions, concurrent with the move from the Blackboard to the Moodle LMS, will result in increased and richer use of technology. Of 19 courses, assessment has changed in nine (45 per cent) since 2009–10.

Biotechnology and Biomolecular Sciences (BABS)

Staff members in BABS have been proactive in improving the efficiency of laboratory classes, including through the incorporation of highly innovative virtual experiments. Geoff Kornfield, a Professional Officer in BABS, now spends 50 per cent of his time on teaching development. Along with Louise Lutze-Mann and others, he has been developing virtual exercises across a range of disciplines on the adaptive eLearning platform (AeLP). The school also has a very active and comprehensive teaching and learning governance process, with not only a teaching and learning committee that meets regularly but also a teaching quality committee that is more focused on innovation in teaching and learning, including assessment. Of 34 courses reviewed, assessment has changed in 28 (82 per cent) since 2009–10.


Biological, Earth and Environmental Sciences (BEES)

This school looked to reduce assessment, concluding that the best approach was a reduction in the number of courses, especially small courses and those with excessive overlap. This resulted in a 25 per cent reduction in the number of courses (from 73 to 55), with a corresponding 25 per cent reduction in assessment tasks. The net result of fewer courses and less assessment has been a reduction in the cost of casual teaching (mainly laboratory demonstrations) from more than $420k in 2009 to less than $350k in 2012. There has also been a reduction in the professional and technical staff required for course support, allowing reallocation of those resources to OH&S compliance and research support. Of 61 courses, assessment has changed in 49 (78 per cent) since 2009–10.

Chemistry

In response to decreases in both teaching space and the number of academic staff available to teach (many staff being on research-only fellowships), three major changes to teaching, particularly at the first-year level, were introduced in the School of Chemistry. From 2009, academic supervisors were removed from all Year 1 laboratory courses and replaced by casual but senior and experienced lab supervisors; in 2011, the school ran 16 Year 1 labs in S1 and 17 labs in S2, with the use of the casual supervisors saving 363 academic contact hours. A single large lecture class for all Year 1 courses was introduced, saving 72 hours of lectures. All level 2 and 3 courses are now run only once a year, saving another 36 hours of lectures, and honours coursework is now integrated with CHEM3201, saving 30 hours of lectures each year. These figures do not include time for lecture preparation or for setting and marking assessment tasks, including examinations. Overall, these initiatives are estimated to have saved more than 500 academic teaching hours per annum, a saving of approximately $150,000. The implementation of guidelines for assessment commenced in 2012, and the predicted outcomes include:


• fewer student assessments
• increased use of computer-assisted teaching and assessment methodologies, providing immediate feedback to students and reducing academic time spent on assessment
• increased use of short answer templates for assignments and reports, streamlining marking time for academics and providing quicker feedback to students (a reasonable estimate is a 50–75 per cent reduction in the time students take to complete such assessments and a 50–80 per cent reduction in marking time for the academic marker)
• increased use of template answer sheets for final examinations
• the introduction of self-knowledge tests for students, including the use of online knowledge testing obtained from textbook suppliers, resulting in improved student awareness of what they understand
• marking of the majority of laboratory reports in the laboratory.

In 2013, it is estimated that 800–1200 academic hours will be freed by these changes alone, resulting in a saving of $56,000. Of 31 courses, assessment has changed in 29 (94 per cent) since 2009–10.

Materials Science and Engineering (MSE)

In 2010, MSE undertook a complete revision of Years 2–4 of its Bachelor of Engineering programs. As a result, significant changes were made to the programs, in particular the removal of courses comprising three units of credit (3UoC courses) and a consequent decrease in the number of courses offered. Prior to 2011, Program 3135 had a total of 46 courses; the new program comprises 24 courses. An aspirational target of a 25 per cent reduction in assessment was also set, and a feature of the program revision is a 25–30 per cent reduction in assessment tasks across Years 2–4 of the program. This reduction was realised in 2011 for Year 2 and in 2012 for Year 3, and will be realised in 2013 for Year 4.


Mathematics and Statistics

As with the School of MSE, Maths has removed 3UoC courses, consolidating some into 6UoC courses. This has created some problems, including in the assessment, marking and grading of students where courses have been combined rather than integrated. In 2009, the School of Mathematics and Statistics participated in a benchmarking exercise with mathematics schools from three other universities. Three Year 1 courses and a Year 2 course were evaluated. Variables assessed included face-to-face lecturing time, tutorials, practicals, number of assessments per session, hours to mark assessments and the staff involved in assessment (i.e. academic versus casual). The general conclusion was that the school was already very efficient, with little financial benefit likely to derive from altering teaching and assessment practices. Of 59 courses reviewed, assessment has changed in 20 (34 per cent) since 2009–10.

Optometry and Vision Science

Since the major program in the School of Optometry and Vision Science is professionally accredited, the curriculum, including assessment, is somewhat constrained. As part of its most recent accreditation process, the school undertook a program review. The Program Review Project included identifying gaps and overlaps in program curricula and mapping course and program assessment tasks. The project identified areas where efficiency, congruency and alignment could be improved, leading to reduced staff workload. This helped program authorities and course coordinators to understand the overall pattern of learning in each program and to determine the contribution of each course and the adequacy of the curriculum with respect to university-wide and program goals and professional competencies. Of 28 courses reviewed, assessment has changed in 24 (86 per cent) since 2009–10.


Physics

Rising student numbers and falling staff numbers have necessitated efficiencies in the way Year 1 Physics assessments are conducted. The number of students in that year increased significantly over 2007–11, with 3443 students enrolled in semesters 1 and 2 in 2011, compared with 2139 students in 2007. However, the efficiencies introduced since 2007 have been effective in reducing the time spent marking and the associated marking costs. For example, in 2007 the average time spent on marking per student per course (PHYS1111, 1121, 1131, 1221 and 1231) ranged between 26 and 33 minutes. By 2009, this range had dropped to between 13 and 20 minutes, and by 2011 it had fallen to between 10 and 17 minutes: a reduction of 50–60 per cent in the average marking time. The average marking time for PHYS1211, determined for 2008–11, remained unchanged at 26 minutes over this period; however, as PHYS1211 no longer includes a mid-session exam, the average marking time in 2013 is expected to be approximately 10 minutes.

A cost analysis of Year 1 marking over this same period was also carried out. Academics in Physics are assigned a certain number of marking load units (MLUs; 1 unit = marking a three-hour examination), and the total number of MLUs spent on marking Year 1 papers increased from 524 in 2007 to 1022.5 in 2011, partly owing to increasing enrolments. Casual staff members cover any shortfall in the academic staffing required for marking. From 2009 to 2011, the amount spent on casual markers decreased from $10,993 to $6040 (data are unavailable for 2007–08). Of 47 courses reviewed, assessment has changed in 12 (26 per cent) since 2009–10.

Psychology

In 2010, the School of Psychology revised the ‘School of Psychology Guidelines for Assessment and Feedback’ approved in 2005. While these guidelines inform assessment practice locally, learning and teaching activities, including assessment, are largely constrained by the need to align with professional accreditation requirements. Of 23 courses reviewed, assessment has changed in 19 (83 per cent) since 2009–10.

Student opinions

As outlined in the approach, the student voice was sought through both focus groups and online surveys, with data from the former used to inform the latter. Broadly, the data from the focus groups are reflected in the data from the surveys. However, although the focus groups preceded the online surveys, the time needed to transcribe the focus group recordings means their findings are yet to be reported; the major findings from the surveys are provided below.
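Survey findings of the form ‘only 7.6 per cent of students chose always’ are percentage breakdowns of single items. A minimal sketch of that tabulation, with invented responses (the option labels here are assumptions for illustration, not the survey’s actual wording):

```python
from collections import Counter

# Invented responses to one Likert-style survey item, for illustration only.
responses = ["always", "often", "sometimes", "often", "never",
             "sometimes", "often", "always", "rarely", "often"]

def percentage_breakdown(responses):
    """Per cent of respondents choosing each option, to one decimal place."""
    counts = Counter(responses)
    n = len(responses)
    return {option: round(100 * c / n, 1) for option, c in counts.items()}

breakdown = percentage_breakdown(responses)
# breakdown["always"] -> 20.0
```

The same tabulation, run item by item over the real survey data, yields the percentages reported in the findings below.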

Areas of satisfaction

Students expressed satisfaction at the consistency and equitableness of assessment and marking procedures within each course, agreed that assessments developed generalised and specialised laboratory skills (though there were questions as to the appropriateness of the assessment of these skills), and affirmed that assessments tested their:
• ability to think, analyse and solve problems
• knowledge of proper experimental design
• ability to work independently
• effectiveness as a team member
• ability to find, validate and synthesise information from a range of sources
• ability to continue their learning in the future
• development of relevant IT skills.

Areas of dissatisfaction

Students expressed dissatisfaction with:
• the variability of skills between tutors or demonstrators with respect to provision of feedback, technical expertise and detailed explanations
• the failure to organise tasks to avoid competing submission dates
• the inconsistency of workload across courses
• the fact that not enough detailed feedback on assessment was provided
• inconsistency in the quality of the feedback
• unfairness in the marking of group work.

Aspects about which a large proportion of students were undecided

As a group, students were undecided as to:
• consistency and equivalence of assessment and marking procedures within programs
• sufficient provision of cumulative marks by lecturers across the session
• whether the time and workload required to complete an assessment task was reflected by the weighting of the assessment task
• whether group work was used too much
• whether group work demanded too much out-of-class time
• self-selection for membership of groups
• whether group work had been a positive experience
• the importance of group work
• whether their interpersonal skills had improved through participating in group work
• whether they had learnt to write for a range of audiences
• whether multiple choice questions (MCQs) were a good way of measuring learning.

Other findings

Students identified the top five most important features of assessment as (1) usefulness of feedback, (2) understanding of content, (3) number of marks, (4) amount of feedback and (5) timing.


Development of skills and workload were close behind. Of lesser importance, though still rated quite highly, were additional depth of content, promptness of feedback and the ease of completing the task.

Findings related to workload:
• Lecturers are not consistently discussing their expectations about the time and effort needed to complete each task (only 7.6 per cent of students chose ‘always’ as their response).
• Students are not against assessment tasks that do not have marks attached (only 10 per cent indicated they would never complete them). It is reasonable to suggest that engagement with such tasks would depend on time and on prioritising them against other assessments that do have marks attached.
• The amount of time spent on an assessment task depends strongly on what other assessments are due at the same time (more so than on the mark value of the assessments).
• Most students believe that the workload in a course is more or less as expected or higher than expected, with most accepting the average weekly workload for a typical 6UoC Science course as 8–10 hours per week.

Findings related to feedback:
• Students tend to believe that more time should be spent in class on the provision of feedback.
• Students generally would only resubmit work that incorporated feedback if a mark were associated with it.
• The highest rating (rating ≥ 7) methods of providing feedback were (1) mark and comment, (2) receiving the exam paper and having a time allocation for individual consultation, (3) marking criteria or schemes, (4) receiving the exam paper and having a general discussion of class performance and problems, (5) online quizzes with links to lectures or textbooks for the correct answer, (6) online model answers and (7) in-class marking and discussion.
• Less than half of the students indicated that they had ‘always’ received feedback on assessment tasks.
• Students do not actively seek out feedback if they fail to receive it; responses to an open-ended question on this aspect reflect some negative perceptions of lecturer behaviour and attitudes.
• Students will generally use feedback to modify the way they attempt future assessments and to engage in critical review of their own work.

With respect to group work, students believe that:
• it is more efficient if each student has a defined task (but whose responsibility is it to determine or manage this?)
• it should always include something like peer review
• finding time for group meetings outside of class time is a major issue
• it should not be used for major pieces of assessment – though responses to open-ended questions suggested quite the opposite
• the value of a group project is not always proportionate to the work required
• group work does not always allow study of a topic in greater depth.

Findings related to context and relevance:
• Students agreed that their motivation to work on assessment tasks would be stronger if lecturers outlined the relevance of the tasks to their future professional career.
• Students can generally see how the content and skills in one course are related to other courses.
• Students believe that, to some extent, exams are too biased towards rote learning and recall, and that examination questions are often very specific, with less emphasis on general principles and application.
• Honours students have found that the complexity of the assessment tasks and their relevance to their career increased across the years, stages and levels of study, and that there has been a good balance of different types of assessment tasks. They are slightly less sure that they are prepared for the next step in their professional career.
• Postgraduates do not see a clear differentiation between themselves and undergraduate students with respect to the types of assessment tasks used and assessment of their higher level of skills and experience (the majority were undecided). They did feel that there has been a good balance of different types of assessment tasks.

General findings:
• Students strongly believe that marking schemes or criteria should always be provided with larger assessment tasks.
• Students believe that pre-work helps preparation, performance and understanding of content.
• Only a small percentage of students feel they are ‘always’ given the opportunity to practise tasks before submitting similar tasks of higher value (10.5 per cent) or have ‘always’ been able to access past exam papers (7.6 per cent). They do not always understand or accept the reasons for the latter.
• Students do not find online discussions an effective way of learning.
• Students believe that assessment makes them think more clearly.

Overall, these student comments are invaluable for the faculty. There is clear evidence of heterogeneity of assessment across different courses. Both the general and the specific comments were consistent with the school data obtained and with anecdotal impressions.

Innovation and leadership

During the Assessment Project, the ADE’s participation in the Moodle Pilot Project afforded the opportunity to transition peer review from a combination of paper-based and less stable online platforms to the Workshop (peer review) activity within Moodle. Further developments represent an interesting example of a cycle of development and a community of practice operating symbiotically. Presentation of the use of the Workshop activity in one of the many learning and teaching groups (important in themselves in the continuous improvement of learning and teaching practice) highlighted its benefits but also its limitations. In particular, the presentation described assessment tasks that incorporated peer review and could potentially be supported using the Moodle Workshop if its functionality could be extended. An Educational Technologist from Engineering took these concepts and drove the programming of the extensions. This was possible because Moodle is open source software, available for modification and located on a dedicated server, and because the Associate Dean (Academic) of Engineering was strongly supportive of technological innovation. Once the extension software had been developed, it was implemented in courses to test it live. It now exists as the ‘UNSW Workshop’ activity in UNSW Moodle, and the extensions are to be incorporated into the Moodle core. Without all of these connections, development of the extensions might not have happened.

The extensions include support for team or group projects, the ‘calibration’ of reviewers against sample reviews and the automated upload of a specific set of reviewers using a .csv file. Further information on the UNSW enhancements to the Moodle Workshop tool can be found in our paper presented to the Ascilite conference in 2012 (Cox, Posada and Waldron, 2012).

In addition, with LTU staff members acting as catalysts, staff across Science and Medicine have introduced e-portfolios as an assessment task in several courses, with a view to a true program-wide implementation from 2014 in the Bachelor of Medical Science.
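The automated upload of reviewers described above is driven by a .csv file. The chapter does not document the file's layout, so the column names and student identifiers below are invented for illustration; a minimal sketch of producing such an allocation file might look like this:

```python
import csv

# Hypothetical reviewer-allocation table. The actual columns expected by
# the UNSW Workshop extension are not documented in this chapter;
# 'submission_author' and 'reviewer' are assumed names and the zIDs
# are invented.
allocations = [
    {"submission_author": "z1234567", "reviewer": "z7654321"},
    {"submission_author": "z1234567", "reviewer": "z1111111"},
    {"submission_author": "z2222222", "reviewer": "z7654321"},
]

with open("workshop_reviewers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["submission_author", "reviewer"])
    writer.writeheader()
    writer.writerows(allocations)
```

Generating the file programmatically, rather than by hand, makes it easy to enforce constraints such as every submission receiving at least two reviews, or no student being allocated their own work.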
The aim of the pilot projects in 2012 was to examine practice by individual staff members and the responsiveness of students. This informed practice for implementation in the program-wide project. At the same time, the focus of the portfolio – from broad professional and reflective processes around professional skills and graduate attributes and capabilities, to specific career readiness – will flow through to the program.

The practice around these technological applications has been and is being widely disseminated, with presentations in multiple forums both within and beyond UNSW. For example, the use of peer review, in general and in Moodle, in the context of Year 1 Science courses, was presented at the Learning and Teaching Forum in S2, 2011, as well as at UNSW Canberra soon after the Forum and at their inaugural Learning and Teaching Day on 9 May 2012. Further, use of the existing and modified Moodle Workshop activities and the Mahara ePortfolio tool was presented at both MoodlePosium and the annual Ascilite conference, with a full workshop on the Workshop activity conducted at MoodlePosium. Within UNSW, both innovations have been presented in a range of forums.

It is the availability of these forums that is particularly important, providing the means for socialisation of new and interesting practices, at different levels, to different audiences across academic and general staff. This has afforded the development of a greater community of practice among not only academic staff but also educational technologists and developers, across several faculties, which is now leading to wider use of the Workshop activity and richer conversations about the use of technology in learning and teaching, including tools for assessment. The Assessment Project appears to have injected critical energy into these communication channels and forums.

Outreach to other faculties and universities

The Faculty of Science has conducted several workshops, alone or in collaboration with other faculties (notably Medicine and Engineering), during the period of the Assessment Project. Aside from a range of communication activities, some of which are described above, several workshops have involved staff from Science in their organisation and/or their presentation, as follows:
• ‘Undergraduate Course Assessment’
• ‘Assessment within the Practical/Clinical Teaching Environment’
• ‘MCQs! Let Multiple Choice Questions Take the Work Out of Exams’, presented by Professor Orrell
• ‘How to Get the Most Out of Audience Response Systems: Using clickers in large class teaching’, presented by Dr Louise Lutze-Mann
• ‘Science/Engineering Learning and Teaching Forum’ (focus on Moodle activities to support assessment; presentation and workshop).

Based on current activities, we plan to run further workshops, such as one on MCQs, including analytics to develop high-quality item banks, and UNSW workshops on the Moodle Workshop activity. UNSW Science is taking the lead in a multi-institution bid for funding from the Office of the Chief Scientist for Smart Science, to build modules for self-directed learning and self-assessment across a range of disciplines and applications on the Adaptive eLearning Platform (AeLP). The faculty already has an impressive track record in the use of this technology. Science has also taken the opportunity to build a small library of literature resources on various aspects of assessment for use by all members of Science.
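The analytics for developing high-quality item banks mentioned above typically mean classical item analysis. As an illustration only (the response matrix and thresholds below are invented, not faculty data), a sketch computing item difficulty (proportion correct) and a simple upper–lower discrimination index:

```python
# Classical item analysis on a 0/1 response matrix: rows are students,
# columns are MCQ items. The data here is invented for illustration.

responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
]

def item_stats(matrix, group_fraction=0.27):
    """Difficulty (proportion correct) and upper-lower discrimination
    (proportion correct in the top group minus the bottom group)."""
    totals = [sum(row) for row in matrix]
    ranked = sorted(range(len(matrix)), key=lambda i: totals[i])
    k = max(1, round(group_fraction * len(matrix)))
    lower, upper = ranked[:k], ranked[-k:]
    stats = []
    for j in range(len(matrix[0])):
        difficulty = sum(row[j] for row in matrix) / len(matrix)
        discrimination = (
            sum(matrix[i][j] for i in upper) / k
            - sum(matrix[i][j] for i in lower) / k
        )
        stats.append({"item": j, "difficulty": round(difficulty, 2),
                      "discrimination": round(discrimination, 2)})
    return stats

for s in item_stats(responses):
    print(s)
```

Items that almost everyone answers correctly regardless of ability (high difficulty value, near-zero discrimination) are candidates for revision or removal from the bank; the 27 per cent grouping fraction is a conventional choice, not a faculty standard.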

Outcomes of faculty efforts to improve assessment

In school reports on assessment and learning and teaching practices, a number of key features or issues were identified as worthy of consideration as the Assessment Project progressed. The faculty needs to consider more fully the faculty-wide portfolio of assessment tasks, with respect to the range and diversity of tasks, the mix of formative and summative assessments, and the provision of feedback, in particular the form feedback takes and who provides it. Assessment also needs to be mapped, with regard to timing and in relation to other aspects at program level, such as coherence and increasing sophistication.

There is an increasing need to map curriculums at the program level against discipline and even sub-discipline threshold learning outcomes, as well as aligning them with program-level outcomes driven by UNSW and the Australian Qualifications Framework. Conversations will need to be held between the academic staff responsible for those disciplines and sub-disciplines to ensure that students are meeting these outcomes. Thus teaching and learning practices cannot be held privately, confined within a course and withheld from the curriculum map. Certainly, the extensive discussions around the review and revision of postgraduate coursework programs have included defining program learning outcomes and recognising performance against them, and have thus (re)ignited consideration of assessment tasks that permit their demonstration.

Sequencing of assessment needs closer attention. That is, the timing of assessments needs further consideration to avoid deadlines being set in close succession, both within and between courses.

Internationalisation of the curriculum is much discussed, and there is increasing focus at UNSW on global education. We need to be clearer on teaching, learning and assessment in this area. It is easy to give niche examples of global education, such as student exchange, but we have not achieved a coherent view of what global education would mean for every student in the faculty.

Assessment in laboratories

The faculty needs to focus, or perhaps re-focus, on assessment in laboratories, particularly the explicit assessment of practical laboratory skills. In Science, laboratory-based activities form a critical component of the curriculum in most discipline areas and in many courses (including field and clinical work), with the possible exception of mathematics (excluding there the use of computer laboratories). While psychomotor activities are taught, and some (in some cases, many) opportunities are given for practice and, potentially, mastery, there is often no formal or explicit assessment of the development of key laboratory skills. Assessment of the laboratory components of courses mirrors assessment in the lecture components, often involving examinations and the completion of written assignments such as laboratory reports. This proposal goes somewhat beyond assessment, requiring a change in teaching and learning culture, with a focus on process rather than product. There is potential to expand the use of virtual laboratories built, for example, on the Adaptive eLearning Platform (AeLP), to other parts of the faculty – currently this usage is concentrated in the School of BABS.

Feedback and group work

The faculty needs to focus on two major issues deriving from the student voice aspects of the project: feedback and group work. Issues with group work include approaches to group selection, management and assessment of group processes (as opposed to the final product), and allocation of marks (and weighting) for contributions to group tasks; these issues go beyond assessment per se. While assessment of group or teamwork skills should be examined, management of the group experience relates to the affective component of curriculum and has a broader impact on the student experience. This requires teachers to be engaged in the whole of the curricular process around the group process and task, rather than simply setting the task and marking the product; thus it requires a change in thinking, culture or practice. Certainly, poor experience of group work early in a program of study is likely to have an impact, perhaps significantly, upon student engagement with group work in later courses. Similarly, while feedback in itself closes the loop on assessment, factors such as timeliness have an impact on the student experience.


Sophistication, authenticity and coherence of assessment

There is evidence that assessment increases in sophistication with the level of course through some major sequences in Science, particularly accredited programs – but not all programs are as coherent and sophisticated as they could be. To bring all courses into conformity with the best requires a change in culture and a more public or community-spirited mentality in relation to teaching and learning, including assessment practices, particularly outside the more professionally oriented programs and academic units. This needs further investigation, given the lower than expected ratings for some indicators of student satisfaction.

While there is evidence of increasing sophistication in assessment throughout programs, Science assessment still relies, even at upper levels, on MCQs. Analytics need to be applied to question sets to ensure quality and consistency. In addition, or as an alternative, there is a need to approach the community as a source of questions (e.g. item banks) that may already have been analysed for quality.

Along with sophistication, more contextualisation of knowledge and greater authenticity of tasks should be considered. Schools offering professional or accredited programs or professionally oriented courses are particularly strong on context and authenticity. There is potential to go to the community, including graduates, for examples that can be used as the basis for assessment tasks, including examination questions that drive higher order thinking and learning. Similarly, assessment items such as more open questions may derive from community groups such as discipline or regulatory bodies (e.g. professional associations). In some cases these items are well tested, and the faculty could leverage assessment items from other parts of the world to support, align or certify learning through assessment. A complete solution to some of these problems is almost impossible in flexible, generalist programs, as opposed to those that are highly structured.
Mixed majors, double majors and flexibility in course choice within majors mean that it is difficult to ensure that students do not miss, and cannot avoid, certain forms of assessment (e.g. oral communication tasks). But efforts to resolve these issues can expose students to ‘overkill’ when the same assessment modes are employed in too many courses. The challenge is to ensure program outcomes when such a wide range of courses can be taken. While Science has begun the process, the faculty needs to continue to map the most common pathways through the various major sequences, identifying the courses most commonly taken and using these courses to provide coherence in assessment. Approaches taken by some of our schools (e.g. Psychology) or by professional accreditation bodies (e.g. the Institute of Food Technologists) can inform curriculum (including assessment) mapping. Ultimately, the faculty may need to look at program design, from a curricular viewpoint, with respect to technical content or knowledge and also to how the attainment of all threshold learning outcomes – including discipline knowledge, skills and even attitudes – can be taught and learned, assessed and warranted.

Governance and socialisation

There is good practice in curricular quality assurance throughout Science, but governance around teaching and learning, including assessment practices, is somewhat variable, and best practice is yet to be communicated across the faculty. School advisory committees provide discipline expertise that can be leveraged to support review of majors, facilitating deeper analysis during academic program review. Greater attention needs to be paid to the role of teaching and learning committees, ensuring appropriate representation and enhancing communication with stakeholders. Such communication is generally well managed, though there is always opportunity for improvement, particularly with respect to so-called ‘service teaching’ for other faculties. This is an important part of Science’s teaching activity, of which assessment is a critical component.


Further, there is a need to consider the less formal but no less important socialisation of teaching and learning practice, including assessment, both within schools and among staff across and outside the faculty. This needs to move beyond formal governance to ensure greater staff engagement. Rather than having senior staff speak at forums – and indeed speak at staff – channels of communication need to be considered carefully, crafted to suit almost individual circumstances. In some cases staff members who serve as champions or change agents are able to effect change through either innate or explicit leadership qualities or roles, while other staff members need a forum in which they can promote their innovations in assessment. For still others, such forums are places to obtain and assimilate ideas, and ultimately to develop and change their practice. To encourage socialisation of best assessment practice, the faculty could introduce a recognition and reward system for best practice in assessment, ensuring as part of such a system that good practice is disseminated throughout the faculty and beyond. In the first instance, feedback might be a focus for such a scheme.

Marking

There are issues around the production of marks and grades, including moderation (and definitions thereof), scaling, and norm- versus standards- or criterion-based assessment. It appears that marks are still being given in some courses purely for attendance. When participation is marked, the means by which marks are allocated needs to be transparent, equitable and reliable. In particular, the dynamics of the classroom must be taken into account to ensure inclusivity. If oral communication as a graduate attribute is to be warranted, participation must be carefully assessed to ensure that all individuals have the opportunity to achieve.

The amalgamation of 3UoC courses into 6UoC courses has created difficulties for assessment, including grading. In some cases it has proven difficult to expand 3UoC courses into 6UoC and produce full and coherent curriculums. These courses remain two-in-one and create difficulties in the assessment of students in two seemingly related but actually quite distinct areas. Better course design, rather than assessment design, would seem to be the key priority here.

The costs associated with assessment are increasing. The Enterprise Bargaining Agreement as it applies to casual staff (in particular, demonstrators and tutors) has dramatically increased the cost of marking outside the classroom. This is likely to increase the marking burden on academic staff if budgets constrain the use of casual staff, though peer review and peer assessment may alleviate the burden. It also increases the incentive for more efficient assessment. Work on peer review and assessment, particularly using the Workshop activity in Moodle, needs to be socialised across the faculty. Peer practices can enhance learning and improve the quality of student work, improving learning outcomes and the products submitted for marking, thereby reducing the assessment workload, as it is easier and faster to mark good-quality work.
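Of the marking issues listed above, scaling is the most mechanical. One common approach is a linear transform of raw marks onto a target mean and standard deviation; the marks and targets below are invented purely to show the mechanics, and nothing here implies this is faculty policy:

```python
import statistics

# Invented raw marks and targets, purely to illustrate linear
# (z-score) scaling of a cohort's marks.
raw = [48, 55, 62, 70, 81]
target_mean, target_sd = 65, 10

mu = statistics.mean(raw)
sigma = statistics.pstdev(raw)
scaled = [round(target_mean + (m - mu) / sigma * target_sd, 1) for m in raw]
print(scaled)
```

The transform preserves rank order but makes each student's scaled mark depend on the cohort, which is one reason scaling sits uneasily beside standards- or criterion-based assessment.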

Conclusion

Recent meetings and encounters with Heads of Schools in various formal and informal forums have highlighted the need to better understand how we teach. We know assessment to be a critical component of the teaching and learning process, and it is clear that there will always be room for improvement in assessment practices. The Assessment Project served as a very good beginning for such improvements – but still only a beginning, highlighting the need for the faculty to maintain internal and external conversations around assessment. This chapter should not be considered a definitive or exhaustive list of topics for those conversations; it only indicates some matters our participation in the Assessment Project prompted us to notice, and to which the Faculty of Science might profitably turn its attention.


Reference

Cox, J, Posada, JP & Waldron, R (2012) ‘Moodle Workshop Activities Support Peer Review in Year 1 Science: Present and future’, Paper presented to the Ascilite conference 2012.


12

UNSW Canberra: A university within a university

David Blaazer and Richard Henry

UNSW Canberra is a campus of UNSW and is located at the Australian Defence Force Academy (ADFA). UNSW Canberra provides undergraduate tertiary education for the midshipmen and officer cadets of the Australian Defence Force, as well as postgraduate programs in Arts, Business, Information Technology, Engineering, Management and Science.

In 1967 UNSW entered into an agreement with the Australian Defence Force to establish the Faculty of Military Studies at the Royal Military College (RMC), Duntroon, to deliver degree programs in Arts, Science and Engineering, and into an association with the Royal Australian Naval College to present approved courses. In 1974 the Commonwealth Government announced its intention to establish a single tertiary institution for the Australian Defence Force. UNSW worked closely with the Department of Defence to establish an institution with academic integrity. The Australian Defence Force Academy opened in January 1986. The most recent agreement between UNSW and the Commonwealth was signed in December 2009. The 2009 agreement states:

The primary aim of academic studies at ADFA is to provide midshipmen and officer cadets with a balanced and liberal undergraduate education, with a view to establishing the foundation of tertiary knowledge, skills and aptitudes that is required of them in their military profession. These studies occur in a military environment. A second aim is to instil in Defence sponsored students the higher order critical and analytical thinking skills, research and problem solving abilities and communication skills needed to enable them to operate effectively in an increasingly complex Defence environment.

Undergraduate students are in residence at ADFA. In addition to their academic studies, midshipmen and officer cadets undertake programs of military training at the Academy and at service training establishments around Australia. There are more than 1000 undergraduate students enrolled in bachelor degrees in Arts, Business, Engineering, Science and Technology; there are approximately 330 higher degree research students and 1300 postgraduate course work students. Almost all of the higher degree research (HDR) students are civilians, of whom 56 per cent are international students. Of the postgraduate course work students, around 80 per cent are serving military officers or other employees of the Department of Defence, sponsored under the terms of the agreement. The executive head of UNSW Canberra is the Rector. The four schools are Business, Engineering and Information Technology, Humanities and Social Sciences, and Physical, Environmental and Mathematical Sciences. The environment in which UNSW Canberra operates is fundamentally different from the rest of UNSW. In particular, as a result of the financial provisions of the agreement between UNSW and the Department of Defence, UNSW Canberra enjoys far more favourable staff to student ratios than any other part of the university sector in Australia. In exchange for the level of funding, the Department of Defence expects and receives a higher aggregate commitment of staff time to teaching than students typically receive elsewhere. This includes feedback on assessment tasks and frequent reporting on student progress.


In this context there is no clear imperative to make major changes to the efficiency of assessment. Furthermore, the UNSW Canberra campus does not have the same difficulties in staging examinations as the Sydney campus. All examinations can be comfortably accommodated within the buildings of UNSW Canberra and incur no venue costs. Potential savings in invigilation costs from reducing the number or duration of examinations are trivial. This should not be misinterpreted to suggest that there is no scope for improvements in the efficiency of assessment. A few courses suffer from over-assessment and redundant assessment. Moreover, as UNSW Canberra increases its focus on distance education in postgraduate coursework, there is ample scope to promote the use of online tools with the potential to make assessment less time consuming, while maintaining or even increasing the quality of feedback to students. However, our view was that the primary aim of the Assessment Project at UNSW Canberra was to increase the quality of assessment practices. Efficiency of assessment was a secondary goal: we wished to ensure that increased quality was achieved without an increase in assessment workload for staff.

An audit of courses revealed significant opportunities. While virtually all courses matched assessment to learning outcomes (and vice versa), some outcomes were very heavily assessed while others were assessed tangentially or tokenistically. Assessment practices in many courses were unimaginative and repetitive, displaying little evidence of serious reflection on how best to engage students, stimulate learning or assess their attainment of the course outcomes. The repertoire of assessment practices in courses across some degree programs was extremely limited, resulting in students repeating the same few types of tasks course after course throughout their studies.
Additionally, the design of some assessments made them extremely vulnerable to undetected plagiarism, especially in a tight-knit community of students trained in an ethos of teamwork, who are readily able to circulate student work between year groups.


Policy development and enforcement

This strategy aimed to develop school-based policies and procedures on assessment that complied with overarching UNSW policies and procedures. The School of Humanities and Social Sciences already had a policy. This was updated and brought into line with the revised UNSW policy; our impression was that an adequate policy evolved into an even better one. Neither the School of Engineering and Information Technology nor the School of Physical, Environmental and Mathematical Sciences had an assessment policy. Both had meaningful academic discussions and developed strong working drafts, which continue to evolve. The School of Business had relied on its work on external accreditation with the Association to Advance Collegiate Schools of Business (AACSB) and took some time to understand that the statements about assessment in course outlines and the procedures in place did not constitute an assessment policy. A leadership transition has occurred in Business, and the development of an assessment policy will be part of its outcome. UNSW Canberra has established a Learning and Teaching Advisory Committee as the reporting mechanism through which we can ensure compliance with the policies and procedures that have been developed.

Staff education and development

Staff education and development unfolded at both a macro and an individual level. At the macro level, a series of colloquia and workshops has been rolled out around specific issues relating to assessment. In 2012, a highly successful learning and teaching day was attended by more than half of the UNSW Canberra academic staff, with presentations made by 12 UNSW Canberra staff, a student panel discussion and contributions from senior staff at UNSW. PowerPoint presentations from the day were made available to all staff on the UNSW Canberra Learning and Teaching Group (LTG) website, and a DVD of the presentations was developed for loan from the LTG.

The day was instrumental in stimulating discussion and reflection on assessment. It also provided an opportunity to raise the profile of the Coordinators of Learning and Teaching Development for each of the schools. These Coordinators took up their positions in early 2012 as half-time secondments; a central component of the role has been to drive the Assessment Project and the activities arising from it. The fact that more than half the academic staff attended the learning and teaching day was regarded as an extraordinary outcome in its own right. This success has since been repeated, with a 2013 learning and teaching day focused on online learning and first year learning and teaching, again with a strong emphasis on assessment in those contexts.

The 2012 event was the first of its kind at UNSW Canberra. Not only did Heads of School encourage their staff to attend, they all attended themselves for at least part of the day. There was also an opportunity to meet the new Learning and Teaching Group. All of the speakers were internal to UNSW, and, apart from an excellent opening address by Associate Professor Julian Cox (ADE, Faculty of Science), all were internal to UNSW Canberra. Evaluation was strongly positive, with comments that the day had been valuable, that the time had been well managed, that it was excellent to hear one’s colleagues, and that the sharing of insights was valuable. Staff recognised that the day had helped to make teaching and assessment a legitimate and important topic for collegial discussion, rather than the private matter between individual academics and their students to which they were accustomed.
The learning and teaching day has led to a program of workshops, whose key aim has been to increase staff awareness of the range of assessment tasks and the resources available to support innovation and improvement. A series of workshops focused specifically on designing assessment to minimise plagiarism has been developed for delivery in 2013.


At the individual level, the analysis of the curriculum mapping exercise conducted in 2011, which mapped all the assessments of courses to their learning outcomes, identified some courses where alignment was poor. The Coordinators of Learning and Teaching Development, with the support of their school leadership, have worked collaboratively with course coordinators to improve alignment. This is part of a broader exercise in curriculum mapping and is discussed in the next section.
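A course-to-outcome mapping of the kind produced by that exercise can be checked mechanically for poor alignment. The course codes, outcome labels and mapping below are hypothetical, not the actual UNSW Canberra data; a sketch under those assumptions:

```python
from collections import Counter

# Hypothetical program outcomes and a course-to-outcome mapping;
# the real 2011 mapping data is not reproduced in this chapter.
program_outcomes = {"LO1", "LO2", "LO3", "LO4"}

assessment_map = {  # course -> outcomes its assessment tasks address
    "HIST1001": {"LO1", "LO2"},
    "HIST2002": {"LO1"},
    "HIST3003": {"LO1", "LO3"},
}

coverage = Counter(lo for los in assessment_map.values() for lo in los)
unassessed = sorted(program_outcomes - set(coverage))
heavily = sorted(lo for lo, n in coverage.items() if n == len(assessment_map))
thin = sorted(lo for lo, n in coverage.items() if n == 1)

print("unassessed:", unassessed)        # no assessment touches these
print("assessed everywhere:", heavily)  # possibly over-assessed
print("assessed once:", thin)           # tangential coverage
```

Even a simple check like this surfaces the pattern the audit found: some outcomes very heavily assessed, others assessed tangentially or not at all.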

Curriculum mapping The broader exercise in curriculum mapping was more challenging than anticipated but more productive as well. When the process began, some academics regarded the issues as being limited to the substantial and important process of ensuring alignment between teaching and assessment in individual courses and learning outcomes. It took longer to appreciate that there needed to be collegial discussions of assessment across majors and disciplines and across programs to ensure that assessment was linked not just to course objectives but also to graduate attributes. This played its way out differently in different programs. A few examples are instructive. Two of the areas where extensive work was undertaken, with a number of iterations, were in History and Politics. The mapping showed that an underlying problem was inadequate reflection on the relationship between the overall curriculum of the majors and learning outcomes. The response in History resulted in a complete renovation of the first year curriculum (including assessment). This renovation was guided by the threshold learning outcomes developed for History in a highly collegial process under the auspices of the Australian Learning and Teaching Council (ALTC), before the Commonwealth Government disbanded it in 2011. These outcomes were identified as learning outcomes that should be achieved by UNSW Canberra students by the end of the first year of their History major. This was to be the scaffolding to enable achievement
of relevant program attributes in later year courses. Prior to the review there had been no differentiation between second and third year courses. The decision was made to develop dedicated third year courses that would function as capstones.

The new first year courses ran for the first time in 2012. Staff and student feedback identified ongoing issues. The pass rate was similar to that of previous first year cohorts. However, there was disaffection among students, some of whom felt the demands on them were unreasonably difficult, while others thought that they were being micromanaged and that the courses were too easy. Staff feedback indicated that the faculty had been too ambitious in trying to achieve all the learning and teaching academic standards in first year. This led to a second iteration, which ran in 2013. Some of the academic standards are expected to be achieved by the end of first year, some are merely introduced in first year and some are left entirely to subsequent years. Third year courses have been developed with assessment addressing the capacity to locate, sift and analyse primary source documents, higher order analytical skills and reflective abilities.

Curriculum mapping in Politics gave rise to systematic, and largely unprecedented, discussions among academics and exposed significantly differing views concerning the fundamental purpose and aims of the discipline in the unique context of UNSW Canberra. These productive discussions resulted in a change of name to International and Political Studies. First year was redesigned, with a new team given responsibility. A process similar to that in History has followed, with a clearer development of the curriculum and assessment from first year through second and third years.

The curriculum mapping across the disciplines in the BA revealed an underlying concern that the degree tended to be an incoherent series of courses rather than an integrated program of study.
Historically, a small number of students who had enrolled initially in a Bachelor of Information Technology had transferred into an Information Systems major within the BA and still been
allowed to complete in minimum time. A review of the BA concluded that helping a handful of students to transfer from another degree made it more difficult to achieve graduate outcomes in a coordinated fashion. The Rector has accepted a recommendation that the BA should require two majors of 48 units of credit each in a 144 unit of credit degree.

What is expected of students is now more demanding, because assessment of increasing complexity now explicitly tests higher-level analytical skills. This has highlighted the concerns of some academic staff about a growing gap that they perceive between their expectations of students and student performance. In particular, a view has been expressed that the renewal of the curriculum and the alignment of assessment with desired learning outcomes has merely served to expose many students' inability or unwillingness to engage in anything other than superficial learning. According to this view, this had been masked by the disciplines' previously poor assessment practices, under which courses could be completed to pass standard without genuine engagement with their intellectual content. Of course, some will argue that this is part of the perennial grumbling of academics about the deficiencies of students, and that the onus is on academics to be more effective teachers. Others will argue that students are adept at doing just enough to clear the assessment hurdles placed in their way, and that it will take only a year or two for students to adjust to the new, more demanding standards expected of them. In this context it is instructive to look at the curriculum mapping that occurred in Mathematics and Engineering.
It is also important to note comments made in ADFA: The first 25 years of the Australian Defence Force Academy: It has been said by Academy academics that they are the only ones who have to confront the issues induced by the differing curricula used by Australia’s eight state and territory jurisdictions as, unlike other universities, their catchment area is truly
national. The unfortunate consequence is that students arrive at different shapes of preparation for tertiary education and with quite different levels of secondary school achievement in the sciences and mathematics. Even English expression and comprehension levels are different across the states and territories. The need to bring each student up to a common standard of understanding had been noted in Chemistry as early as 1992, and an internal survey in 2003 demonstrated the gaps which existed, particularly in English and Mathematics, with the result that remedial courses in those subjects had to be implemented to bring all students to a common standard as quickly as possible.

At UNSW Canberra, Mathematics for Engineers is taught by academic staff from the discipline of Mathematics. The Engineering academics have worked closely over the years with the Mathematics academics to ensure that the curriculum covers the concepts that engineers require. As part of the curriculum mapping and assessment project, the course was strengthened so that students would have the scaffolding required for their Engineering courses. The engineers indicated to the mathematicians that the curriculum was fine.

However, two problems were identified. The first was that students from the state of Queensland appear to under-perform in university studies at UNSW Canberra in both computational and textual disciplines in every student cohort, suggesting both a state-based disadvantage and a failure of first year teaching to bridge the gap. The second was a significant disconnect between the level of attainment of Engineering students in first year Mathematics courses and their ability to apply their mathematical knowledge to engineering problems. The improved assessment in first year Mathematics that resulted from the Assessment Project did not appear to provide any immediate gains. We have explored the possibility that the academics who teach second and third year Engineering are poor teachers who are
unwilling to make the necessary modifications to their teaching practices to ensure that students master the principles. This does not seem to be the key issue. Our working hypothesis is that many of the students are most comfortable with rote learning and find it difficult to transfer learning from one area to another. Clearly this is an ongoing issue. What began as a simple belief, that better teaching and assessment clearly aligned to future learning needs would be effective, has proven to be a more nuanced problem, requiring the systematic development of institution-wide approaches.

Deployment of technology

The learning management system at UNSW Canberra was outdated, difficult to use and limited in its effective application. Crucially, it did not connect effectively to many currently available 'plug-ins' relevant to efficient assessment. The decision was therefore made in 2012 to replace the LMS with Moodle 2 and to use the UNSW Technology Enhanced Learning and Teaching (TELT) platform.

As part of this transition, a number of staff, particularly in the schools of Business, and Humanities and Social Sciences, started to explore the use of GradeMark. The response by academic staff to the opportunities provided by GradeMark in marking assessments was overwhelmingly positive. Feedback from staff in the School of Business revealed the double gain of efficiency and effectiveness in the marking of assessments. Uptake was rapid, facilitated by training. Staff reported a major efficiency bonus, with academics noting time savings in the assessment of online submissions in the order of 30 per cent. We were aware that the Australian School of Business and the College of Fine Arts in Sydney were using ReView and that GradeMark was being used more by UNSW Kensington academics in Engineering and Science. However, we saw no need to test anything other than GradeMark for Business and the Humanities.

Improvement of laboratory-based assessment

Consideration of the vulnerability of some forms of assessment to plagiarism led to a focus on the improvement of laboratory-based assessment. This consideration went beyond the question of easy plagiarism of laboratory reports to issues of the quality of learning in laboratory-based education and the effective use of student and staff time in laboratories. Associate Professor Paul Tranter, the Coordinator of Learning and Teaching for the schools of Engineering and Information Technology and Physical, Environmental and Mathematical Sciences, was tasked with investigating teaching and assessment practices in laboratories at UNSW Canberra, identifying examples of good practice both in Canberra and at the Kensington campus, and proposing strategies for improvement.

Part of the context was the nature of the student experience at UNSW Canberra. All the undergraduate students are officer cadets and midshipmen, and teamwork is seen as a key attribute needed for success in the Australian Defence Force. In addition, all undergraduates live onsite in campus accommodation. The challenge was how to address plagiarism in an environment in which working together was seen as essential, while continuing to foster appropriate forms of academic collaboration among students. The report considered the following approaches to plagiarism in laboratory work:

• making the course and university policies on plagiarism clear to students
• behaviour change strategies to encourage students to work independently
• policies on group work
• design of laboratory assessment tasks to avoid repetition of previous years' work and to require types of thinking that make it harder to copy
• careful supervision of lab work by demonstrators in a situation of high student-to-staff ratios
• policies on proportion of marks assigned to laboratory work
• use of Turnitin
• a variety of forms of student report
• good teaching strategies.

The following paragraphs are taken from the Tranter report.

Most academics believed that plagiarism was a challenging issue in laboratory work, and that this potentially compromised the learning outcomes of the laboratory classes.

The key issue in laboratory work is not plagiarism, it is how to design and deliver sufficient high quality labs to deliver learning outcomes appropriate for a course … An example is examining how to design a brake hub for a vehicle and the temperature range that bearings will work in grease, with differential expansion of different metals. You can talk about this but it is not as effective as students designing a lab that demonstrates it.

One academic commenting on poor design of laboratory classes said, 'In some labs, students just don't know what they are doing, or what they are supposed to be doing'.

Many staff pointed out that an important starting point for addressing plagiarism … is giving students a clear understanding of what plagiarism is, and the course and university policies on plagiarism. Students need to understand the degree to which they are allowed or encouraged to work in groups, and the point at which they are required to work individually. Some plagiarism is done by students not understanding what it means.

Related to this point is the issue of capacity – giving students the ability to do the required work by themselves, as well as the skills to appropriately acknowledge the source of material used to support the writing up of lab reports (referencing skills). Students are much less likely to engage in plagiarism
when they have the skills to complete tasks capably without resorting to shortcuts.

As well as making the policy on academic misconduct clear to students, another strategy seen as important in relation to plagiarism is clarity in the design of learning outcomes and of the assessment criteria that support each learning outcome.

A body of research in behaviour change has repeatedly verified that simply providing information rarely has an effect on promoting desired behaviours. One idea from behaviour change theory that can be successfully applied to learning and teaching is based on the ideas of normative beliefs and social proof. The key concept here is that individuals have an inherent tendency to conform to the norm … Students are more likely to conform to expected standards of assessment if they believe that … their peers are doing the same.

Applying this to assessment can be as simple as using results from Moodle in previous years to show students how the 'majority' of students behave. For example, in one course students were shown results from the previous year's course, where 95% of students were shown to submit their assignments before the due date … [and] were told that all of these students satisfied the requirements about academic honesty … Simply telling students not to plagiarise is an ineffective strategy. In contrast, explaining that academic honesty is the 'norm' has a strong positive impact on student behaviour regarding plagiarism.

Many academics had policies on group work that allowed (even encouraged) group work in some circumstances and then made it clear that some assessment tasks were expected to be completed independently.

These excerpts make it clear that academic staff were deeply
engaged in issues related to laboratory classes at UNSW Canberra. It was recognised that these issues are shared by universities around the world. Although many positive suggestions were made and examples of good practice identified, the underlying atmosphere was one of frustration, even hopelessness, that a perfect solution had not been found.

As part of the peer support process among the ADEs, each would present regularly to the other ADEs, to the Director of Learning and Teaching and to the Deputy Vice-Chancellor (Academic). The discussion about plagiarism in laboratory classes at UNSW Canberra was a particularly memorable occasion. The ADE (UNSW Canberra) noted that the students were encouraged by the military to work in teams and that this made plagiarism in lab classes problematic. The ADA (Engineering) provided a 'solution' by describing the approach he had taken. Essentially, he devoted a certain number of laboratory sessions to students learning the relevant concept, and they were free to do this either in groups or individually, depending upon what worked for them. Incorporated within the timetable of the practical classes were regular sessions in which each student was assessed individually and needed to show that they had mastered the skills. All of the ADE group recognised that this approach would translate well to the UNSW Canberra context. In fact, the Tranter report had alluded to precisely this approach as one way to improve laboratory-based assessment.

It is hoped that the academics at UNSW Canberra will implement many of their own recommendations. No recommendation individually, nor all of them collectively, will eliminate plagiarism, but there is every reason to believe that significant progress will be made.

The Assessment Project as a driver of institutional change

One of the principles informing the UNSW Assessment Project was the idea that assessment is one of the most fundamental actions
in the whole teaching and learning process, and that action focused on assessment must inevitably provoke reflection on teaching and changes in student learning. While this view has been borne out at UNSW Canberra, the Assessment Project has, more surprisingly, become a catalyst of significant, ongoing institutional change. In essence, the Assessment Project exposed deficiencies in UNSW Canberra's capacity to engage in systematic, campus-wide reform in learning and teaching, while also becoming the trigger for further important projects.

It exposed, first of all, the lack of any suitable forum for the exchange of ideas and information among school leaders on learning and teaching issues. The UNSW Canberra Education Committee is less successful at this than its counterparts in UNSW faculties owing to the disciplinary diversity of its membership: engineers and literary scholars are rightly reticent when commenting on the intellectual or pedagogical validity of each other's courses and programs. On the other hand, the long-established Teaching and Learning Committee, although it had performed some excellent work in the past, was hamstrung by the fact that none of its members held positions that enabled them to be effective change agents in their schools. Neither body could therefore play a useful role in the Assessment Project or other major reforms that might follow.

To solve this problem, UNSW Canberra has established a Learning and Teaching Advisory Committee (LTAC), with the ADE as Presiding Member and with membership including the Deputy Heads responsible for learning and teaching in each school. Free of the requirement to formally approve course and program proposals, the LTAC is able to focus on projects and to ensure a free exchange of ideas and information and the degree of consistency of approach required to underpin success.
Another key change catalysed by the Assessment Project was the restructuring of the position of Coordinator, Learning and Teaching Development (CLTD). Previously, the position was held by a full-time academic appointed specifically to the role. Almost inevitably, that individual was regarded as ‘belonging’ to his or her native
discipline area by people outside that area, yet not fully accepted as such by people inside it. The CLTD was therefore unable to gain significant traction in schools to drive the project, or indeed any strategic initiative. Recognising that this was a structural problem, the UNSW Canberra management initiated a restructure of the position. The role of CLTD is now filled by three members of the UNSW Canberra academic staff seconded on a half-time basis for a fixed term. One works principally with the schools that teach Science and Engineering disciplines, another works primarily with the schools that teach Arts and Business disciplines, while the third focuses on online learning across the board. The restructure has not only allowed the CLTDs to work more productively with their designated schools, it has also vastly increased the flow of information and insight across disciplinary boundaries. The fact that the CLTDs are also members of the LTAC has assisted this process further.

In the course of their work on the Assessment Project and the attendant curriculum mapping, the CLTDs uncovered deep and widespread concerns about the quality of learning and teaching in first year. This has led to a new project specifically to deal with that issue. At the time of writing, the CLTDs had consulted widely with academics to establish the extent and nature of the concerns, carried out some analysis of available data to ascertain the extent to which academics' perceptions are empirically justified, developed options for diagnostic testing of, and appropriate support for, poorly prepared new students, and obtained agreement from the UNSW Canberra executive team that any approach to these problems must be consistent across disciplines. It is expected that this project will result in significant change in 2014 and beyond.

Conclusions

The Assessment Project has engaged staff at all levels across UNSW Canberra. Learning, teaching and assessment have become areas
discussed more publicly by academics, which represents a major shift from their traditionally private nature. The quality of assessment has improved, albeit with significant challenges remaining in ensuring that students achieve the program learning outcomes that have been identified. Although efficiency in assessment was very much a secondary aim of the project, marked efficiency gains have been achieved in marking and feedback through the use of GradeMark. Much work remains; in particular, the deployment of technology in assessment remains an area with untapped potential. In the meantime, the Assessment Project has been the catalyst of important structural improvements and has triggered other important initiatives for the improvement of learning and teaching at UNSW Canberra.

Bibliography

Lovell, D (ed.) (2012) ADFA: The first 25 years of the Australian Defence Force Academy, UNSW Creative Media Unit, Canberra.

PART III

13

Faculty responses to the Assessment Project Stephen Marshall, Richard Henry and Prem Ramburuth

The previous nine chapters have dealt faculty by faculty with the varied responses to the challenge posed by the Assessment Project to improve the efficiency and effectiveness of assessment at UNSW. This chapter synthesises the information in those nine chapters to tell the overall story of the faculties’ responses to the Assessment Project.

Initial responses to the challenge

When first approached to participate in the Assessment Project, the faculties generally responded positively. Several immediately saw that the aims of the Assessment Project could be aligned with their own current aims and priorities, and that the project's funding would provide further impetus for their existing learning and teaching development efforts. The Australian School of Business (ASB) and the Faculty of Arts and Social Sciences (FASS), for example, had already identified assessment as an area in need of review and development. The Faculty of Law was rethinking the curriculum associated with its law degree with a view to increasing its emphasis on practical
skills, professional values and experiential learning. It was pleased to have the opportunity to review its assessment practices to ensure better alignment between assessment tasks and these new learning outcomes and priorities. The Faculty of the Built Environment (FBE) and the College of Fine Arts (COFA) saw the project as a new driver for existing efforts in curriculum change and renewal, and as a timely opportunity to address the issue of assessment in creative disciplines, particularly the assessment of learning in studio-based teaching environments.

At the time the Assessment Project was announced, the Faculty of Engineering had already begun a program-level assessment audit as part of its regular process of seeking program accreditation from Engineers Australia. Consequently, the Assessment Project was seen as a complementary activity that could be used to further engage staff with the need to review assessment practices as part of this externally mandated accreditation process. Coincidentally, the ASB was intent on pursuing international accreditation with the Association to Advance Collegiate Schools of Business (AACSB) and also saw the Assessment Project as assisting it in this enterprise.

Efforts to align the processes and outcomes of the Assessment Project with existing faculty goals and priorities for learning and teaching development were perceived to have a number of benefits. They increased the flexibility of the environments within which both activities could be conducted, allowing each to influence the other and, in doing so, providing more scope for reform. The complementary nature of the projects meant that, in many cases, staff resistance to meeting the challenges of what might otherwise have been perceived as a centrally imposed project to reform assessment was significantly reduced.
In fact, as the Associate Dean Education (ADE) in the Faculty of Law observed, 'many staff may not have initially realised that two projects were running concurrently'. Other faculties had no such existing projects but saw the
Assessment Project as an opportunity to look more closely at their assessment practices to ensure that they were aligned with best practice for their disciplines and with current pedagogical thinking.

The Faculty of Medicine, for example, acknowledged that the quality and maturity of its assessment methods were uneven across programs and did not serve the diversity of its students' assessment needs well. The faculty was eager to remedy this by leveraging the expertise and resources that would be at its disposal through project funding. UNSW Canberra was aware that it had an issue with over-assessment and redundant assessment in some courses, and was curious to try online methods to widen the range of assessment types it used and to make assessments more engaging. Further, UNSW Canberra wished to explore how it might reduce plagiarism and improve the quality and consistency of the feedback given to students on assessment tasks.

In welcoming the Assessment Project, a number of faculties indicated that it would provide them with the stimulus and opportunities they needed to reflect critically on their current assessment practices to ensure that these were at least 'fit-for-purpose' and indicative of 'good practice' as currently described in the scholarly assessment literature. The Faculty of Law, for example, welcomed the project, saying that, 'with the growth of the faculty, a generational turnover of staff and the increased emphasis on research outputs in recent years, the broader discussion of teaching practice and assessment had declined … the faculty had not had the opportunity to collectively reflect on its own practices and to consider how it is situated in the broader educational environment'.

Rationales for responses to the Assessment Project

Although a clear, concise rationale for the approach each faculty adopted in responding to the Assessment Project's goals was rarely articulated, the faculties provide sufficient information in
their descriptions of what they did and how they went about it for inferences to be drawn about their motives. The ASB was the exception, in that it provided a very full rationale for its response to the project. The faculty noted that its aim in conducting its initial assessment audit was to provide insight into its current assessment practices and to generate a database of baseline activities and approaches. The aims of the other activities associated with the ASB's response can be summarised as:

• increasing the faculty's awareness of assessment practices
• improving measures of the effectiveness of assessment tasks, to improve the student experience
• improving the efficiency of assessment generally, to reduce the academic workload
• ensuring good practice in assessment and seeking buy-in from academics across the disciplines
• fostering and encouraging innovations in assessment, including strategies for facilitating higher order thinking, developing graduate attributes and incorporating the use of technology
• providing training and development for academics to ensure that good assessment practice was implemented within the disciplines and throughout different course levels
• meeting distinctive educational needs in the disciplines while supporting academics to improve efficiency and effectiveness in assessment
• enhancing the coherence of assessment across the faculty
• strengthening the alignment between faculty assessment processes and practices and the university's assessment policies, procedures and aspirations.

Overall, the ASB acknowledged that it needed new and better approaches to improving the effectiveness and efficiency of assessment, together with the development of systematic and comprehensive assessment literacy among its staff, to ensure that their
assessment rules and practices were not being driven by taken-for-granted, unquestioned and possibly out-dated academic beliefs and traditions. Further, acknowledging the impact of assessment design on what and how students learn, and wishing to 'ensure the best possible ... learning journey' for students, the faculty agreed that 'opportunities for ... staff development in assessment design' would be critical to the success of its efforts to improve the quality of assessment. To this end, much effort was put into providing staff with the advice and guidance necessary to develop their capabilities to assess 'higher order learning' through 'high quality question design', including the use of 'realistic scenarios and simulations' and 'authentic assessment tasks'.

The Faculty of Law acknowledged that at the start of the Assessment Project staff generally did not regard assessment 'as something that ... builds towards a set of agreed outcomes in a formative way other than through the gaining of expertise via repetition. There is an emphasis on assessment for summative purposes.' For this reason, the faculty aimed, through the Assessment Project, to begin a process of cultural change, increasing staff's 'intellectual engagement with the purposes and effects of assessment in learning and making the Law School's practice of assessment more responsive to both student and professional needs'.

In addition, the faculty believed that it was relying too heavily on 'osmosis' to teach ethical awareness and professional values, and that the increasing numbers of students and sessional staff made this approach ever less likely to develop the desired learning outcomes. Thus, the faculty believed it needed to incorporate the development of these values more explicitly into its degree program and to design appropriate assessment tasks around them.
In conversations with Assessment Project leaders in COFA, it became clear that they had interpreted 'effectiveness' to mean 'assessment-as-learning' and 'efficiency' to mean 'enhanced assessment quality as opposed to quantity' to reduce staff and student workloads. The faculty wished to develop an integrating role for assessment as a central component and driver of learning throughout its programs and courses. Thus, it determined that its response to the Assessment Project would be to redesign assessment as part of its existing project to reimagine education in the field of Fine Arts, and to reshape the curriculum accordingly.

UNSW Canberra, like COFA, saw its primary goal as improving the quality or effectiveness of assessment, with efficiency gains only a secondary goal. It believed that improvements in the quality of assessment would not increase the workload for staff and thus, rather than directly pursue workload reductions, UNSW Canberra chose to focus on improving the quality of assessment practices.

In the absence of a specific set of definitions for 'efficiency' and 'effectiveness' of assessment being provided to faculties by institutional project managers, faculties were required to determine their own responses to the primary challenge of the Assessment Project. This led some faculties to take a considerable amount of time to determine what their response and approach might be. Science, for example, was one of the faculties that expressed concerns about the lack of clarity from the university in relation to what the primary aims or drivers of the Assessment Project were. However, in hindsight, the faculty's ADE concluded that:

Through this lack of clarity it actually became clear that the shape of the project was to be determined by each faculty, [and this was] appropriate to the … devolved governance model that … prevail[s] at UNSW.

Indeed, the faculty appreciated having only ‘the broadest of imperatives from the top’, along with ‘the freedom to create shape [to the project]’, as this provided the flexibility that it needed to align its work on the project to its own goals and priorities. The Faculty of Medicine had no quibbles about definitions.

Faculty responses to the Assessment Project 287

It saw the Assessment Project as an opportunity to pursue efficiency gains in relation to examinations and hoped that the use of technology would deliver them. Additionally, the faculty wanted to evaluate assessment practices in its undergraduate and postgraduate (masters) programs to ensure that they were appropriately diverse, sustainable, authentic and constructively aligned with the learning outcomes articulated for each course and program.

Organisational arrangements for the Assessment Project

The position of Associate Dean is not a formal position in the university’s governance and management structure. It is up to each individual Dean to decide whether to create Associate Dean positions and how to frame their position descriptions. As a result, faculty governance structures, particularly in relation to learning and teaching, are not consistent and reflect the particular mission, context and circumstances of each faculty. However, within most faculties the ADE was expected to coordinate the faculty’s response to the Assessment Project. In the Faculty of Engineering (which does not have an ADE) it was the Associate Dean (Academic) (ADA), while in Medicine the ADE and the Associate Dean (Postgraduate Coursework) were jointly responsible for determining which areas of the faculty’s educational programs the Assessment Project should focus upon.

While the ADEs were frequently supported in fulfilling these responsibilities by a range of individuals in a variety of roles throughout their faculties, the responsibility to define, enable or support, implement, monitor, coordinate and report upon their faculty’s response(s) to the Assessment Project was often left to them alone. Thus, the ADEs worked closely together as a group to resolve common issues that confronted them and to provide each other with the support needed to fulfil their individual and collective project management responsibilities. As the ADE in the Faculty of Science observed:

    Informal monthly meetings of the ADEs provided a degree of support through the project. Indeed, the ADEs as a group possess a spirit of collegiality perhaps unmatched within UNSW. There is never a sense of competition but one of sharing. Healthy academic debate is always conducted in an atmosphere of mutual respect, despite differences as individuals and among faculties. Such conditions engender deeper levels of engagement and industry in tasks set for or by the group than might be expected given the total workload managed by each ADE.

As indicated above, ADEs were often assisted to implement their faculty’s response to the Assessment Project by a range of individuals in a variety of positions within their faculty. Depending upon their context and circumstances, faculties established or leveraged the work of one or more positions or teams within the faculty to assume responsibility for various aspects of their response. In a number of cases, faculties formally reviewed their organisational arrangements for overseeing, managing and supporting learning and teaching development, determined that these were inadequate to meet future needs (including their response to the Assessment Project) and, as a result, changed them. Some, including FASS, COFA and Law, established positions with various titles that collectively might be described as Faculty Directors of Learning and Teaching, to work at faculty level and assist the ADE to fulfil their responsibilities to assure and improve the quality of the faculty’s learning and teaching. UNSW Canberra set up three new Coordinator of Learning and Teaching Development positions to support its staff to plan, implement and review strategies to further develop learning and teaching within and across its schools, and to assist UNSW Canberra to respond to the Assessment Project. Most faculties took advantage of the university’s offer to extend the funding that supported the existing Learning and Teaching Fellow (LTF) positions within each faculty, with the proviso that their work during the period of extension was to focus on realising outcomes consistent with those of the Assessment Project.

Many faculties established or leveraged a variety of new or existing working parties to assist the ADE in responding to the challenges of the Assessment Project. FASS established a Faculty Assessment Working Party solely for project purposes. This was chaired by the faculty’s Director of Learning and Teaching (DLT) and comprised, typically for such groups, one Head of School, two Deputy Heads and two academics with extensive assessment experience. FBE established its Educational Development Team, comprising the ADE, the faculty LTF, higher education specialists, an educational change consultant and project research and administrative staff, to support faculty members to engage fully and actively in the ongoing program renewal processes while exploring and developing ideas and initiatives to reshape program and course assessment practices. Law used project funding to employ a graduate and a current student to provide student perspectives on assessment practices and to complete its team of ADE, faculty DLT and LTF. In the Faculty of Engineering, working parties that had already been established for the ‘Fourth-Year Thesis Assessment’ and ‘Program Assessment Mapping’ projects were leveraged.

In addition to these formal, ongoing or fixed-term arrangements, faculties frequently deployed strategic funding, provided by the university or from their own resources, to engage a range of different consultants for different purposes.
Some faculties, including the ASB and FBE, used this funding to engage the university’s external
assessment consultant to advise and assist in the development of the faculty’s strategy for improving the efficiency and effectiveness of assessment practices. Others employed a variety of learning advisors, educational developers, and/or project staff to assist with curriculum mapping, assessment audits, curriculum and assessment redesign and professional development.

Initial responses to the Assessment Project

In accord with the university’s intention that any change to current assessment practices be evidence based, all faculties began by conducting some kind of audit or mapping of their current assessment practices. This phase of the Assessment Project was intended to make UNSW’s assessment practices visible and thereby open them to scrutiny, critique and revision. Customarily, assessment practices at UNSW have largely been hidden from view, often buried in course outlines and class handouts. Consequently, it has been very difficult to gauge either the quality or appropriateness of assessment practices as they evolve from what is proposed at the time programs and courses are formally approved.

In most cases these audits were based on advice from, or undertaken with the assistance of, the project’s external assessment consultant. While each faculty determined its own approach to auditing its assessment practices, these approaches generally involved the faculty’s Assessment Project leaders (i.e. ADEs and others) and/or LTU staff in the development of audit tools or instruments and in the preparation and dissemination of guidelines for using these tools or instruments to collect and report the data required to review current assessment practices. Project-funded research assistants generally collected the data. Typically these data related to: the amount and type of assessment currently being deployed in the faculty, school or discipline, the amount and type of assessment currently being deployed in similar programs or courses in other institutions nationally and
internationally, and the student and staff experience of current assessment practices.

A variety of data collection approaches were employed, including:
• desk audits of program and course outlines, handbooks and information packs
• written surveys of staff and students
• focus groups with staff and students
• web and literature searches for data that could be used to benchmark current practices.

The data collected from written surveys and focus groups included not only data related to the student and staff experience of current assessment practices but also ideas and suggestions about how current practices might be improved. Much of the initial work associated with the analysis and reporting of these data was undertaken with the support of the university’s external assessment consultant and staff of the LTU, who generally produced a data pack and written report providing a rich summative description of current assessment practices. Often, individual faculties, schools or program groups would undertake further, more detailed analyses of these findings and the underlying data to further explore the strengths, weaknesses, opportunities and risks associated with their current practices.

The Faculty of Law, for example, found that the initial audit of its assessment practice placed all the different varieties of ‘extended writing’ assessment tasks into a single category, making its spread of assessment practice look narrower than it actually was. This necessitated further, more detailed audits of examination papers, class participation and innovative assessment practices being used in the faculty, so that the diversity of assessment practice could be fully represented and explored.

Faculties’ responses to the outcomes of these initial audits varied. Most found the outcomes useful in helping them to identify
the strengths, weaknesses, opportunities and risks associated with their current practices and, therefore, to plan their response to the university’s challenge to improve the efficiency and effectiveness of assessment. As the ADE of the Faculty of Law observed:

    This degree of empirical data on assessment is unparalleled in the faculty’s history. It will provide a firm basis for ongoing discussions about efficient and effective assessment strategies and to benchmark the efficacy of changes made.

Some faculties, however, found the initial assessment audits not particularly useful and chose to pursue a development agenda for assessment that aligned more directly with their current strategies and priorities for learning and teaching development. The Faculty of Engineering, for example, believed that it could best respond to the challenges of the Assessment Project by playing to its strengths as an ‘innovator and technologically savvy’ faculty and concentrating on technological innovations, specifically ‘leveraging existing resources such as Moodle LMS tools and added plug-ins’.

Approaches to implementing change in assessment practices

A variety of approaches were used within faculties to effect change in their assessment practices. These approaches reflect not only the issues that faculties identified during their audits as being in need of reform but also the priorities, resources and state of readiness of particular faculties or schools to engage in the processes of innovation and development required. In those faculties where investment had previously been made to establish ongoing organisational and administrative infrastructure to support regular cycles of educational innovation, the time taken to deploy resources to audit practice, identify opportunities
to improve practice and plan and organise the implementation of changes in response to these opportunities was considerably shorter than in others. In some cases this also meant that the scale of change attempted was considerably greater.

The approaches taken to implementing change in assessment practice can be characterised as either program-level or course-level. However, these categories are not mutually exclusive, and a program-level approach often involved many of the same aspects or characteristics as course-level approaches.

The program-level approach

As the name suggests, the program-level approach to improving assessment practices typically involved the review and revision of aspects of current practice that influence or define approaches to assessment in an entire program, including the structure of the curriculum and, in particular, the relationships between assessment practices and the learning outcomes associated with each of the courses that make up the program.

COFA, FBE and Law each adopted a whole-of-program approach in their efforts to respond to the challenges of the Assessment Project. Prior to the project, all three faculties had embarked on major programs of work aimed at reviewing and revising their current suite of programs. COFA had begun a program of work aimed at ‘rebuilding [its] programs in art, design and media from the ground up … developing an entire suite of new courses and new approaches to learning’. FBE had embarked on a major project to ensure a more distinctive student learning experience by reviewing and rethinking its curriculums to ensure that they were ‘globally focused, research-led, interdisciplinary and professionally accredited’. Law had begun a process to ‘fundamentally rethink its Law degree’. In each case, the arrival of the Assessment Project provided the ADEs, and others with responsibilities for these existing whole-of-program initiatives, with the opportunity to use the lens of
assessment as a way of engaging staff in the task of reviewing and revising curriculums. Consideration of some of the fundamental principles of effective assessment, such as the principle of constructive alignment, meant that faculties and their staff, as part of their efforts to reconceptualise their programs and courses, had to give due consideration to a range of issues associated with quality curriculum and pedagogical design, including:
• clarity of program or course goals and purpose
• clarity, specificity and appropriateness of learning outcomes
• alignment between learning outcomes, learning activities and assessment tasks
• authenticity of the learning activities (including formative assessment tasks)
• appropriateness of the indicators and standards that would be used to assess learning
• the mechanisms that would be used to assess program-level learning outcomes.

Central to these whole-of-program approaches to reviewing and revising assessment was therefore a process of curriculum mapping to determine how, and how well, assessment tasks were or should be deployed within courses and throughout a program to assess achievement of both course-level and program-level learning outcomes. Through the adoption of this whole-of-program curriculum development approach, each of these faculties hoped to ensure the efficient and effective development not only of its new suite of programs and assessment practices but also of the capabilities of all its teaching staff in relation to the design, development, implementation and review of efficient and effective assessment practices.

A whole-of-program approach was also adopted in the Faculty of Medicine, where the changes that had been made to assessment practices over the previous 10 years as part of the renovation of the undergraduate medical program were extended to the undergraduate exercise physiology program and the three main masters programs in the School of Public Health and Community Medicine.

Whole-of-program changes to assessment practices were achieved in some faculties via a change in the faculty’s policy and procedural frameworks for assessment. In the ASB and Science, for example, where in some programs three-hour mid-year examinations had been a taken-for-granted requirement for the assessment of student learning, changes to the university’s policies concerning the length of examinations, coupled with a growing awareness of opportunities to use other more appropriate or authentic forms of assessment, resulted in these faculties changing their policies and practices in relation to mid-year examinations. This had a significant impact on both the number and nature of mid-year examinations deployed in all their programs and courses.

The course-level approach

In most faculties a course-based approach to improving the efficiency or effectiveness of assessment was adopted. By involving all course convenors, faculties believed that they could achieve more widespread change and improvement in their assessment practices. However, despite ‘the course’ being a common focus for attention, the strategies used by different faculties to renovate course-level assessments differed greatly.

FASS, for example, was concerned about the assessment load on students. To address this issue it decided to develop a tool that would help staff to design efficient and effective assessment programs for their courses and assist the faculty to monitor and assure the appropriateness of the student workload associated with the proposed programs of assessment. By requiring all staff to develop, and have approved, their course assessment programs using this tool, the faculty believed it could maximise the impact of its efforts and thus ensure widespread improvement in students’ experience of assessment practice. In the ASB, on the other hand, how assessment practice was to be improved was generally left up to individual course convenors.

This approach resulted in a much wider range of innovations in assessment practice, including:
• replacement of mid-semester examinations with other more authentic forms of assessment
• adoption of a rubric-based assessment strategy to assess ‘class participation’ and ‘group work’
• use of problem-based learning and assessment strategies
• introduction of journals as a means of assessing team-based learning and development
• development of multiple-choice questions to assess higher order reasoning.

Of particular note were the achievements of one ASB staff member who introduced a piece of technology (ReView) into her course to:
• engage students more actively in the assessment of their own work
• facilitate the processes of marking, grading and provision of feedback to students
• improve the quality and consistency of feedback provided by the course’s multiple tutors
• reduce the amount of time that she was required to put into the training of tutors and the moderation of their assessments.

Her success in making these improvements (she achieved a reported 30 per cent reduction in the total amount of time that she devoted to all aspects of assessment in her course) not only led others in her school and faculty to explore and/or adopt this technology in their own courses but also convinced staff in other faculties of the merits of her approach. Indeed, COFA adopted this same technology to support the development and deployment of rubrics for assessment and feedback practices in many of its own programs and courses. The development and use of technology to improve the
efficiency and/or effectiveness of assessment in courses was also a strategy adopted by the faculties of Engineering and Science. The Faculty of Engineering determined that its response to the challenges of the Assessment Project would be to use the support and resources available through the project to further the development and deployment of its Thesis Assessment Moodle course, both to manage its Fourth-Year Thesis course and to improve the consistency of marking across all of its schools. Further, by collaborating with the Faculty of Science on the development of calibrated peer review functionality in the Moodle Workshop tool, Engineering also fulfilled a desire to reduce the number of learning management systems that first-year students need to use, and therefore develop competency in, from three to one. The development of this functionality within Moodle has provided all staff within UNSW with the option of introducing calibrated peer review processes into their curriculums as a way of increasing student engagement in the processes of assessing the quality of their own and others’ work. In addition, through the integration of this new functionality into the Moodle core, it is now available to all users of Moodle worldwide.

A common strategy for improving assessment practices in many faculties involved developing and implementing ways to make more transparent what was expected of students in the assessment tasks deployed within the faculties’ programs and courses.
Various approaches were adopted, including:
• ensuring that assessment task requirements were clearly articulated in course outlines
• ensuring that the criteria, indicators and standards that would be used to judge achievement or performance of designated learning outcomes were documented and available to students prior to the deployment of any assessment task, so that they might use these to guide and review their learning
• developing and using rubrics to provide students with specific feedback on their achievement or performance of designated learning outcomes against the criteria, indicators and standards specified.

Typically, technologies including ReView, GradeMark and the newly developed calibrated peer review functionality in Moodle’s Workshop tool were deployed to enable this increased level of transparency and feedback.
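Although this chapter does not describe the mechanics, the calibration idea behind such peer review tools can be illustrated with a short sketch. The Python fragment below is purely hypothetical: the function names, the linear weighting scheme and the tolerance value are assumptions for illustration, not the Moodle Workshop implementation. It shows the general mechanism, however: reviewers first mark calibration samples whose instructor scores are known, and their agreement with those scores determines how much weight their subsequent peer marks carry.

```python
# Hypothetical sketch of calibrated peer review (NOT the Moodle Workshop
# implementation): a reviewer's accuracy on instructor-marked calibration
# samples weights the peer marks they later award.

def calibration_weight(reviewer_scores, instructor_scores, tolerance=10.0):
    """Return a weight in [0, 1]: 1.0 for perfect agreement with the
    instructor, falling linearly to 0 as the mean absolute difference
    approaches `tolerance` marks."""
    diffs = [abs(r, ) if False else abs(r - i) for r, i in zip(reviewer_scores, instructor_scores)]
    mean_diff = sum(diffs) / len(diffs)
    return max(0.0, 1.0 - mean_diff / tolerance)

def calibrated_mark(peer_marks, weights):
    """Weighted average of peer marks; fall back to a plain mean if no
    reviewer earned any calibration weight."""
    total = sum(weights)
    if total == 0:
        return sum(peer_marks) / len(peer_marks)
    return sum(m * w for m, w in zip(peer_marks, weights)) / total

# A reviewer who agreed closely with the instructor on the calibration
# samples counts for much more than one who did not.
w_close = calibration_weight([70, 82], [72, 80])  # mean diff 2    -> weight 0.8
w_far = calibration_weight([40, 95], [72, 80])    # mean diff 23.5 -> weight 0.0
final = calibrated_mark([65, 90], [w_close, w_far])
```

Real systems differ in how they map calibration error to weight and in how they treat reviewers who fail calibration altogether, but the moderating principle is the same: marks from reviewers who cannot reproduce the instructor's standard contribute little to a peer's final grade.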

Conversations, resources and training

Regardless of the approach faculties chose to take to address the challenges of the Assessment Project, almost all approaches had one feature in common: an explicit effort to encourage conversations around assessment involving staff, students and industry partners. Initially, many of these conversations involved a relatively small group of individuals in each faculty (the ADE, Heads of School and the faculty’s Assessment Project team members) and focused on analysing and planning responses to the outcomes of the faculty’s assessment audits. Later, these conversations were expanded to include all staff via a series of faculty- and school-based assessment-related events, including regular colloquiums, workshops, seminars and discipline-specific assessment discussion forums. These efforts were very successful in creating ongoing faculty-wide, and in several instances inter-faculty, conversations and networks related to assessment. They were critical in moving teaching, and in particular assessment, from being a ‘private’ activity to being the ‘public’ activity it needs to be if it is to be open to improvement through collaborative, critical peer review and development. The amount of conversation and sharing of teaching and assessment experience across the campus increased notably as a result of the faculties’ efforts to encourage and enable this.

The provision of resources and training for staff in relation to the design, development, implementation and review of assessment items and practices was also a feature of many faculties’ responses to the Assessment Project. Some developed and disseminated guidelines on different aspects of assessment. Others produced resources for students and staff on topics such as assessing group work and student participation, and providing and responding to feedback. Yet others created online databases of assessment literature to make it easier for staff to access current scholarship related to effective and efficient assessment practice.

A particular focus for training and development in relation to assessment was sessional staff. In some faculties this was addressed through the inclusion of sessions on assessment requirements and expectations in the induction and orientation programs that faculties ran for sessional staff. In others, handbooks and a variety of other printed resources were produced and disseminated to address this need.

Issues in initiating and implementing change in assessment practices

Effecting educational change is always complex, and a range of leadership and management challenges must be addressed in each phase of any change process. In this section, the leadership and management challenges associated with faculties’ efforts to initiate and implement changes to improve the efficiency and effectiveness of assessment practices at UNSW are discussed.

Issues in the initiation of change

From a leadership perspective, the main challenge associated with the initiation of any change process is that of engaging the hearts and minds of all relevant stakeholders. It is fair to say that initial responses to the challenges of UNSW’s Assessment Project were not universally positive. It took staff in several faculties some time to see that they and their students might benefit from engaging
in the project. In some instances, staff resented the university requiring faculties to participate in the project. Some ASB research staff, for example, asserted that they had minimal time available for assessment training and development. Others were reluctant to invest time in devising innovative assessment tasks when ‘research output, not innovation in teaching, is what is rewarded’. In many instances, a lack of confidence to try new technologies lay at the root of staff resistance to engaging in their faculty’s efforts to improve assessment practices. In others, it was staff scepticism about the university’s real intentions in establishing the project that led to their reluctance. In COFA, for example, staff suspected that the university was really only interested in finding ways to cut costs, not in improving the effectiveness of assessment. In other faculties, such as the Faculty of Science, staff saw increasing efficiency and improving effectiveness as fundamentally incompatible goals.

While the approaches taken by faculty leaders to address these challenges varied, they were informed by a common belief: that people are reluctant to change unless they have a reason or an incentive to do so. Faculty leaders therefore believed that their efforts to initiate change in assessment practices had to centre on processes that gave staff the reasons and incentives to challenge, and then improve, their current practices.

Many faculty leaders relied on two things to engage staff with the project. The first was the use of assessment audits as a means to describe, reflect on and baseline current practice. In the Faculty of Science, for example, the audit process (including the post-audit discussion with the faculty) is reported to have ultimately persuaded most staff of the value of the exercise and of the faculty’s freedom to engage in the project in ways that best suited its needs and priorities.
The second strategy was to leverage familiar existing practices, such as accreditation processes, to respond to the challenges of the Assessment Project. At COFA, staff doubts were ultimately laid to
rest by the fact that the project shared some goals with the faculty’s own Program Simplification Project.

To ensure widespread engagement with the Assessment Project, some faculties, notably the ASB, used a targeted dissemination strategy, making a deliberate effort to ensure that everyone in the faculty, school or program concerned received a copy of the audit outcomes. Furthermore, the ADE set up meetings at which staff would come together to discuss and reflect on the contents of their report and, if they wished, to challenge or query the findings. Whenever possible, the university’s external assessment consultant attended these meetings to assist staff to challenge the taken-for-granted assumptions embedded in their practices. She provided what the ADE in Science described as ‘high-level expertise but no vested interest’. She could dispassionately point out, for example, that the profile of assessment in a program did not change over the course of a five-year degree, or that one or two types of assessment task that were not necessarily appropriate to the stated learning outcomes were being relied on to assess everything when other types of assessment might be more appropriate.

It is worth noting that while this dissemination strategy proved beneficial in engaging staff in the activities of the Assessment Project in the ASB, in other faculties, where no such deliberate efforts were made, the findings of these audits often went no further than the ADE or the Heads of School, and relatively little engagement of staff throughout these faculties was achieved.

Part of the strategy that many faculties used to engage staff in the activities and work of their Assessment Projects focused on helping staff to feel capable of the changes expected of them.
To this end, many faculties committed to ensuring that people were provided with the information and support they required to undertake the critical scholarly work involved in reviewing and redefining assessment practice. In these faculties, particular emphasis was placed on ensuring that consultants and other individuals were available who could provide the information, advice and guidance people needed to perform this work.

Having staff appreciate why and how current assessment practices needed to change helped faculty leaders to engage staff in the Assessment Project, but other strategies were needed to maintain their interest and commitment. In most faculties this was achieved by developing and maintaining opportunities for staff to talk about assessment: to make sense of the issues, plan their responses, discuss how they would go about implementing those responses or changing their practices, and monitor, evaluate and keep abreast of their achievements as they progressed. Some faculties chose to align their annual internal awards for outstanding teaching or contributions to student learning with the faculty’s goals and priorities for change in assessment practices, thereby providing incentive, recognition and in some cases reward for staff to focus their efforts in this way.

Faculty Assessment Project leaders used a variety of other policy instruments to engage staff with the project. Mandates requiring people to change and respond to new policies, procedures or rules were important in some faculties. The most notable example of the use of a mandate to effect change in practice occurred within FASS, where all staff were required to use the faculty’s newly developed assessment tool to design, and have approved, the proposed assessment programs for each of the courses for which they were responsible. Inducements in the form of small grants to support innovation in assessment were also used by some faculties.

From a management perspective, the challenges associated with initiating the faculties’ responses to the Assessment Project related to how their response would be planned, coordinated and supported from a financial and resource perspective. Central to resolving these challenges was determining the structural and organisational arrangements that the faculty would require.
The dilemma for faculty Assessment Project leaders and managers was to determine whether existing structures and organisational entities such as working groups or committees, or particular faculty or

Faculty responses to the Assessment Project 303


school leadership or management roles, might take on responsibilities within the project, or whether such existing groups or roles needed to be supplemented for the period of the project with new groups or roles. The principal questions that needed to be resolved were:

• How would the project be governed and managed within the faculty?
• Who would be involved?
• What would be their responsibilities?
• How would project arrangements align with existing governance and management structures within the faculty?

Part of the challenge of resolving these issues lay in:

• the current lack of clarity in the relationships between the roles and responsibilities of ADEs, Heads of School, Program Coordinators and Course Convenors in relation to educational innovation and change in most faculties
• the political tensions that arise when one vests authority in, or shifts authority from, new and existing positions within an organisational structure.

The lack of clarity in current role descriptions of the ADEs, Heads of School, Program Coordinators and Course Convenors with regard to educational innovation and change often meant that a significant amount of time needed to be spent negotiating what the appropriate contributions of those in these roles might be in relation to the Assessment Project. Each faculty resolved these tensions differently, with many choosing to vest all responsibility in the ADE, while others shared the responsibilities across the full range of positions with responsibilities for learning and teaching development.

In faculties where the ADE was vested with all responsibility for determining and delivering the faculty’s response to the Assessment Project, tensions sometimes arose. At the root of these tensions lies

304 PART III


the fact that ADEs do not have the authority to determine how resources are deployed within schools, and consequently they had to find ways to influence and work in collaboration with Heads of School to ensure that the work of the project was completed. These attempts to influence were in some cases perceived by faculty staff to be inappropriate interference in the operations of independent schools. Thus, finding strategies to resolve these tensions was a critical task for ADEs who found themselves in this position.

On the other hand, there is evidence to suggest that where faculties shared responsibilities broadly among the ADEs, Heads of School, Program Coordinators and others in formal leadership and management roles for learning and teaching, broader staff engagement in the faculty’s Assessment Project activities was achieved. In ASB, for example, where the ADE involved the Heads of School in reviewing and reflecting on the outcomes of the school assessment audits and where Heads of School subsequently engaged their staff in reviewing the outcomes of this audit for their school, both the level of awareness and the level of engagement in activities to improve assessment were comparatively high.

Many changes associated with both the process and the outcomes of the Assessment Project challenged longstanding values, assumptions and beliefs underpinning current business practices relating to assessment and other learning and teaching matters in faculties or schools. Leaders and managers of the change processes needed to develop appropriate strategies to address these cultural differences and to help individuals and groups appreciate that the changes proposed would benefit the faculty, the schools and their students in the long term.

Resourcing the project’s activities was a major challenge for most faculty-based project leaders and managers.
This challenge partly arose from perceptions in some faculties that it was the university’s responsibility to find the resources necessary to effect the required changes. However, the federated nature of UNSW and the one-line budget approach taken to funding each faculty were



predicated on an institutional belief that faculties would use these devolved budgets to support both the delivery and the further development of their educational programs. Thus, resources necessary to support implementation and institutionalisation of any changes required or identified as a necessary part of the project were to be found in the faculties themselves.

This tension became particularly apparent when the strategic funding provided by the university to support the faculty LTF position came to an end. In those faculties where this strategic funding was understood to be an opportunity for the faculty to develop its own internal capacities and capabilities for ongoing leadership and management of educational innovation, the withdrawal of funding did not become an issue. For example, the Faculty of Medicine just got on with things. However, where there was a belief that the university was responsible for maintaining these positions, or that ongoing investment in the development of learning and teaching was not, or did not need to be, a faculty priority, the faculty’s capacity to lead and manage educational reform was diminished once the funding supporting its LTF disappeared.

Issues in the implementation of change

The leadership and management strategies used to implement faculties’ responses to the Assessment Project varied enormously, reflecting different aspects of the contexts within which change was being attempted (organisational, social, political, cultural and historical); different levels of staff knowledge, skills and capabilities in relation to the changes required; and different levels of staff resistance to proposed changes.

The impact of context on efforts to change assessment practices

As might be expected in large multi-disciplinary faculties, cultural differences between their constituent schools and disciplines often made progress towards achieving the faculty’s desired outcomes



for the Assessment Project difficult. Differences in the values and beliefs that underpin traditional assessment practices within each school or discipline made it necessary to negotiate, justify and make meaning of the faculty’s chosen response in each of these contexts.

As the ADE in COFA found, it was difficult to achieve consistent implementation of the faculty’s desired response to the Assessment Project throughout the faculty because of ‘entrenched assessment practices [and] specific issues arising from the faculty’s disciplinary mix … leadership and capacity for change’. In the FBE, ‘some staff members questioned the need to review their own assessment practices, critique their courses and … make changes to their programs’. Indeed, ‘some staff simply questioned the need for change!’ In this context the FBE ‘education team worked diligently to allay fears relating to changes that were occurring in the faculty and sought to promote understanding of the time and energy needed for curriculum change’. They found that ‘some staff were more receptive to this approach than others’.

Both COFA and UNSW Canberra found that the project’s implementation phase brought into focus historical and ongoing issues they had faced when attempting to implement any kind of systematic faculty-wide learning and teaching reform. The imperatives of the project forced faculty leaders and managers to confront and deal with these issues and devise strategies that would have ongoing benefits in all aspects of learning and teaching development, and not just the assurance and improvement of the quality of assessment.

The impact of staff knowledge, skills and capabilities on efforts to change assessment practices

A lack of staff knowledge in relation to learning and teaching in general, and assessment of learning in particular, was identified by the ADA of Engineering as having prevented the faculty from being able to require all academics to deliver improvements in assessment in all courses. Indeed, this was given as the primary



reason why the faculty concentrated on developing and deploying technological solutions to identified assessment-related issues in first and fourth year courses. The removal, due to budget constraints, of the Educational Developer position that had previously supported staff to innovate and change their programs and courses was another reason put forward to explain why the faculty could not pursue a ‘broader assessment upgrade’.

This issue was not restricted to the Faculty of Engineering. Indeed, all faculties reported that a key element of their approach to implementing their response to the Assessment Project was the provision of ongoing professional development of their staff. In most cases this development focused on issues related to the principles that underpin effective assessment practice and the various strategies that can be employed to embed these principles in the design, development, implementation and review of assessment programs, items and processes. As described above in relation to initiating assessment change, the ongoing provision of high quality support and resources to develop the knowledge, skills and capabilities of staff in relation to assessment was also found to be critical to the efforts of faculty leaders and managers to implement change in assessment practices.

The impact of staff resistance to efforts to change assessment practices

Undoubtedly one of the greatest challenges that faculty leaders and managers faced in relation to the implementation of their faculty’s response to the Assessment Project was staff resistance. At UNSW Canberra, School of Business staff needed some convincing that course outline assessment statements failed to constitute an assessment policy, and project staff encountered resistance in several programs to collegial discussion of assessment across majors, disciplines and programs, aimed at linking assessment to graduate capabilities as well as individual learning outcomes. ASB staff were and remain resistant to formative assessment,



regarding summative assessment as more appropriate to their faculty generally. FASS found it difficult to get some staff to complete time-on-task assessment diaries. One sessional staff member refused on the basis that it was not in her job description and she was not being remunerated for it. When it came time to implement their online assessment tool, ‘some staff simply entered their existing assessment suites and tried to manipulate the modifier to get the tool’s “green light”, rather than using the tool to inform and improve their assessment decisions’.

Each faculty had its own ways of dealing with staff resistance. Law operated pre-emptively: to prevent staff regarding its program mapping exercise as ‘a negative compliance process’, it had project staff meet one-on-one with course convenors to guide them in translating their vision for their course into learning outcomes to which assessment could be aligned. Similarly, in improving the communication to students about the nature and expectations of assessment tasks, it standardised forms of assessment and provided generic rubrics, but left it to staff ‘to demonstrate the different emphasis given to the criteria in different forms of an overall type of assessment’, being concerned to resolve the issue ‘in a way that respects the professional ability of teachers to develop their own adapted forms of assessment’.

Ongoing open dialogue about the process was the solution to the resentment felt by Science staff at the existence of the project itself. The project, according to some in the Science Faculty, was based on an erroneous belief that ‘practice in assessment [within the faculty] was somehow wrong and needed to be fixed’ and therefore it should be resisted.



Other factors influencing faculties’ efforts to implement change in assessment practices

A lack of technology and IT-related infrastructure to support faculties’ efforts to renovate their assessment practices was identified as a major challenge by some faculties. Engineering, for example, would have liked access to an electronic data collection and management tool to support curriculum and assessment mapping, but no central system was available. The ASB, COFA and FBE would have liked a wider variety of assessment tools to be integrated with the university’s learning management system but had to be content with stand-alone applications that require high levels of faculty expenditure to support and maintain over time. ASB staff wanted wider access to lecture capture technologies and were concerned that UNSW did not have the underlying physical and IT infrastructure deployed widely enough to meet their needs. Many faculties that had invested heavily during the previous five years in the development of adaptive or smart tutorials were concerned that the platform upon which these had been developed was not part of the university’s enterprise IT platform and therefore could not be relied upon to be available to support their future formative assessment needs, despite its excellent performance during the period of its development.

A number of faculty Assessment Project leaders also experienced difficulty obtaining the level of expert advice and support that they required from the central Learning and Teaching Unit (LTU). This was particularly the case during the initiation phase of the project, but it also had implications during implementation, due to the loss of confidence among staff in the level of support that the LTU could provide. FASS was particularly affected by this. Initially the faculty was keen to allow its assessment audit process to be entirely conducted by the LTU.
However, some schools were most unhappy with the quality and usability of the reports that resulted. Only part of this dissatisfaction can be attributed to the schools’ resentment of outsiders making judgments about the nature and value of



their assessment practices. Indeed, the experience was instrumental in clarifying that the LTU, at that point, did not have the expertise to fulfil the project imperatives satisfactorily. For several faculties, most notably FASS, the initial audit phase of the project did significant harm to the LTU’s credibility. According to the faculty’s ADE, the reports of these audits ‘resulted in an alienation from the project for many academic staff that was very difficult to correct’. Science also noted that the reporting process gave rise to staff disquiet. However, in the course of ‘correcting’ the reports with the project team, staff came to value the opportunity that the process had afforded to reflect more deeply on their assessment practices.

Ongoing assessment activity

Despite these difficulties, the influence of the Assessment Project is widely acknowledged to be ongoing, with all faculties indicating by their actions that they have been motivated to pursue project-related goals beyond the project end date. For example, Medicine is committed to further developing standards-based assessment, intends to extend the assessment practices it initiated in the three focus postgraduate programs to other postgraduate programs, and has scheduled workshops in 2013 on designing assessment to minimise plagiarism. UNSW Canberra continues to be interested in the possibilities for technologically assisted assessment, and Engineering is determined to continue working on rubrics and standards-based marking and to embed the improvements so that future staff changes cannot erase them. The FBE hopes for ‘more innovation … in course delivery and the building-in of a quality assurance loop’. As the ADE in the FBE observed:

As a result of the UNSW Assessment Project, the [F]BE is still offering action research grants to its staff and … conducting research into its own assessment practices, [in its quest]



for innovative assessment solutions to [F]BE’s design-based programs. The [grant] and its outputs has increased interest in assessment as an area of … research, added modestly to the [F] BE’s research quantum and should provide a cycle of innovation in assessment practice.

In the ASB, ‘staff continue to develop creative assessment tasks as a component of curriculum mapping for [assurance of learning]’ and considerable effort has been made to build on their Assessment Project initiatives. This includes:

• embedding AOL data collection, analysis and interpretation (including the use of ReView software from S1, 2013) into routine assessment practice in a wider range of courses
• streamlining the articulation of program learning goals and program learning outcomes
• refining assessment rubrics
• updating and documenting the faculty’s roles, responsibilities and processes in relation to assessment
• continuing to engage and build the capabilities of staff in relation to assessment
• focusing on continuous improvement in program-level assessment.

Conclusion

Faculty responses to the Assessment Project exhibit both fundamental similarities and significant differences in approach. The similarities were most apparent in the initiation phase, where all faculties came to acknowledge some need for assessment review and reform and agreed to undertake data collection, stocktaking and reflection to determine the components of that reform, with the guidance of the central project team. Once this initiation phase was complete, faculties’ responses diverged as each formulated a response that served its unique culture and structure, met its



current needs and in several cases complemented its efforts in other program or curriculum review and renewal processes.

Having detailed the major alignments and divergences among the faculties in the actions they took in responding to the project’s challenges, and the major issues the project faced in its initiation and implementation phases in the faculties, we now step back and look at what the project delivered to UNSW as a whole.



14

Institutional outcomes of the Assessment Project Richard Henry, Stephen Marshall and Prem Ramburuth

Chapters 4 to 12 detailed the processes and the outcomes for faculties and schools during UNSW’s three-year Assessment Project. Chapter 13 provided an overview of individual faculty processes and achievements, indicated some of the many challenges faced by faculty and school level project leaders and managers and described some of the strategies that were deployed to address these challenges. This chapter summarises and reflects on the institutional level outcomes of the Assessment Project – both those outcomes that relate directly to the project’s original goal to ‘reduce academic workload by improving the efficiency and effectiveness of assessment practices’ and others of significance to UNSW.

Outcomes related to the efficiency and effectiveness of assessment

All faculties achieved tangible improvements in either the efficiency or effectiveness of their assessment practices. In terms of efficiency, the staff and student workload in relation to assessment has been significantly eased as a result of the project. Many improvements came about as a result of the reconfiguring (or indeed the drafting



for the first time) of whole-of-program assessment profiles, or by the introduction of standardising practices such as the use of assessment rubrics. Others were achieved through reductions in the numbers of assessment tasks. But perhaps the most dramatic efficiencies were gained through the deployment of technology to support various aspects of assessment, including submission, marking and the provision of feedback.

Student self- and peer assessment are extremely valuable in nurturing the development of self-regulated learners; however, they had not been used widely at UNSW because of the time that it took staff to monitor and moderate the grades that students awarded. The development and deployment of functionality to support this activity in Moodle, the university’s learning management system, not only made widespread use of this type of assessment and learning activity possible but also resulted in additional efficiencies in terms of staff time.

Faculties also achieved tangible improvements in the effectiveness of their assessment practices. Their desire to bring about these improvements often emerged from their reflection on the data about their current assessment practices in light of the advice found in the scholarly literature on effective assessment. That there were systematic ways to arrive at qualitative judgments as well as quantitative marks when assessing students’ work was a revelation to some academics, and the relief of being involved in a faculty-wide conversation about assessment instead of isolated in a private assessment dialogue with one’s students was almost palpable in some responses to the project’s activities. It is clear that this move from assessment as a private matter to assessment as a public activity made some staff quite uncomfortable, but the benefits to both staff and students were tangible and are discussed in the following pages.



Reductions in the quantity and cost of formal written examinations

The most obvious and immediately recognised efficiency gain resulting from the Assessment Project was the overall reduction in the volume of formal examinations included in assessment suites across the university. The UNSW approach to examinations has been that the cost of scheduling and running end-of-semester exams is borne centrally, but faculties need to organise and pay for mid-semester examinations.

An early outcome of the assessment audits undertaken at the outset of the project was the revelation that faculties were spending significant and possibly excessive amounts of time conducting formal examinations. This was despite the fact that the Academic Board had for several years questioned the suitability of lengthy examinations for assessing many learning outcomes. Indeed, the board had previously resolved that, when it was pedagogically appropriate, exams should be no more than two hours long rather than the three hours that had traditionally been the practice in many parts of the university. In S2, 2007, two-thirds of the 539 end-of-semester written examinations were longer than two hours (5 per cent were 2.5 hours long and 61 per cent were 3 hours long). From 2009 onwards, less than one-third of examinations were more than two hours long. Anecdotal evidence had indicated that many academics equated academic rigour with extensive examination.

The Assessment Project, while it had no impact on the total number of examinations being conducted (1100 in 2007, 1124 in 2012) and could not claim to have accelerated the move from three- to two-hour exams, did address the 7–8 per cent of assessments that were one, 1.5 or 2.5 hours long, which had made scheduling and running examinations costly and very complicated. In 2011, as part of the Assessment Project, the Deputy Vice-Chancellor (Academic) (DVCA) proposed to the Academic Board that central support be withdrawn for exams that were



neither two nor three hours long. These could still be conducted and would be included in the exam timetable, but individual faculties would be required to pay for venues, invigilation and other administrative costs. As a result of the board’s acceptance of the proposal, these exams disappeared completely and all faculty end-of-semester written exams became either two or three hours long.

The positive effects of this change on administration were dramatic. The time spent by the examinations team developing the draft exam timetable went from 4.5 weeks to 2.5 weeks. Finalising the draft timetable now takes a couple of days rather than a couple of weeks. Using exam space effectively and producing a student-friendly timetable within the designated scheduling timeframe have both become much easier.

The student experience of exams was also improved. UNSW, being limited in the number of suitable large venues it has at its disposal, has generally had to hire external venues for examinations at significant cost (up to $1 million per annum). One means of effectively using exam space had been to hold exams of mixed duration at the same time, in the same venue. Students who were sitting a three-hour exam in the same venue as, say, students sitting a 1.5-hour exam would be disturbed at the mid-point by supervisors making multiple announcements 10 minutes before the end of the 1.5-hour exam, by the collection of papers at the end of the 1.5-hour exam and by students leaving. Students had complained often about mixed-duration arrangements, which have now been eliminated from the exam timetable. Their elimination, as well as the reduced disruption of longer exams, has led to a reduction in the number of supervisors required. Quality control has also improved, with tasks such as identity checks, name slip checks and checking paperwork now being easier for supervisors to carry out.

The Assessment Project also addressed the matter of mid-semester examinations.
These exams were managed by schools and faculties; they were not centrally overseen, nor had any data been collected about their exact numbers. However, students had



complained about the burden of such assessments in many areas, especially in the Australian School of Business. Consequently, the project team worked with the ADE in the ASB to review the volume of assessment in this area and the rationale behind it. The initial messages that project staff received from the faculty were clear: it was totally inappropriate for the university to interfere in academic matters; the mid-semester exams were necessary to test student learning; and feedback from them was instrumental in helping students perform better in the final exams.

Project staff then drew the faculty’s attention to course evaluation data gathered from students, which revealed that for some courses involving mid-semester examinations students had received neither marks nor any other feedback until after they had completed their end-of-semester exams. This was sufficient to engage the faculty in meaningful, significant and productive discussion about the quality and quantity of assessment. As a result, the number of mid-semester exams decreased significantly and the overall quality of students’ experience of assessment improved.

This reduction in the number and length of examinations was repeated in several faculties. As a result, the project has produced significant efficiencies through associated reductions in venue hire and invigilation costs. In the ASB alone, this represented a saving of $30,000 per annum in venue hire for each very large group exam.

Increased variety of assessment tasks

A second institution-wide outcome of the Assessment Project related to the quality or effectiveness of assessment was a general trend towards increasing the variety of assessment tasks employed throughout a program of studies. The initial audits of assessment practice in each faculty revealed that the assessment tasks being set in some programs varied little from first year to final year, despite the fact that the learning outcomes articulated for introductory courses were very different from those articulated for later-year courses.



In the Faculty of Science, for example, the project’s assessment consultant observed that in two schools, ‘there is no clear differentiation in the complexity and challenge of assessments across the three levels [of their programs] … The same forms of assessment occur over the [programs’] first three years’. In two other schools it was observed that ‘there is some differentiation in the complexity and challenge in assessment across the four levels within the schools’ [programs]’ but ‘this development is not gradual … [being most] obvious at the third level’. These observations were in contrast to those for a fifth school, for which the report was very positive:

There is a clear and marked difference in the complexity and challenge in assessment across the five levels within the school[’s programs]. Generally assessment progresses from being dominated by examinations in the earlier years to clinical and practical work in the last two years. The complexity of the examinations appears to increase over the years, as do the assessment tasks overall.

As a result of the Assessment Project, efforts were made in every faculty to broaden the nature and range of assessment tasks within their programs, as well as to ensure that the demands and complexity of these tasks increased progressively throughout their programs from year one. A general increase in the variety of assessment tasks employed, including greater use of problem-based learning, teamwork exercises, authentic assessment tasks and reflective work, was observed in many faculties.

Improvements in the quality of examinations

As reported earlier, the excessive and inappropriate use of mid-semester examinations was a particular issue in multiple schools and faculties. The following comments from the audit report of one postgraduate program were typical:



Constructed well, mid-term examinations provide students with motivation to engage in study and valuable insights into their progress. The outcomes of a mid-term examination should indicate to students what modifications are needed to improve their progress. Without this added element they are an expensive strategy to merely produce a grade. There are other less costly means for students and teachers to gain this diagnostic information. The same can be accomplished by using in-class or self-grading online tests.

As a result of the work done in faculties and schools to review their mid-semester examination practices, not only did the university benefit from the significant reduction in the overall number and cost of such examinations, but the quality of those remaining and the value of the feedback students received were both significantly improved.

The quality of final examinations also often came in for criticism from the external assessment consultant. The following quotes clearly point to the nature of the problems that were observed:

Final examinations are one of the university’s quality assurance tools. When they are well constructed they can provide summative evidence of the extent of students’ learning and their ability to analyse, apply and critique. Some exams merely assess low-order thinking and recall and do not provide assurance of students having developed higher-order thinking and other desirable graduate attributes. They also do not offer students the opportunity to respond to and act on the results.

The weighting on examination assessment makes them high-stakes activities for students and thus every attempt must be made to ensure that they are soundly constructed and assess more than knowledge transmission and recall. It is also important to ensure that they are accessible to students for whom

320 PART III

ImprovingAssessmentText2Proof.indd 320

11/11/13 3:36 PM

English is their second language. Considerable development has occurred in designing examinations that assess high order learning and that eliminate chance contributing to student success. Because of the high stakes nature of final examinations they should involve a sound moderation process and the outcomes should be representative of a student’s progress throughout the course.

In response to these observations, considerable effort was made in a number of faculties to moderate final examinations effectively and to ensure that they appropriately assessed course learning outcomes, including higher order thinking and graduate capabilities. As a result of these efforts, the quality of examinations was significantly improved.

Improved assessment of class or tutorial participation

A third area where many of the observations made in faculty assessment audits overlapped was the assessment and grading of class or tutorial participation. Over time, in a number of faculties, staff had developed the practice of allocating marks to students’ class or tutorial participation based solely upon their attendance rather than on their learning from, and/or contributions to, the learning activities undertaken in these contexts. While staff generally have good reasons for assessing participation, these are often tacit and opaque to students, and students ‘feel coerced by such requirements’. As UNSW’s external assessment consultant observed in one faculty’s audit report:

Assessing student participation in tutorials provides an opportunity to signal to students the importance placed on being prepared for the tutorial sessions and of engaging in class discussion. Tutorial activities should cause students to use and act on the information they have learnt in their preparation, assisting durable, deep learning to occur. However, students often lack understanding of the tacit reasons for assessment of participation and thus feel coerced by such requirements.

Various aspects of participation related to graduate capabilities and/or professional skills are often included in the assessment of class or tutorial participation. However, a number of issues identified in assessment audit reports indicated that, where this was occurring, it was not always transparent to students what was being assessed, or how. As the external assessment consultant observed:

Assessing participation in tutorials should be reserved for clearly articulated and relevant purposes, for example, the development of specific communication and negotiation skills … [or the] develop[ment] of students’ beginning professional capabilities. However, for this to be conducted fairly … course outlines need to articulate precisely just what these professional skills are … [and] students … need to know these would be observed and assessed.

When these intentions are made transparent, students ‘find greater relevance in engaging in tutorial sessions and feel less coerced’. Deliberate efforts in a number of faculties to address the transparency of what was actually being assessed when class or tutorial participation was included in an assessment program not only resulted in improved student satisfaction with such activities but also enabled students to better plan their approach and contributions in these particular learning environments.

Better targeted assessment tasks

One of the achievements of the Assessment Project was an increased awareness of the need for more targeted forms of assessment tasks. For many staff, the Assessment Project represented the first time they had been introduced to the principles of ‘constructive alignment’ and the requirement that assessment tasks, and the criteria and standards used to assess students’ achievement, should all align with the learning outcomes articulated for the course or program. Consequently, in many audits it was found that there was little relationship between the learning outcomes articulated for a program or course and the tasks intended to assess them. By the end of the project, faculties and schools had made significant efforts to address this issue. These included articulating in course outlines:

• the learning outcomes associated with the course
• which of these learning outcomes (including graduate capabilities) would be assessed by each assessment task included in the course’s assessment program
• the criteria and standards that would be used to judge achievement and/or performance of these learning outcomes

and the development and deployment of a wider range of authentic assessment tasks.

The lack of variety and progression in assessment tasks throughout some programs meant that UNSW was not effectively assessing program-level learning outcomes; the focus was generally very much on course-level outcomes. Faculties began to address this issue by introducing more authentic assessments at course level that focused on the assessment of both discipline-specific learning outcomes and graduate capabilities. However, it became clear that while these efforts had indeed moved the university forward in relation to the assessment of program-level learning outcomes, they were insufficient to fully address the problem. Consequently, in some faculties, considerable effort was made to investigate other ways of doing so. The two most commonly investigated methods were the introduction of capstone courses into programs and/or the use of student portfolios, in which students accumulate a body of evidence from the individual courses that they study as part of their program that demonstrates their achievement of the desired learning outcomes for the program as a whole, including the university’s desired graduate capabilities.

Increased student participation in the assessment process

Audits of past and current assessment practices also indicated that students were often the subjects of assessment rather than participants in the assessment process. It was the teacher’s agency that was responsible for the assessment of students’ work and for the provision of feedback, rather than students being active in assessing their own work and in determining the ways in which it needed improvement to meet the standards required. The faculties of Science and Engineering in particular were concerned that students play a more active part in assessing their own work. They sought to improve the effectiveness of assessment by teaching students what to notice in relation to the deficiencies and strengths of their own work and the work of others. At the same time, self and peer assessment would reduce the burden on staff to provide individual formative feedback to students, thus also affording efficiency gains.

As reported earlier, these faculties chose to effect this change by collaboratively developing extra features to incorporate into the Workshop tool that already existed within the Moodle learning management system (LMS). These integrations, coupled with the reviewer-assignment comma-separated value (CSV) upload option and a plug-in giving Moodle course participants access to the WebPA system already in use in undergraduate courses, substantially increased the range of online support within Moodle for self and peer assessment, and for group-related activities. Interest in getting students more engaged in the assessment of their own and others’ work was also evident in the ASB, COFA and FBE where, as reported in Chapter 13, attention was directed to the use of two other technologies to support this activity: ReView and GradeMark. Collectively, the efforts of these faculties significantly raised the university’s awareness of the benefits for both staff and students of greater levels of student participation in assessment processes.
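The peer-assessment calculation these tools support can be sketched in outline. The function below is a simplified, hypothetical rendering of a WebPA-style calculation; the actual WebPA system and the Moodle plug-in described above add weighting options and handling for group members who do not submit ratings:

```python
# Simplified, hypothetical sketch of a WebPA-style peer-assessment
# calculation (illustrative only -- not the actual WebPA or Moodle code).

def webpa_factors(ratings):
    """ratings[rater][ratee] -> score awarded by rater to ratee.

    Each rater's scores are normalised to sum to 1, and each member's
    factor is the total fraction they receive across all raters; an
    'average' contributor therefore ends up with a factor of 1.0."""
    factors = {member: 0.0 for member in ratings}
    for awarded in ratings.values():
        total = sum(awarded.values())
        for ratee, score in awarded.items():
            factors[ratee] += score / total
    return factors

def individual_marks(group_mark, ratings):
    """Scale the common group mark by each member's peer-assessment factor."""
    return {member: round(group_mark * factor, 1)
            for member, factor in webpa_factors(ratings).items()}

# Three members rate everyone in the group, themselves included.
ratings = {
    "ana": {"ana": 4, "ben": 4, "cai": 2},
    "ben": {"ana": 5, "ben": 3, "cai": 2},
    "cai": {"ana": 4, "ben": 4, "cai": 2},
}
print(individual_marks(60, ratings))  # {'ana': 78.0, 'ben': 66.0, 'cai': 36.0}
```

In practice such systems also cap individual marks at 100 and let staff tune how strongly the peer-assessment factor scales the group mark; those refinements are omitted here for clarity.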

Marking and feedback

Marking and the provision of feedback to students were also identified in assessment audit reports as in need of improvement. A number of schools and faculties chose to address these with the twin aims of improving feedback quality, to enhance the effectiveness of the assessment process for students, and of finding more efficient ways to provide feedback, so as to reduce the workload for staff. The identification of ReView, GradeMark and the Moodle LMS as tools that might be used to mark students’ work and provide better feedback is worthy of comment.

ReView is a web-based system, developed at the University of Technology Sydney, which allows automated marking of criteria-based assessments and the provision of feedback to students. It was first tested at UNSW by a number of academics in the ASB. Students were delighted with the clarity of what was expected of them and with the quality and quantity of feedback they received. Staff were equally enthusiastic: they recognised that the quality of the feedback they produced was much improved, and were happy with the decrease of 25 to 30 per cent in marking time that ReView delivered. Project staff discussed the ASB’s experience with all the ADEs and as a result ReView was implemented in COFA, with similar gains in both quality and efficiency.

At the time, UNSW was starting to migrate from the Blackboard LMS to Moodle, with the faculties of Engineering and Science, as well as UNSW Canberra, converting first. Because ReView was compatible with Blackboard, but not with Moodle, staff in Science and Engineering took up the Moodle-compatible GradeMark, an online assessment and feedback system linked to the Turnitin tool already used at UNSW for its originality-checking functionality. Staff from the two faculties further developed the tool in Moodle, with an excellent outcome. The upgraded marking and feedback tool developed at UNSW Kensington appeared to be much more flexible than ReView, and staff have moved to this system.

These responses to the marking and feedback challenges identified in audit reports have provided opportunities for students to be active in the assessment of their own work and therefore to develop a better understanding of the indicators and standards of quality in that work. The adoption of online tools, particularly where rubrics, marking guides and automated feedback are involved, has provided a foundation for new tutors to develop more efficiently their understanding of the indicators and standards of quality set for particular courses, and of how these indicators should be applied to the assessment of students’ work. Lecturers-in-charge have benefitted from the reduction in time spent training tutors in marking regimens, moderating the marks allocated by different tutors and providing feedback on the specific learning outcomes examined by a particular assessment task, as the technology offers opportunities for collaborative moderation that would otherwise be very cumbersome to administer.

Perhaps the strongest proof of the efficacy of online administration of assessment is the successful and widespread uptake throughout the Faculty of Engineering of the Thesis Assessment Moodle course, which was developed in response to an urgent need to revise the administration of the Fourth-Year Thesis in all schools. This development was in fact an unforeseen outcome, beginning as a personal project by a member of the working party on the Fourth-Year Thesis; the Assessment Project’s funding flexibility made it possible to progress it as a core initiative of the faculty’s involvement in the project.
While each of these initiatives proved effective in improving the quality of marking and feedback in each of the contexts in which they were undertaken, collectively they have significantly improved the quality of marking and feedback throughout the institution.
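The criterion-based marking with canned feedback that these tools support can be sketched in outline. The rubric, weights and feedback strings below are hypothetical and are not drawn from ReView or GradeMark themselves:

```python
# Hypothetical sketch of rubric-based marking with automated feedback,
# in the spirit of (but not reproducing) tools such as ReView and GradeMark.

MAX_LEVEL = 3

RUBRIC = {
    # criterion: (weight, {level awarded: canned feedback})
    "argument":   (0.5, {3: "Clear, well-supported argument.",
                         2: "Argument present but unevenly supported.",
                         1: "Argument unclear or unsupported."}),
    "evidence":   (0.3, {3: "Strong, relevant sources throughout.",
                         2: "Some relevant sources used.",
                         1: "Little supporting evidence."}),
    "expression": (0.2, {3: "Fluent and precise writing.",
                         2: "Generally clear writing.",
                         1: "Writing frequently unclear."}),
}

def mark(levels):
    """levels: criterion -> level awarded (1..MAX_LEVEL).

    Returns a percentage mark plus the feedback attached to the level
    achieved on each criterion, so students see why they received
    the mark they did."""
    total = sum(weight * levels[criterion] / MAX_LEVEL
                for criterion, (weight, _) in RUBRIC.items())
    feedback = {criterion: comments[levels[criterion]]
                for criterion, (_, comments) in RUBRIC.items()}
    return round(100 * total), feedback

score, comments = mark({"argument": 3, "evidence": 2, "expression": 2})
print(score)  # 83
```

Because every mark is tied to a pre-written comment per criterion and level, feedback volume scales with no extra marking effort, and different tutors applying the same rubric converge on the same marks, which is what makes the moderation gains described above possible.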


Appreciation of standards-based assessment

The increasing use of the new online marking and feedback tools highlighted for many in the university community the value of a standards-based approach to assessment. However, it was evident from the assessment audits undertaken early in the project that many courses failed to define clearly what standards were expected of students in assessment tasks. Little guidance was given to students as to what they would need to demonstrate in order to achieve a pass, credit, distinction or high distinction, making it difficult for students to plan or monitor their learning. The group consisting of the ADEs, the Director of Learning and Teaching at UNSW and the DVCA took up this issue with the faculties and spirited discussion ensued. Some faculties agreed emphatically that standards-based assessment needed to be articulated in the university’s assessment policy and procedures; some considered it a ‘nice-to-have’ component only; others rejected the concept outright. The most difficult conversations were with the faculty that believed it was already utilising a standards-based approach to assessment when UNSW’s external assessment consultant’s audit report did not agree. Three recommendations in her report pointed to the problem. First:

That courses that are graded Satisfactory/Non-satisfactory be reviewed to ensure that it is clear what constitutes a pass or a fail and that students are aware of what is expected of them.

Second:

That a bi-annual review of examinations be considered to ensure that a selected sample of examinations addresses the range of graduate attributes and learning outcomes, maintaining the acceptable standards of achievement expected by the school.

And third, that ‘the use of the bell curve [to adjust raw marks] be reviewed’.

After considerable negotiation, agreement was reached on the need for standards-based assessment across UNSW. In April 2012, the Vice-Chancellor authorised a new Assessment policy, which had been endorsed by the Academic Board. This policy provided the following definition of standards-based assessment:

Standards-based assessment depends on a set of pre-defined statements outlining different levels or standards of achievement in a program, course, or assessment component, normally expressed in terms of the stated assessment criteria. Standards-based assessment involves the awarding of grades to students to reflect the level of performance (or standard) that they have achieved relative to the pre-defined standards. Students’ grades, therefore, are not determined in relation to the performance of others, nor to a pre-determined distribution of grades.

Furthermore, the policy on grading stipulated that:

The grading of students’ performance on assessments will conform to a standards-based approach comprising explicit articulation of what students are required to learn (learning outcomes), what criteria will be used as indicators of their learning, and what the standards are for assessment at different levels of achievement. Grading procedures will be fair and equitable, and will result in the production of grades and reports of students’ learning achievement at course and program levels that are valid, reliable and accurate representations of each student’s capabilities in relation to course and program learning outcomes.
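The distinction the policy draws between standards-based and norm-referenced (‘bell curve’) grading can be sketched in a few lines; the grade names and cut-offs below are illustrative only, not UNSW’s actual grade boundaries:

```python
# Illustrative sketch of standards-based grading: the grade depends only
# on pre-defined standards, never on how the rest of the cohort performed.
# The cut-offs below are hypothetical, not actual UNSW grade boundaries.

STANDARDS = [
    (85, "High Distinction"),
    (75, "Distinction"),
    (65, "Credit"),
    (50, "Pass"),
    (0,  "Fail"),
]

def grade(mark):
    """Return the grade whose pre-defined minimum standard the mark meets."""
    for minimum, name in STANDARDS:
        if mark >= minimum:
            return name

print([grade(m) for m in (92, 71, 48)])  # ['High Distinction', 'Credit', 'Fail']
```

A norm-referenced approach would instead rank the cohort and fit grades to a pre-determined distribution, which is precisely what the policy rules out.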


For some faculties this policy reinforced existing practice or reflected a process of assessment revision they were already undertaking. Despite this clear policy, however, most courses, in most faculties, cannot yet be said to employ standards-based assessment. Institutionalising this change to assessment practice will require ongoing work that is likely to take several years and a number of further cycles of review and revision of assessment practices.

Conversations about assessment

Probably the most obvious change in UNSW culture resulting from the Assessment Project is that assessment practice is now widely discussed among academics; it is no longer a private matter between individual academics and students. Most faculties made explicit efforts to encourage staff, student and industry-partner conversations about assessment. These ranged from formal meetings to discuss the audit report’s implications for the faculty – through events and workshops focusing on general and specific assessment matters, individual meetings with assessment consultants and educational design teams, and student focus groups – to conversations in learning and teaching discussion forums and informal email and face-to-face sharing of teaching experiences. Faculty-wide, and in several instances inter-faculty, assessment conversations and informal networks are continuing. As the ADE in the Faculty of the Built Environment observed, ‘full and part time staff can and are willing to discuss and debate assessment and have a legitimate platform from which to do so’. This observation could be extended to many faculties across the university.

Medicine’s response articulates rather well the elements of the cultural shift felt in their own and other faculties, listing:


assessment among academic staff ... greater appreciation of the principles of, and good practices in assessment ... evaluate and further enhance assessment practices.

One of the best examples of a faculty’s assessment culture being opened up to assessment conversations comes from UNSW Canberra. In 2012 and 2013, learning and teaching days were attended by more than half of the UNSW Canberra academic staff. The 2012 day was instrumental in stimulating discussion and reflection on assessment. As well as hearing presentations (later made available on the Learning and Teaching Group website), attendees had the opportunity to meet the new Learning and Teaching Group. Staff recognised that these days had helped to make teaching and assessment a legitimate and important topic for collegial discussion. A program of workshops has since been set up to increase staff awareness of the range of possible assessment tasks and the resources available to support innovation and improvement in assessment.

The Australian School of Business also credits the project with ‘a greater sharing of practice among staff and across disciplines’, noting that ‘these initiatives strengthened staff confidence to experiment with innovative and creative approaches to assessment’. The prioritisation of both teaching and research, as opposed to the previous prioritisation of research alone, has been a marked feature of the change in culture that has taken hold in many faculties. At the same time, it has become clearer that teaching need not be a solitary activity, but can take place within a community of learning, with the active support of learning and teaching committees, the provision of a range of resources and the establishment of venues within which greater sharing of practice can take place among academics. It is clear that not only in the ASB have assessment conversations ‘helped to create a culture of commitment rather than compliance in assessment policy and practice’.

Staff have also taken the assessment conversation beyond UNSW. For example, the ASB’s School of Actuarial Studies successfully negotiated with the professional association, the Institute of Actuaries, to reduce the 70 per cent exam requirement in favour of progressive assessment, to increase the focus on the development of broader graduate capabilities. The improvements to the Moodle Workshop tool by the faculties of Engineering and Science, adding group work functionality and calibrated peer review, have been the subject of a number of national and international conference presentations and, as a result, are now to be incorporated into the Moodle core. As the ADE in Science observed:

The package [the Moodle Workshop tool development] is also being used as the basis for conference publications, workshop publications, [and] scholarship of teaching and learning publications. It turned out that the package had lots of synergies with other people using Moodle in other faculties and disciplines globally … It is actually giving UNSW quite a strong Moodle visibility across the sector.

More extensive use of technology

One of the key benefits of the Assessment Project was its role in alerting staff to the contributions technology can make to improving both the quality and the efficiency of assessment practice. The use of technology to improve efficiency took many staff by surprise. They had not realised, for example, the possibilities an LMS or an online marking tool offered for the automation of assessment processes. They did not know that multiple-choice questions could be designed not only to test knowledge retention but also to assess higher-order reasoning. The idea of setting up a multiple-choice quiz online with automated feedback, so that the entire assessment could be conducted without their further intervention, was entirely new to many. The Faculty of Medicine did just this, replacing the less reliable viva component of their undergraduate integrated clinical examination with a new online multiple-choice examination, and reported significantly improved efficiency with this component of assessment.

The project was also instrumental in:

• encouraging staff at all levels to explore the role technology can play in the creation of particular types of assessment tasks (several faculties, for example, moved on from antiquated and inflexible bespoke online tools to take up Moodle with its increasing array of assessment plug-ins)
• appreciating how technology could be used to administer those tasks in an effective and efficient way (the widespread uptake of ReView and GradeMark reflects this appreciation)
• encouraging the use of technology to engage students in the review and assessment of their own and their peers’ work (Science and Engineering’s integration of calibrated peer review into the Moodle Workshop tool is the notable example of this)
• prompting academics to provide students with an increased volume of specific feedback with regard to their learning outcomes, in an automated way, without adding enormously to the staff workload (rubric development and implementation through online tools in many faculties demonstrates this, as does the use of MCQs with automated feedback)
• increasing staff awareness of how technology could be used in the moderation of assessment design processes, of student- or peer-reviewed assessments and of examiners’ assessments (such as the use of iPads in observable assessments in Medicine and the enthusiastic response in some faculties to the provision of targeted workshops on technology in assessment)
• suggesting technologically driven ways in which to streamline the calculation of final marks and grades (e.g. the use of the Gradebook in Moodle).
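As a small illustration of the automated-feedback multiple-choice questions mentioned above, Moodle quizzes can be authored in its plain-text GIFT import format, in which per-answer feedback follows a ‘#’ marker and is delivered to the student automatically. The question and feedback below are invented for illustration and are not drawn from the project:

```
// A multiple-choice question in Moodle's GIFT import format;
// the text after each '#' is shown automatically as feedback.
::Moderation Q1:: What is the main purpose of moderating final examinations? {
=Assuring that tasks assess the intended outcomes at an appropriate standard #Correct - this is the quality assurance role moderation plays.
~Ranking students against one another #Not quite - moderation checks standards, not relative rankings.
~Reducing marking time #No - moderation can add work, but it improves validity.
}
```

Questions written this way can be imported in bulk into a Moodle question bank, so an entire quiz, including its feedback, runs without further staff intervention.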


It is not as if technology was not already being widely used for assessment at the university. In the last decade, UNSW staff have been very active in the development of online assessment tools. But, by and large, they have worked as individuals or in small groups within a course, program, school or faculty, supported by small research and development grants, and mostly the benefits of their work have been felt only at that local level rather than throughout the university. The Assessment Project was instrumental in developing inter-faculty awareness of these developments, prompting broader appreciation of their benefits and wider uptake of the tools concerned.

Technological guidance

In response to a need throughout the university for guidance in the selection of learning and teaching (and particularly assessment) technologies, the LTU, in parallel with the Assessment Project, developed and implemented a Research Evaluation and Development Framework (the RED Framework) and a Research Evaluation and Development Platform (the RED Platform) for technology-enabled learning and teaching. The RED Framework was designed to provide all university staff with advice, guidance, tools and support in developing and assessing the potential of new learning and teaching technologies. The RED Platform provided a limited IT platform on which staff could develop and trial new applications and systems. Together, the RED Framework and Platform allow the university to systematically review and share information about what new educational technologies can do. Importantly for the Assessment Project, they allowed faculties to review and compare notes on the many different assessment tools they were considering, with a view to making their assessment practices more efficient and effective.


Assessment resources for teachers

Early in the project it became evident that the university community had very little access to information on efficient and effective assessment practice. There was a great quantity of scholarly literature, but time-pressed academics could not efficiently glean from it the salient or critical issues that would help their decision making and practice in relation to assessment. A major focus of the work of the LTU was therefore to develop a wide range of just-in-time resources that academics could use to support their practice. These resources were principally made available on a revised Teaching Gateway website, central to which were the web pages collectively defined as the Assessment Toolkit. These pages contained general information on the nature and process of assessment, discussions of particular assessment approaches, and examples of tasks aligned to each of these approaches. They also addressed specific issues in relation to assessment – such as the design of effective assessment tasks, authentic assessment, the development of appropriate indicators and standards for assessment, the achievement of specified learning outcomes, and mechanisms for the provision of specific feedback to students – each with case studies of how these issues have been dealt with in different contexts. As a result of the project, UNSW as an institution can claim to offer staff better resources to support their assessment practice than were previously available.

Improved central leadership of learning and teaching

The Assessment Project was also the catalyst for significant organisational change aimed at offering staff a better Learning and Teaching Unit. The demand the project created for expert knowledge, leadership and advice on assessment practices revealed a major lack of such expertise within the LTU. At the outset of the Assessment Project, the LTU did not have the capacity or capability to provide the level of leadership and scholarly input into the work that faculties needed and wanted to undertake in relation to assessment. This was unsurprising, given that the LTU had been created through the amalgamation of a relatively large educational technology support unit and a very small academic development unit, both largely comprised of staff with interests and expertise in other aspects of learning and teaching.

A major outcome of the project was that the university committed itself to restructuring, refocusing and strengthening the capacity and capability of the LTU so that it could provide the level of expert leadership and support necessary for learning and teaching development in general, and improvement in curriculum, pedagogy and assessment in particular. In February 2012 the university formally announced its intention to undertake a significant process of workplace change within the LTU, specifically designed to address these issues. In May 2012, 33 of the 39 positions within the LTU were spilled, with staff offered the option of either applying for one or more of the positions in the new structure or taking voluntary redundancy. In the succeeding 12 months, this process led to a ‘rebooting’ of the LTU with significantly greater numbers of staff with the knowledge, skills, experience and commitment needed to lead and support the enhancement of all aspects of learning and teaching, including assessment.

Dedicated faculty learning and teaching staff

In a number of faculties and schools, a growing recognition of the need for ongoing local leadership and management of learning and teaching development was born of staff efforts in the Assessment Project. The requirements of the project and the presence of the external assessment consultant and central project staff alerted faculties to the benefits of ongoing access to learning and teaching teams, fellows and advisers. Some reported a resultant reinvigoration of existing learning and teaching committees; others established committees or positions to address their lack of such personnel. For example, a number of faculties have now included a Director of Learning and Teaching within their continuing staffing arrangements to work with the ADE in support of their faculty’s learning and teaching enhancement agenda. UNSW Canberra, recognising that it had no suitable forum for the exchange of ideas and information among school leaders on learning and teaching issues, has established a Learning and Teaching Advisory Committee. It has also restructured the existing role of Coordinator, Learning and Teaching Development (CLTD) to function in a truly cross-disciplinary manner.

The wider impact of the Assessment Project

Each faculty within the university responded slightly differently to the challenges presented by the project, and the range of individuals involved in project activities in each faculty varied enormously. As a result, the effect on teaching staff throughout the institution has been variable and dependent on the faculty to which they belong. In many faculties, direct involvement with the project was limited to those individuals involved in the schools or programs the faculty strategically chose as the sites for assessment review and revision. Faculty staff beyond these schools, programs or courses may have had little engagement with the project; how much depended on the extent to which faculties communicated and disseminated the project’s intent, processes and outcomes, and the extent to which they chose to institutionalise any changes or improvements made to assessment practice in their overall policies, procedures and practices.

For example, in the ASB, the ADE made every effort to ensure that all the Heads of Schools within the faculty were briefed and kept aware of the Assessment Project, informed that they were all expected to respond in particular ways, and made aware of the project’s progress in various parts of the faculty. As a result, both staff awareness of and engagement in project activities were comparatively high. By contrast, in the Faculty of Law a significant proportion of the assessment review and revision work was undertaken by the ADE and a small team of staff. It was more efficient, they said, to have dedicated project staff than to second existing faculty staff to project work. Unfortunately, this approach is reported to have led to ‘a siloing effect of much of the knowledge gained’, and with the loss of those staff at the end of the project, all the faculty will be left with are the team’s reports and updates. They suggest that ‘establishing a steering group of academic staff who could have reviewed progress regularly might have been a way of ensuring a broader and deeper awareness of the project’s work’.

The long-term impact of the Assessment Project The influence of the Assessment Project is widely acknowledged to be ongoing, with all faculties indicating by their actions that they are motivated to pursue project-related goals beyond the project. As the project concluded, faculties were already articulating and establishing agendas for continuing improvement of various aspects of assessment, indicating the shift in focus and cultural change that has taken place. While many of the Assessment Project’s achievements in terms of improvements to the effectiveness and efficiency of assessment are readily evident, even at the conclusion of the project’s three years, in terms of innovation in assessment practice, it may take more time for the project’s effects to be clear. New programs and courses require time to prove themselves, often needing adjustment to ensure that they are achieving the envisaged goals, and cultural change is also a complex and lengthy process.

Institutional outcomes of the Assessment Project 337

ImprovingAssessmentText2Proof.indd 337

11/11/13 3:36 PM

Conclusion As a result of the Assessment Project, much has been done to review and revise assessment practices throughout the university as a whole. There is widespread variation in what was done, how much action was taken and what was achieved. Despite the variation, however, there is evidence of both improved efficiency and improved effectiveness or quality of assessment practice in various parts of the university. Some faculties have managed to achieve gains in both efficiency and effectiveness. But in many cases the benefits have been restricted to particular programs or schools; the process of implementing whole-of-institution change is by no means complete. It can be said at this stage, though, that most faculties and schools have initiated substantial positive changes in assessment practice, their wider learning and teaching practice and their teaching culture, and look set to sustain these into the future.

338 PART III

ImprovingAssessmentText2Proof.indd 338

11/11/13 3:36 PM

15

Lessons learnt about whole-of-institution educational change Stephen Marshall, Prem Ramburuth and Richard Henry

Having summarised the Assessment Project outcomes at both faculty and institutional levels, we now turn to an even broader question: What lessons did the university learn about whole-of-institutional change from undertaking the Assessment Project? This chapter details and reflects on the major lessons we can take from this experience, speaking to each of the issues discussed in Chapter 3 ‘The UNSW approach to improving assessment’. We conclude by summarising the lessons the project delivered about the leadership and management of educational change and offering some advice to others who may be considering a whole-of-institution approach to educational change in their own institution.

Assumptions The assumptions identified at the outset of the project in relation to what a project aimed at improving the efficiency and effectiveness of assessment practices might involve were largely proven to be correct. Improving assessment in many faculties did involve: ʶʶ significant program and course redesign (indeed, redesigning programs and courses often became the strategic focus of

339

ImprovingAssessmentText2Proof.indd 339

11/11/13 3:36 PM

ʶʶ

ʶʶ

ʶʶ

ʶʶ

project activities, as many faculties were not ready to look specifically at assessment issues without reconceptualising the role of assessment tasks within the overall curriculum) a considerable amount of revision of existing assessment tasks and development of new assessment tasks and resources (in some cases this was related to a faculty’s moving away from summative mid-semester examinations to more formative assessment tasks involving improved feedback to guide student learning) the development and/or deployment of new technologies to support various aspects of assessment (including technologies to support group work, self and peer review, marking, grading and the provision of targeted feedback) the development of a new policy and procedural framework for assessment (recognition that clarity as to the criteria and standards that are used to assess their work was critical in students’ decision making as to how to plan and go about their learning, led to an institutional decision to adopt a standards-based approach to assessment and therefore the need to change the university’s assessment policies and procedures to reflect such an approach) the provision of ongoing professional development for staff (indeed, different types of professional development for different staff, depending upon their role and responsibilities in relation to the project and everyday assessment practice, was required).

It also required strong leadership at institutional, faculty/school and program/course levels.

Theories of and approaches to change For a number of reasons, outlined in Chapter 3, the university set out to adopt a whole-of-institution, multi-year, multi-focal,

340 PART III

ImprovingAssessmentText2Proof.indd 340

11/11/13 3:36 PM

evidence-based and contingent approach to the improvement of efficiency and effectiveness of assessment. Each of these characteristics proved to be important to our efforts. In most cases the importance was for the reasons articulated in Chapter 3 but in some cases for reasons beyond those initially flagged. A whole-of-institution approach

The appropriateness of adopting a whole-of-institution approach was confirmed in many ways during the project. However, two that are worthy of mention relate to the competitive nature of the relationships between faculties, and the importance of strong collaboration between faculties to ensure widespread dissemination and impact of new assessment practices. The perceptions that each faculty developed in relation to what other faculties were doing and how they were doing it, were crucial in motivating faculties to engage in and pursue particular approaches to assessment. Likewise, the opportunity to collaborate with and learn from each other proved to be a powerful motivator in determining faculties’ responses to the challenges of the Assessment Project. The widespread adoption and use of ReView, GradeMark and tools within Moodle to mark students’ work, moderate grades and provide improved feedback is a clear manifestation of faculties’ competitive yet collegial approach. Without the whole-of-institution approach that led to different approaches to improving assessment practices being developed and implemented in different parts of the university at the same time, the scope and scale of the outcomes achieved would more likely have been limited to particular faculties and a smaller number of interventions. A multi-year approach

A multi-year approach also proved to be essential, not only for the reasons articulated in Chapter 3, but also to provide sufficient time for the university to resolve the full range of organisational, administrative and technological issues associated with improving

Lessons learnt about whole-of-institution educational change 341

ImprovingAssessmentText2Proof.indd 341

11/11/13 3:36 PM

the efficiency and effectiveness of assessment. As described previously, many of the strategies that faculties chose to implement involved the development and/or deployment of new technologies. However, the need to upgrade the technology infrastructure to support assessment – which involves upgrading or replacing existing technologies and their integration with existing enterprise-level academic and administrative systems (e.g. the university’s learning management systems and student and academic administration systems) – arose at a time when the university was implementing a significant upgrade to its existing student and academic administration systems. The budget, work packages and schedules of that project placed significant parameters around when the technologies to support assessment could be upgraded or replaced, and given that the cost of upgrading such systems runs to the tens of millions of dollars, the timeframes associated with the planning, approval and implementation of such upgrades, by necessity, extended beyond a single annual budget cycle. A multi-focal approach

The need for a multi-focal approach was also supported by our experience of UNSW’s Assessment Project, with most faculties’ responses to the project’s challenges involving combinations of curriculum, staff and organisational development. However, what has become even more apparent in hindsight, is that the strategies that needed to be deployed to effect change and/or development in each of these areas also require a multi-focal approach. In all cases, a range of structural, human, political and cultural issues needed to be resolved in order to realise the desired changes or developments. The development and deployment of FASS’s Online Assessment Tool clearly illustrated this need. Not only did this initiative require development and deployment of the tool and the staff development necessary to assist them to understand its purpose and how to use it in the planning of their course level assessment strategies, it also required the resolution of a range of issues concerning:

342 PART III

ImprovingAssessmentText2Proof.indd 342

11/11/13 3:36 PM

ʶʶ the structural arrangements within the faculty necessary to develop and deploy the tool ʶʶ the knowledge, skills and capabilities required by staff to develop, deploy and use the tool ʶʶ the politics of requiring staff to use the tool and to submit their proposed assessment strategies for approval ʶʶ the cultural clash between a traditional model of ‘private practice’ when it comes to the assessment of student learning and a new, open, transparent and public model of engaging in assessment of learning. An evidence-based approach

The need for an evidence-based approach to planning and implementing changes to improve the efficiency and effectiveness of assessment was clear. The availability of real, local data about current assessment practices was essential and proved to be invaluable in all faculties in encouraging staff to engage with the project. As a research-intensive institution, it was critically important that our strategies be informed by evidence, both of our current practices and of good practice as described in the scholarly literature and practiced elsewhere. The time taken to develop and deploy audit tools that would provide the evidence that faculty leaders needed to make current practice transparent was critical to the success of the project. The availability of this faculty-, school- and program-level data meant that when faculty leaders encountered resistance to change, they were able to provide direct evidence of the need for change. Often, exposing staff to these data was all that was necessary to secure their engagement. Indeed, many were surprised to find that there was a strong evidence base to support many of the changes proposed. A contingency approach

There is no doubt that the context within which improvement in assessment practice was attempted significantly affected the

Lessons learnt about whole-of-institution educational change 343

ImprovingAssessmentText2Proof.indd 343

11/11/13 3:36 PM

approach taken. Indeed, as we anticipated, faculties not only had different agendas and priorities for change in relation to assessment, they also had different capacities and capabilities for change, which largely determined the parameters within which their responses to the challenges of the Assessment Project could be developed and implemented. It was clear from very early on in the project that not all faculties were determining their response to the Assessment Project from the same starting point. Indeed, due to the requirements of professional accreditation bodies, some faculties (in particular, the Faculty of Medicine and the ASB) had a long history of reviewing and revising their assessment practices as part of regular accreditation processes. As a result, they had well-established business processes and organisational infrastructures to support such activities. The ASB, for example, has over time, developed and maintained its Educational Development Unit to assist its schools and staff with the challenges associated with reviewing and revising different aspects of programs and courses, including assessment. Unfortunately, other faculties had no such history or experience. For these faculties the process was not only novel and in some ways counter cultural, but required considerably more organisational change and development for them to be able to fully participate. Allowing faculties to define their own responses to the challenges of the Assessment Project, within the scope of those constraints, proved to be particularly important.

Policy instruments In determining the university’s approach to facilitating and supporting the Assessment Project, the Director of Learning and Teaching (DLT) and the Deputy Vice-Chancellor (Academic) (DVCA) agreed that five different policy instruments would be used: mandates, inducements, dissemination of information, capacity building strategies, and systems changing strategies. Our experience of

344 PART III

ImprovingAssessmentText2Proof.indd 344

11/11/13 3:36 PM

the project supports this decision. However, some questions arose relating to each of the instruments. Mandates

The use of mandates, in the form of institutional goals and key performance targets (KPTs) for senior and middle managers directly relating to the outcomes of the project, was essential in focusing attention on the need to engage with the project. Until the Vice-Chancellor and President included a KPT related to the Assessment Project in the annual KPTs of each of the Deans, and held follow-up conversations with each Dean as to how they were progressing with realising this KPT, relatively little faculty-level engagement had been achieved beyond the ADEs and faculty personnel who had been allocated, or had assumed, responsibilities for the Assessment Project. However, once the Deans were seen to be showing a strong interest in the project’s outcomes and other faculties’ progress towards them, Heads of Schools and faculty staff more broadly, became considerably more engaged. The ADEs, almost universally, saw the inclusion of achievement against the goals of the Assessment Project in the Deans’ KPTs as critically important in providing them and their colleagues with the authority to engage staff and to ensure change. As indicated earlier, ADEs at UNSW generally have no direct authority to deploy resources to effect change within their faculties, as this authority is vested in Heads of School. Thus, they must exercise leadership and effect change via influence on a range of key stakeholders both within and beyond their faculties. Having the implicit authority of the Vice-Chancellor and their Dean to require school staff to review and revise their assessment practices made their role easier. Inducements

The availability of strategic funding to support faculties in the work they needed to do for the Assessment Project also proved to be critical. Without it, many faculties have indicated that they

Lessons learnt about whole-of-institution educational change 345

ImprovingAssessmentText2Proof.indd 345

11/11/13 3:36 PM

would have had difficulty in deploying the levels of resources that they needed to appropriately engage in the project. In many cases these funds enabled faculties and/or schools to supplement their usual staffing arrangements with individuals with the subject matter expertise necessary to provide the leadership, guidance, and/or support to plan, develop, implement and/or evaluate their response to the Assessment Project. The funds also enabled faculties and schools to: ʶʶ develop or secure the resources and/or equipment needed to develop and deploy their response ʶʶ secure the services of external assessment experts from time to time ʶʶ develop and deliver the training and professional development opportunities required by staff to enable them to effectively engage in the development and deployment of their response. The fact that expenditure of these funds was monitored through the faculty review of learning and teaching process and that receipt of additional funding was contingent on satisfactory expenditure of the initial funds provided in support of activities associated with the Assessment Project, were key elements in the success of those inducements. Dissemination

As indicated earlier, dissemination of information regarding the nature of assessment and of the Assessment Project itself proved to be critically important. While centrally developed and delivered workshops and forums were important, faculty-based dissemination activities were even more critical to the project’s success. In some faculties (e.g. the ASB) where regular meetings, briefings, forums and other opportunities were created for staff to engage with each other, and local assessment data and resources to support the review and revision of assessment was made available, much higher levels of staff engagement in Assessment Project activities

346 PART III

ImprovingAssessmentText2Proof.indd 346

11/11/13 3:36 PM

were achieved. In the Faculty of Law, where such opportunities were limited in scope and number, the level of direct staff engagement in the faculty’s response to the Assessment Project was relatively low. The significance of effective local dissemination strategies in engaging staff in the process of effecting educational change is further highlighted by the fact that at the conclusion of the three years formally designated to the project, there are still staff in some faculties who are learning about the Assessment Project for the first time. Capacity building

As indicated in Chapter 3, a number of areas of institutional capacity development were anticipated to emerge during the Assessment Project. For example, it was clear that the university would need to further develop is governance processes for monitoring and evaluating the quality of assessment practice throughout the institution. There was also considerable evidence to suggest that the university’s IT infrastructure was inadequate to support the design, development and delivery of high quality, efficient and effective assessment. Consequently, it was no surprise when it became evident during the Assessment Project that these matters needed attention. What was a considerable surprise, however, was to learn the extent to which the university needed to develop its capacity for the leadership and management of assessment in particular, but learning and teaching in general. To date, much of the responsibility for the development of learning and teaching, including the development of assessment practices, has lain with individual academics. The Assessment Project revealed, however, that the university has too few staff with scholarly expertise in curriculum, pedagogy and assessment to provide the academic leadership required to effectively review and improve assessment practices on a regular basis at the faculty, school, program and course levels. The project also brought to light the fact that the role of the ADE is largely a conflicted one. Indeed, ADEs almost universally

Lessons learnt about whole-of-institution educational change 347

ImprovingAssessmentText2Proof.indd 347

11/11/13 3:36 PM

reported that their roles are overwhelming, encompassing in most cases responsibility for all matters related to program development and delivery; to quality assurance and improvement of learning and teaching; academic staff development as it relates to learning and teaching; as well as the management of their faculty’s organisational, administrative, IT and physical infrastructure for learning and teaching. Further, when faculties have been expected to respond to new institutional priorities, as in the case of the Assessment Project, it has generally been expected that the ADE in each faculty will lead, manage and report on their faculty’s response, despite the fact that they may already be leading and managing a major restructure of the faculty’s academic programs in response to new institutional rules for the structure of undergraduate or postgraduate programs. ADEs are also responsible for collecting evidence to prepare their faculty’s annual learning and teaching performance report. This conflation of governance, management, administrative and operational delivery responsibilities within the single role of ADE is neither effective, efficient nor sustainable. The Assessment Project also revealed that beyond the role of the ADE, few faculties or schools have clearly defined positions with specific responsibilities to support senior leaders and managers with the development, delivery and assurance of quality learning, teaching and curriculums. Furthermore, in most faculties the business and administrative processes and arrangements associated with learning and teaching development and assurance are significantly underdeveloped. The Assessment Project has not only revealed that UNSW has much work to do to develop sustainable institutional capacity for effective governance, leadership, management and support for learning and teaching developments, but also that such capacity is critical to the ongoing success of such projects. 
The project has revealed the need to ensure that position descriptions for ADEs, Heads of Schools, program coordinators, course convenors and

348 PART III

ImprovingAssessmentText2Proof.indd 348

11/11/13 3:36 PM

others, specifically articulate the roles and responsibilities expected of those who occupy such positions, in relation to educational innovation and change. One of the most significant lessons the project delivered relates to the data that are routinely kept throughout the institution about assessment practices. Most of these data are collected at the time of program development or renewal, but little is collected in a routine way that would enable monitoring of assessment practice in the university. This indicated the necessity for significant business process change and changes to the IT systems that enable program and course administration to ensure that these data are collected. The absence of such data and such systems prevents faculties from routinely reflecting on their assessment practices and it makes an independent, stand-alone assessment audit process necessary as part of such a project. Systems changing

Efforts to engage more staff in defining, monitoring, reviewing and revising assessment practice had a significant effect on the overall priority and level of staff engagement with learning and teaching issues in general and assessment issues in particular. When the role of staff in the review and critique of assessment was legitimised – that is, when their practices were made more public – not only were faculty and school program-based conversations about assessment made more transparent but they also provided a much stronger basis for critical review and development. A further significant lesson from the Assessment Project relates to the establishment of formal bodies and processes with the responsibility for trialling, reviewing and making decisions regarding investment in technologies to support assessment. The creation of these new arrangements moved decision-making responsibility as to which technologies should be invested in and used as part of assessment, from central support units such as IT and the LTU to the faculties and the broader academic community.

Lessons learnt about whole-of-institution educational change 349

ImprovingAssessmentText2Proof.indd 349

11/11/13 3:36 PM

Without the Assessment Project, individual academic staff would have continued to develop their own unique IT solutions to support their assessment practices, without the possibility that those initiatives that proved to be effective could be scaled and integrated into the university’s enterprise level learning and teaching systems and thus be made available to all UNSW staff. The project enabled a much more coordinated, evidence-based and transparent approach to IT investment decisions. Distributed capacity for leadership and management of educational change

Not only did the Assessment Project highlight the essential need for greater institutional capacity and capability for academic leadership in relation to learning and teaching in general and assessment in particular, but also it made it clear that such capability needed to be distributed widely throughout the institution. The institution’s capacity to motivate, inspire and engage staff from all faculties and schools in the activities of the Assessment Project was heavily dependent on the capacities and capabilities of local staff, academic and professional, occupying and not occupying positions of formal management responsibility in relation to learning, teaching and assessment, and those in the central service units responsible for supporting these activities. In both local (faculty and school) and central (LTU) contexts, however, the capacities and capabilities of staff were found to be insufficient.

The importance of external expert consultants This lack of the necessary expertise made clear the importance of having access to, and where necessary engaging, external expert consultants who could supplement and/or complement local expertise as required. However, it also made clear that institutions, faculties and schools cannot rely on this approach if their intention is to build a sustainable capacity for ongoing, high quality educational change and improvement. Building the capabilities of local

350 PART III

ImprovingAssessmentText2Proof.indd 350

11/11/13 3:36 PM

institutional, faculty and school staff to be effective leaders and managers of educational change is fundamental.

Developing academic leadership and management capacity and capability Fortunately, the Assessment Project also provided insights into the types of knowledge, skills and capabilities needed by leaders and managers of educational change that we and others might use to assess and improve the strength of our current academic leadership development programs and arrangements. These fell in five broad areas. 1 Academic leadership: What it is and how it relates to academic management? What distinguishes academic leadership of learning and teaching from academic management of learning and teaching? How are academic leadership and management different from academic administration? 2 The nature and process of educational change and the strategies that leaders and managers can use to engage stakeholders in defining, implementing and institutionalising educational change. 3 Curriculum and pedagogical design, development, implementation and evaluation: The multiple dimensions of curriculums (including the objective, structural and tangible dimensions such as program and course outlines, learning resources, activities and environments, formative and summative assessment or evaluation tasks and strategies, and the subjective and intangible dimensions such as the underlying assumptions, values, beliefs and meaning inherent in these curriculum artefacts). 4 Staff development: How should we support the development of staff capacities and capabilities for educational change? How should we identify staff development needs? How should we support staff to develop the knowledge, skills and capabilities necessary to be leaders of learning and teaching?

Lessons learnt about whole-of-institution educational change 351

ImprovingAssessmentText2Proof.indd 351

11/11/13 3:36 PM

5 Organisational development for learning and teaching. How do institutions enable the development and delivery of learning and teaching? How do business models, organisational arrangements, administrative processes, policy frameworks, reward and recognition systems, and physical and IT infrastructures affect learning and teaching? How can they be developed to ensure quality learning processes, outcomes and experiences? ADEs, Heads of School, and Learning and Teaching Fellows from all faculties identified these as areas of practice in which they were either engaged, or in relation to which they felt in need of support to effectively fulfil their responsibilities in relation to the Assessment Project. However, it is worth noting that many expressed a reluctance to formally pursue such development because they perceived it as ‘time-consuming’ and ‘not a high university priority’. Indeed, some considered it ‘career death’ to put too much time into developing themselves as teachers, or as leaders of learning and teaching, when the university, and in particular their Deans and Heads of School, appeared to value the effort they put into their development as researchers considerably more. Such reactions raised further questions as to how effectively the university had been encouraging, recognising and rewarding engagement in and commitment to educational innovation in parallel with research innovation and excellence.

Conditions for ongoing, sustainable engagement in educational innovation Based on our experience of the process and outcomes of the Assessment Project, it has become clear that there are a number of important things that institutional leaders and managers must do if they are to successfully develop and maintain a culture that encourages and enables ongoing, sustainable educational innovation through-

352 PART III

ImprovingAssessmentText2Proof.indd 352

11/11/13 3:36 PM

out their institution. In broad terms these might be described as exerting pressure for innovation, providing support for innovation, and making meaning of the importance of ongoing educational innovation. Exerting pressure for educational innovation

Pressure for innovation can be exercised in many ways, both directly and indirectly. The inclusion of goals and outcomes that specifically target educational innovation in the KPTs of leaders and managers at all levels of the institution (VC, DVCA, DLT, Deans, ADEs, HOS, Program Directors and Course Convenors) coupled with ongoing conversations about progress towards the achievement of these goals as part of regular performance review processes throughout any year is probably one of the most direct means of exerting pressure for engagement in educational innovation. Another might involve including a requirement for faculties and schools to review and report on their efforts to assure and improve the quality of different aspects of learning, teaching and curriculums in their programs and courses on a regular basis, perhaps as part of academic program and course review. Establishing performance and policy frameworks of this type, with clear associated mechanisms for monitoring achievement and compliance, were found to be particularly effective during the Assessment Project. Establishing goals and expectations without implementing the mechanisms for monitoring achievement and compliance, however, was clearly less effective. Indeed, it was only after it became clear to the Deans that the VC was expecting them to report upon their progress and achievements in respect to the Assessment Project and the ADEs realised, likewise, that the DVCA was interested in and expecting them to regularly report in the same way, that the levels of faculty engagement in the Assessment Project significantly increased. Regular meetings between the DVCA, DLT and faculty ADEs to discuss progress and issues associated with the Assessment

Lessons learnt about whole-of-institution educational change 353

ImprovingAssessmentText2Proof.indd 353

11/11/13 3:36 PM

Project not only served to exert further indirect pressure for faculty engagement with the Assessment Project’s goals and activities (due, as described earlier, to the collaborative but competitive spirit that existed between them) but also provided an effective means by which the university could determine the best ways to support faculties, schools and staff to engage in the project. Providing support for educational innovation

Institutional leaders and managers actively supported the Assessment Project by:
• remaining engaged with those ‘at the front line’ to identify factors that might be inhibiting progress or engagement, and taking active steps to collaboratively develop solutions to address them
• advocating for the changes that needed to occur in the university’s broader business processes and infrastructure to ensure that these solutions could be implemented effectively and sustainably
• realigning the focus of the university’s grants, awards and recognition processes with the goals and desired outcomes of the Assessment Project
• restructuring or reorganising roles and responsibilities within the university’s central services (IT, LTU) to ensure that staff were appropriately deployed to support institutional and faculty engagement in the Assessment Project
• reprioritising and redistributing funds within the university’s capital, operational and strategic budgets to ensure sufficient resources to establish and maintain the critical organisational, administrative, technological and physical infrastructures necessary to achieve and sustain the desired changes.

While the provision of support at the institutional level was important, one of the key lessons of UNSW’s Assessment Project was that visible and active engagement and support by faculty and school leaders (Deans, Deputy Deans, ADEs and HOS) was critical to the success of the project. Considerably greater levels of engagement and achievement were reported, for example, in those faculties or schools where the Dean actively championed the Assessment Project and where they:
• provided regular opportunities for different groups of faculty staff to meet, network, discuss, share, learn, encourage and support each other in their respective endeavours, regardless of the roles they had assumed or been assigned in the project
• deployed the faculty’s own resources to support its Assessment Project activities and to further develop the faculty’s organisational, administrative, technological and/or physical infrastructures, so as to enable and sustain the processes and outcomes of the project, and
• realigned the focus of the faculty’s grants, awards, recognition and performance management processes with the goals and desired outcomes of the Assessment Project.

Conclusion

UNSW had very clear intentions when it established the Assessment Project. The main goal was ‘to address academic workload by improving the efficiency and effectiveness of assessment’ throughout the university. Did we realise this goal? Based on the evidence available to us, we believe the answer is yes. Did we achieve a uniform outcome throughout the university? It is clear that we did not. In some faculties we achieved a lot more than in others; in many faculties we achieved more in some schools than in others. In some faculties and schools we achieved significant improvements in efficiency, both in terms of reductions in the costs associated with past assessment practices – particularly in relation to mid- and end-of-year examinations – and in terms of the time that academics needed to devote to marking, grading, moderation and the provision of feedback to students. In others we achieved significant improvements in the quality of our assessment practices. In yet others we achieved improvements in effectiveness. However, as a result of our efforts over the last three years, we can confidently say that the university has indeed improved:
• the efficiency of our assessment practices
• the effectiveness of our assessment practices (by improving the alignment of assessment tasks with program and course learning outcomes; through the adoption of clearer, more transparent criteria and standards for assessment; and by improving the quality, frequency and/or specificity of feedback to students)
• the knowledge, skills and capabilities of a significant number of staff from all faculties in the areas of assessment design, development, implementation and evaluation
• the number and quality of the university’s resources in support of the design, development, implementation and evaluation of high-quality, efficient and effective assessment.

Furthermore, through:
• the realignment of the policy and procedural frameworks around assessment, academic program review, and reward and recognition of teaching excellence
• the building of the institution’s capacity and capability to identify and respond more efficiently and effectively to the university’s need to upgrade and develop its central IT systems in support of learning and teaching in general and assessment in particular, and
• a substantial piece of workplace change within the Learning and Teaching Unit to develop an entirely new central support unit for learning and teaching development,


we can also confidently say that we have built the university’s capacity to ensure that ongoing review and revision of assessment practices is a sustainable and central activity in the future.

To what extent these changes have affected academic workload remains unclear. While we have strong empirical evidence of savings of up to 30 per cent of academics’ time spent marking in particular courses, it is unclear how widespread such savings have been. As to whether we have been able to realise these savings in dollar terms, the answer is no: in general, where time savings have been achieved, they have been absorbed by subsequent adjustments in the academics’ workloads.

Was our whole-of-institution approach to educational change appropriate? We strongly believe that it was. Without it, we could not possibly have achieved the breadth of development and improvement described here. Is it an approach we would use again or advocate to others? Yes. But in doing so we would offer the following advice:

1 Whole-of-institution educational change is ambiguous, scary and often frustrating: What the change ‘is’ is known only in the broadest of terms at the outset. It means different things to different individuals and groups, and it represents a challenge (to varying degrees) for most. The uncertainty about what is required can lead to a lack of engagement. Critical to the effective leadership and management of whole-of-institution educational change is the ongoing engagement of stakeholders from throughout the institution in the definition of the change and the change process.

2 Whole-of-institution educational change is complex: Change must occur in a variety of settings, in a range of different areas of educational practice, and through the actions of a variety of people with different values, beliefs and interests. Leaders and managers of whole-of-institution educational change need to use multiple educational and organisational lenses to identify and address the issues that arise.


3 Staff engagement with whole-of-institution educational change depends on the nature of the proposed change: specifically, the need for the change, and its quality, complexity and scope. (Fullan (2003) has previously identified these characteristics as important in determining educators’ engagement with change.) Critical to effective leadership and management of whole-of-institution educational change, then, are the availability of data, the availability of expertise, and the availability of quality resources.

4 Whole-of-institution educational change requires multiple points of intervention: It requires leaders and managers to engage stakeholders in defining and implementing changes to:
– curriculums (the content and design of the educational experience)
– teaching (the roles and contributions teachers make to student learning)
– learning (the strategies and processes students use to realise defined learning outcomes)
– organisational enablers for learning and teaching (whether physical, technological or administrative).
Critical to effective implementation of whole-of-institution educational change are a set of principles describing both the process and the desired outcomes of the change, which can be used to ensure that all efforts associated with these multiple points of intervention are coherent and aimed at the same set of outcomes, and a plan – not a ‘detailed circuit diagram’ but a ‘mudmap’ to provide general direction.

5 Whole-of-institution educational change requires leaders and managers to engage stakeholders in defining and implementing changes at multiple organisational levels: institutional, faculty, school, program and course. Critical to effective whole-of-institution educational change are:
– leaders or advocates with the status, knowledge and expertise necessary to engage stakeholders in the critical debates and definitional processes required by the change
– governance structures with the expertise to guide and oversee the change – where possible these should be the existing structures
– management structures to coordinate change activities in multiple domains and across multiple levels.

6 Whole-of-institution educational change requires time, patience and persistence: Everything takes longer than you initially plan, so it is important to be flexible.

References

Fullan, M.G. (2003) The New Meaning of Educational Change, 3rd edn, Teachers College Press, New York.


Index Page numbers in italics refer to figures.

256, 291–93 methods of 86, 87 program level approach to improving AACSB 16, 100, 116, 266, 283 294–96 Academic Boards resources and databases 300, 334 generally 4, 5, 6 role of technology 62–63, 297–99, 315 at UNSW 26, 27–28, 58, 82, 316 self-assessment 160–61, 324 Academic Domain IT Strategy Committee standards-based assessment 327–29 66–67 student satisfaction with 248–52 academic leadership 351 sustainable innovation 352–55 academic literacy 161 typologies of 197–98 academic workloads 35–36, 68 see also Assessment Project, UNSW; academic writing 190 examinations accountability 3–6, 15 Assessment of Higher Education Learning accreditation 16, 116–17, 172, 344 Outcomes (AHELO) 16 Adaptive eLearning Platform (AeLP) 62, 243, Assessment Project, UNSW 255, 257 as an example of quality assurance 15 ADFA: The first 25 years... 270–71 appreciation of standards-based assessment AeLP 62, 243, 255, 257 327–29 AHELO 16 approaches to implementing change 49–68, AIBs 218–19 293–99 Altbach, P.G 6–7, 8, 13 assessment as learning approach 36–38 ALTC 149, 268 assessment resources for teachers 334 American Higher Education in the Twenty-First assumptions of 39–40, 339–40 Century 6–7 audits of assessment practices 291–93, 301 AOL 116–22, 124–25 capacity-building strategies 57–64 AQF 4–5 change management 68–72, 300–311 Arts see Faculty of Arts and Social Sciences conditions for sustainable innovation ASPIRE 19 352–55 assessable item banks (AIBS) 218–19 conversation between faculty members 299, assessment 329–30 administration of 63–64 dedicated faculty learning and teaching staff assessment tasks 138–39, 256, 318–19, 335–36 322–24, 340 dissemination of information about 53–57, best practice 77–78 302, 346–47 course level approach to improving 296–99 diversity of faculties 48–49 design of 356 efficiency and effectiveness outcomes effectiveness of 22, 39–47, 50–56, 58, 65, 314–25, 338 68–69, 71–72, 314–25, 356 evidence-based approach 46–48, 47 efficiency gains in 22, 26, 38–40, 43–46, 
faculty responses to 282–313, 342, 344 50–56, 58, 61, 314–25, 356 final conclusions 355–59 facilitation of 62–63 funding of 10, 21–22, 305–6, 345–46 feedback as part of 85, 86, 87 genesis of 35–38 governance of 57–59, 92–93, 95 impact of 336–38 inter-university comparisons 78, 79 improvements to feedback to students leadership of learning and teaching 59–61, 325–26 334–35, 348–49, 350, 355 improvements to marking 325–26 as learning 36–37, 36–38, 100 inducements to participate 51–53, 303, mapping of 172, 174–75, 201–3, 224, 225,

360

ImprovingAssessmentText2Proof.indd 360

11/11/13 3:36 PM

345–46 institutional capacity building 347–49 institutional outcomes of 314–38 intersection with Program Simplification initiative 17, 302 leadership of learning and teaching 59–61, 334–35 multi-focal approach 40, 42–45, 43 multi-year approach 40, 41–42 ongoing effect of 311–12 organisational arrangements for 288–91, 303–6 overview 2 policy instruments 344–50 provision of resources for staff and students 300 rationale of 39 role of technology 63, 331–33 scenarios of learning outcomes 36–37 staff resistance to 308–9 technological guidance 333 whole-of-institution approach 40–41, 50, 339, 341, 357–58 see also specific faculties Assessment Toolkit 334 Associate Deans (Academic) 288 Associate Deans (Education) 27, 29–30, 33, 288–90, 345, 347–48 Association to Advance Collegiate Schools of Business (AACSB) 16, 100, 116, 266, 283 assurance of learning (AOL) 116–22, 124–25 Australian Business Deans Council 7 Australian Defence Force Academy 263–64 Australian Graduate School of Management 122 Australian Graduate Survey 9 Australian Learning and Teaching Council (ALTC) 149, 268 Australian Qualifications Framework (AQF) 4–5 Australian Research Council 34–35 Australian School of Business 99–125 in the AACSB accreditation process 16 achievements from the Assessment Project 124–25 adoption of assessment for learning 100, 114, 123 assessment prior to Assessment Project 99–100 audit of postgraduate programs 109–14 audit of undergraduate programs 101, 102, 103–8, 104–5 Australian Graduate School of Management 122 Bachelor of Economics 122 Bachelor of Information Systems 121 challenges to improving assessment 123–24 conversation between faculty members

330–31 course level approach to change 296, 297 curriculum change 122 Educational Development Unit 344 efficiency gains 115–16 embedding changes with AOL 116–22, 125 ethics teaching 122 examinations 103, 104, 108, 113, 331 history of reviewing assessment practices 344 impact of AOL on 122 implementing the Assessment Project 101– 16, 302, 310 information dissemination 346–47 international students enrolled in 21 Master of Business and Technology 122 Master of Commerce 109–12, 110 Master of Finance 121 Master of Marketing 121 organisational arrangements for Assessment Project 290–91 in the organisational structure 26, 27 overview 99 response to the Assessment Project 282–83, 285–86, 308–10, 312, 336 role of program directors 120 student participation in assessment 324 types of assessment used 104–5, 106–8, 113 use of examinations 318 use of grading software 107–8, 115–16, 325 whole-of-program approach to change 296 workshops on assessment design 114–15 ‘authentic assessment’ 151 authority, distribution of 65 B2B Blueprint to Beyond: UNSW strategic intent 25, 33 Bachelor of Arts 77 Bastiaens, T. 151 Becher, T. 48 Beckett, N. 11, 13, 14 benchmarking 16, 17, 77–81 Berdahl, R. 6–7, 8 Bernholdt, L. 185 best practice, in assessment 77–78 beta testing 92, 92, 95 Biggs, J. 76, 210, 211 Blackboard 64, 182, 243, 325 Bolman, L.G. 44, 48 Bonanno, H. 161 Bradley, D. 4 Bradley Review 4, 8, 12 Brookes, M. 11, 13–14 Brown, F.G. 82 Bushell, G. 177, 178–79

Index 361

ImprovingAssessmentText2Proof.indd 361

11/11/13 3:36 PM

Business see Australian School of Business Business Domain Owners’ Advisory Group 66 business model, of universities 10 calibrated peer review 298, 299, 331, 332 Calibrated Peer Review 182 capstone courses 323 Carrick Institute for Learning and Teaching in Higher Education 60 Carroll, D. 118 CATEI system 31, 69, 84, 91, 115, 123, 230 CEQ 9, 69, 84 CEQuery 84 Chancellor 26, 26 change approaches to implementing 49–68, 293–99 contexts for 48–49 educational change 351, 354, 355, 357–59 management of 68–72, 300–311 reactions to 79–80, 145, 240, 301, 304–5, 308–9 theories and approaches to 340–44 class participation, marks for 107 class sizes 99 ‘closing the loop’ reports 118, 120 COFA see College of Fine Arts Coleman, L. 124 collaborative group work 205 College of Fine Arts 147–65 achievements in assessment practice 164–65 assessment types 153, 154 audit of assessment practices 150–56, 154–55 casual staff 148 digital literacy 155 implementing the Assessment Project 150– 62, 301–2, 310 improvements in assessment practices 156–62 number of assessment tasks 155, 156–57, 158–59 organisational arrangements for Assessment Project 289 in the organisational structure 26, 27 overview 147–49 program renewal 18, 162–64 Program Simplification Project 302 response to the Assessment Project 283, 286–87, 301–2, 307, 310 staff workshops on assessment 162 student participation in assessment 324 use of ReView 297, 325 whole-of-program approach to change 294 Colombo Plan 19 Committee on Education (UNSW) 27 Commonwealth Department of Education,

Employment and Workplace Relations (DEEWR) 9 constructive alignment 322 consultants see experts, external contingency approach 40, 343–44 continuous improvement 86, 116, 118, 120 co-operative group work 205 Corbett, H.D. 50, 54 course and teaching evaluation and improvement (CATEI) 31, 69, 84, 91, 115, 123, 230 Course Experience Questionnaire (CEQ) 9, 69, 84 course level approach, to improving assessment 296–99 course redesign 339–40 Cox, J. 182, 267 CPR 182 Craven, G. 5 criterion-referenced assessment 120–21 curriculum design 351 curriculum mapping in the Australian School of Business 102, 118–19, 120, 125 in the Faculty of Engineering 172 in the Faculty of Law 192, 198–99 in the Faculty of Science 256, 259 tools for 59 UNSW Canberra 268–72 curriculum reviews 44, 45 data collection 78–79, 80, 291–92, 348–49 data management assistants 102 data packs 134, 139 databases 242–43, 300 Davis, G. 5 de la Harpe, B. 154 Deal, T.E. 44, 48 Deans 26, 27, 30, 33, 50, 355 DEEWR 9 Deputy Vice-Chancellor (Academic) 26, 27, 28–29, 33, 50–51 Deputy Vice-Chancellor (Research) 26, 27, 33 diaries, of student time-on-task 84, 88–89 digital delivery, in education 13–14 Director of Learning and Teaching 28, 51, 290 dissemination see information dissemination Dokos, S. 185 dual award programs 17–19 Echo 360 technology 123 economic drivers, of change 8–11, 20–22 educational innovation 351, 354–55, 357–59 educational media systems (EMS) 67 effectiveness see assessment efficiency see assessment Ellmers, G. 149

362 Improving Assessment in Higher Education

ImprovingAssessmentText2Proof.indd 362

11/11/13 3:36 PM

Elmore, R.F. 50, 51, 53, 63, 65 employability skills 6–8, 15–17 EMS 67 Engineers Australia 16, 169, 172 e-portfolios 241, 253–54 equity, in education 11–12, 19–20 ERA 34 Ernst & Young 11, 13, 14 ethics 16–17 evidence-based approach 40, 46–48, 47, 343 examinations in the Australian School of Business 103, 104, 108, 113, 331 in the Faculty of Medicine 215–16 move from mid-semester exams 108, 331, 340 quality improvements 319–21 reductions in quantity and cost of 316–18 Excellence in Research for Australia (ERA) 34 experiential learning 204–5 experts, external 60, 70–71, 102, 134, 213, 235, 302, 350–51 faculties, generally arrangements for governance 29–30 diversity of 48–49 faculty program reviews 28 responses to the Assessment Project 282– 313, 342, 344 see also specific faculties Faculty Assessment Review Group 58 Faculty of Arts and Social Sciences 76–96 audit of assessment practices 77–81, 79, 80 course level approach to change 296 development of Online Assessment Tool 87–92, 92, 94, 94–95, 342 faculty assessment working party 83–84 implementing the Assessment Project 92–96, 94, 310 organisational arrangements for Assessment Project 289, 290 in the organisational structure 26, 27 overview of change 76–77 quality assurance 84–86, 87, 91, 95–96 response to the Assessment Project 282, 293, 309, 310–11 securing efficiency savings 81–83 time savings from change 95 use of mandates requiring change 303 Faculty of Business see Australian School of Business Faculty of Engineering 166–88 accreditation of 16, 169 achievements from the Assessment Project 187–88 assessment practices 171–72 establishing a rubric 181–82

feedback to students 170–71 female students 167 governance and management 167–69 implementing the Assessment Project 172–75, 310 individual school assessment projects 184–87 organisational arrangements for Assessment Project 288, 290 in the organisational structure 26, 27 overview 166–67 predominant teaching model 169–70 program assessment mapping 172–75 response to the Assessment Project 283, 293, 307–8, 310, 311 self-assessment 324 Thesis Assessment Project 175–82, 298, 326 use of Moodle 174, 178–80, 182–84, 293, 298, 325–26, 331–32 Faculty of Fine Arts see College of Fine Arts Faculty of Law 189–207 aims of the Assessment Project 191 aligning assessment with learning outcomes 198–99, 201–3 assessment mapping 201–3, 292–93 assessment typologies 197–98 audit of assessment practices 193–94 criteria-based feedback 197–98 curriculum review 199–200, 206 implementing the Assessment Project 192 introduction of new forms of assessment 204–7 Law School Assessment Survey 194–95, 196 Law School Survey of Student Engagement 195–96 literature review 196–97 organisational arrangements for Assessment Project 289, 290 in the organisational structure 26, 27 overview 189–90 program learning outcomes 199–200, 201, 202 program simplification 206 response to the Assessment Project 282–83, 284, 286, 293, 309, 337 student experiences of assessment 194–96 whole-of-program approach to change 294 Faculty of Medicine 208–33 assessment mapping 224, 225–26 assessment of clinical competence 216, 217–18 assessment of teamwork capability 220–21 audit of assessment practices 215–16 conversation between faculty members 329–30

Index 363

ImprovingAssessmentText2Proof.indd 363

11/11/13 3:36 PM

curriculum review 210 examinations 215–16 Exercise Physiology program 212, 213, 214, 217, 219 history of reviewing assessment practices 344 implementing the Assessment Project 208, 212–31, 233 Indigenous students 20 innovations in assessment practices 218–21 Master of Health Management 221–24, 225–26, 228 Master of International Public Health 221–24, 225–26, 228 Master of Public Health 221–24, 225, 228 multiple choice questionnaires 331–32 organisational arrangements for Assessment Project 288 in the organisational structure 26, 27 overview 209–10 postgraduate assessment 221–31, 225–26 promotion of scholarship in assessment 231–32 recommendations from the Assessment Project 230–31 response to the Assessment Project 284, 287–88, 306 skills development in communication and ethics 16–17 student survey on assessment practices 229, 229 teaching and assessment practices 210–12, 211 undergraduate assessment 214–21 whole-of-program approach to change 295–96 Faculty of Science 234–61 assessment innovations 252–54 assessment of laboratory skills 256–57 audit of assessment practices 237, 301 conclusions from the Assessment Project 261 database of assessment practices 242–43 feedback to students 257 governance 259–60 group work 257 implementing the Assessment Project 235–42 marking 260–61 organisational arrangements for Assessment Project 289 in the organisational structure 26, 27 outcomes of the Assessment Project 242– 48, 255–61 outreach to other faculties 254–55 overview 234 program review 246

response to the Assessment Project 287, 301, 309, 311 self-assessment 324 socialisation of teaching and learning 259–60 sophistication in assessment 258–59 student satisfaction with assessment 240– 41, 248–52 use of Moodle 241, 243, 252–55, 261, 298, 325–26, 331–32 variety of assessment tasks 319 whole-of-program approach to change 296 Faculty of the Built Environment 127–46 assessment types 137 conversation between faculty members 329 curriculum renewal 129–31, 135–36, 138–41 educational delivery 131–33, 132 implementing the Assessment Project 133–46, 310 Interior Architecture program 142 organisational arrangements for Assessment Project 290–91 in the organisational structure 26, 27 overview 127–28 postgraduate programs 129–30 program review process 17–18, 133–35, 138–39 research into assessment innovation 143–44 response to the Assessment Project 283, 307, 310, 311–12 review of assessment protocols 141–43 strategic repositioning 129–31 student participation in assessment 324 whole-of-program approach to change 294 faculty review of learning and teaching (FRLT) 28, 52 feedback, to students Arts and Social Sciences 83, 85, 86 Built Environment 131–32, 138, 143 Business 103, 107, 110, 112–16 criteria-based feedback 197–98 efficiency gains through technology 315 Engineering 170–71, 185, 186 Fine Art 150, 154–55, 157, 159, 160, 162 as a key performance target 51 Law 194, 195, 197–98 Medicine 219, 220, 226, 229, 229–32 outcomes of the Assessment Project 325–26 as part of assessment 36–37, 64 Science 245, 247–51, 257 fees, deregulation of 10–11 Fine Art see College of Fine Arts Firestone, W. 50, 54 Flood, A. 124 Foundations of University Learning and Teaching 31, 56

364 Improving Assessment in Higher Education

ImprovingAssessmentText2Proof.indd 364

11/11/13 3:36 PM

FRLT 28, 52 Fry, H. 76 Fullan, M.G. 41, 42, 48, 56, 68, 358 FULT 31, 56 funding of assessment reform 10, 21–22, 305–6, 345–46 cuts to (2013–14) 9 economic drivers of change 8–11, 20–22 Learning and Teaching Performance Fund 9–10, 21, 36, 51–52 public universities in the US 8 Gallagher, M. 5 GDS 9 Gibbs, G. 81, 84 Gidley, J. 12 Global Alliance of Technological Universities 25 global education 256 GlobalTech 25 governance of assessment 57–59, 92–93, 95, 347–48 of information technology 66–68 of UNSW 27–30, 45 government 3–6, 9, 12 GradeMark (grading software) increased take-up of 332 increased transparency through use of 299 as part of self-assessment 324 to provide feedback 299, 325 use by Australian School of Business 115–16 use by Faculty of Medicine 231 use by UNSW Canberra 272 in whole-of-institution approach 341 grading software see GradeMark; ReView; technology; Turnitin Graduate Certificate in University Learning and Teaching 56 Graduate Destination Survey (GDS) 9 group assessment 205–6 Group of Eight (Go8) 7, 17, 25, 176 group work 157, 221 Guillaume, D.W. 83 Gulikers, J. 151 Gumport, P.J. 6–7, 8 Harris, K.L. 8, 9, 10 Harris, M. 182 Hartel, C. 12 Hartley, P. 82 Harvard Business cases 115 Hazelkorn, E. 5 Heads of Schools 30 Herrington, J. 222

Herrington, T. 222 Higgins, R. 82 Higher Education Quality and Standards Agency (TEQSA) 4–6, 15, 177 higher education sector Bradley Report on 4 business model of universities 10 economic drivers of change 8–11 equity in education 11–12 funding cuts (2013–14) 9 global changes 2 ‘massification’ of 12, 99 political drivers of change 3–6 technology and education delivery 13–14 Hilmer, F. 5–6, 33–34 Hobsons 18 implementation phase, of change 68, 69 Indigenous Student Centre 19–20 Indigenous students 20 inducements 51–53, 303, 345–46 information dissemination 53–57, 302, 346–47 information technology see technology initiation phase, of change 68 innovation 352–55 Institute of Actuaries 106 Institutional Analysis and Reporting Office 69 institutionalisation phase, of change 68, 71–72 international accreditation 116–17 international students 10–11, 21, 148 internships 18 iPads, in assessment 219–20, 332 iRubric 183 IT Committee 66 IT Services (UNSW) 32 iUNSW RubriK App 95 James, R. 8, 9, 10 Jessop, T. 85 Jones, J. 161 Jones, P. 210 Kanapathipillai, K. 186 Ketteridge, S. 76 key performance targets (KPTs) 32–33, 50, 51, 345, 353 Khachikian, C.S. 83 Kirschner, P. 151 Kornfield, G. 243 Kotter, J.P 60 KPTs 32–33, 50, 51, 345, 353 labour market, changes in 7, 18 Lane, B. 5 Law see Faculty of Law

Index 365

ImprovingAssessmentText2Proof.indd 365

11/11/13 3:36 PM

Lawrence, G. 210 leadership, of learning and teaching 59–61, 334–35, 348–49, 350, 355 Learning and Teaching Enhancement Plan 69, 78 Learning and Teaching Fellows (LTFs) 290 learning and teaching, leadership of 59–61, 334–35, 348–49, 350, 355 Learning and Teaching Performance Fund (LTPF) 9–10, 21, 36, 51–52 Learning and Teaching Unit (LTU) 31, 54, 56, 61, 310–11, 333, 334–35, 356 Learning Centre 31 Learning Management Systems (LMSs) 64, 67 see also Blackboard; Moodle Leslie, G. 186 library 31 literacy skills 161 LMSs 64, 67 LTFs 290 LTPF 9–10, 21, 36, 51–52 Lutze-Mann. L. 243, 255 management of proposals and portfolio system (MAPPS) 28, 94 mandates 50–51, 303, 345 mapping see assessment, mapping of; curriculum mapping MAPPS 28, 94 marking rubrics 158, 162, 183–84, 198, 230, 315, 326 marking, time spent on 82–83, 90, 325–26, 357 Marshall, S. 124 Marshall, S.J. 42, 43, 76 ‘massification’, of higher education 12, 99 massive open online courses (MOOCs) 14 MASUS protocol 161 McDonnell, L.M. 50, 51, 53, 63, 65 MCQs 108, 113, 115, 255, 331–32 Medicine see Faculty of Medicine mentoring 18 Meyer, L. 210 Mitchell, E. 185 Mitra, R. 186 MOOCs 14 Moodle Calibrated Peer Review 298, 299, 331, 332 compared to ReView 326 compatibility with other software 64, 325 development of functionality 315, 324, 331 in the Faculty of Engineering 174, 178–80, 182–84, 293, 298, 325–26, 331–32 in the Faculty of Science 241, 243, 252–55, 261, 298, 325–26, 331–32 increased take-up of 332 to provide feedback 325–26

at UNSW Canberra 272 in whole-of-institution approach 341 Moodlemoot 184 MoodlePosium 254 multi-focal approach 40, 42–45, 43, 342–43 multiple choice questionnaires 108, 113, 115, 255, 331–32 multi-year approach 40, 41–42, 341–42 National Health and Medical Research Council 35 National Protocols for Higher Education Approval 4 National Qualifications Framework (UK) 3 Norton, A. 6, 7 Nura Gili 19–20, 28 objective structured clinical examination (OSCE) 217–18 Office of the Chief Scientist 255 online assessment 76, 77, 113, 159–60, 218– 19, 325–26 Online Assessment Tool (FASS) 87–92, 92, 94, 94–95, 342 online formative assessment 206 Organisation of Economic Cooperation and Development 16 organisational context 44–45 organisational development 352 Orrell, J. 138, 140, 142, 150, 152, 213, 235, 238–40, 255 OSCE 217–18 Paton, J. 240 peer feedback 143 peer review 261, 298, 299, 331, 332 plagiarism 161, 267, 273, 274–76, 284 policy frameworks, for assessment 340, 356 policy instruments 344–50 political drivers, of change 3–6 portfolio-type assessment 142–43 Posada, J.P. 182, 183 President 26, 26 professional development 300, 340, 351 program directors 120–21 program learning outcomes 199–200 program level approach, to improving assessment 294–96 Program Simplification Project 17–18, 162, 302 Programme Specifications (UK) 4 progressive assessment 106, 331 Pro-Vice-Chancellor (International) 28 Pro-Vice-Chancellor (Students) 28 quality assurance

366 Improving Assessment in Higher Education

ImprovingAssessmentText2Proof.indd 366

11/11/13 3:36 PM

in assessment practices 347–48 the Assessment Project as an example of 23 benchmarking in Engineering 15 Faculty of Arts and Social Sciences 84–86, 87, 91, 95–96 quality systems 3, 4–5 Quality Verification System 7, 17, 176 in the UK 3–4 university-initiated assurance systems 21–22 UNSW Canberra 287 in the US 3 Quality Assurance Agency Code of Practice (UK) 4 Quality Verification System (QVS) 7, 17, 176 QVS 7, 17, 176 Ramburuth, P. 12 The Rector 26, 27, 29, 33, 264 RED Framework 62–63, 333 RED Platform 62–63, 333 research 33–35, 352 research assistants 78 Research Evaluation and Development (RED) Framework 62–63, 333 Research Evaluation and Development (RED) Platform 62–63, 333 resources 171, 300, 334, 346, 356 ReView (grading software) compared to Moodle 326 increased take-up of 332 increased transparency through use of 299 for marking and feedback 299, 325–26 as part of self-assessment 324 use of by Australian School of Business 95, 107–8, 116, 118–19 use of by College of Fine Arts 159–61, 297 use of by UNSW Canberra 272 in whole-of-institution approach 341 Roberts, C. 185 Royal Australian Naval College 263 Royal Military College, Duntroon 263 rubrics see marking rubrics Ryan, Malcolm 186 SAAS 64, 67 Salmi, J. 7 scholarship of teaching and learning (SOTL) 77, 143 Science see Faculty of Science Scott, G. 84 self-accreditation 6, 15 self-assessment 160–61, 324 seminars 189 Senge, P. 40 service teaching 259

Simpson, C. 81
Skelton, A. 82
skills 6–8, 15–19
Smart Science 255
Smyth, J. 48
social equity, in education 11–12, 19–20
Social Sciences see Faculty of Arts and Social Sciences
‘soft skills’ 17
software see Grademark; ReView; technology; Turnitin
SOTL 77, 143
staff development 300, 340, 351
staff time-on-task 82–83, 90, 95
standards
  assurance of learning standards 116–17
  benchmarking of education outcomes 16, 17
  course design and delivery 164
  Go8 QVS standards 7, 17, 176
  standards-based assessment 327–29
Student and Academic Administration Systems (SAAS) 64, 67
student numbers, uncapping of 10, 35
student participation 107, 321–22, 324–25
student satisfaction, with assessment 248–52
student time-on-task, regulation of 81–84, 88–89, 95
student workload 80–81
studio learning 132, 132, 149, 152
Studio Teaching Project (COFA) 149, 159
Subject Benchmark Statements (UK) 4
surveys
  Australian Graduate Survey 9
  Graduate Destination Survey 9
  Law School Assessment Survey 194–95, 196
  Law School Survey of Student Engagement 195–96
  student survey on assessment practices 229, 229
  TESTA 84–86, 87, 91
  UNSW Postgraduate Coursework Student Experience Survey 135
SWOT analysis 135
systems change 349–50
Tang, C. 76, 210, 211
task design 157
Teaching Gateway website 334
team-based group assessment 206
teamwork capability, assessment of 220–21
technology
  in education delivery 13–14, 22–23
  effect of the Assessment Project on use of 331–33
  efficiency gains through 315


  grading software 107–8, 115–16, 118–19, 159–61, 230, 272, 297–99, 324
  investment in information technology 65–66, 349–50
  RED Platform and Framework 333
  role in assessment 62–63, 297–99, 315
  to support assessment reform 310, 340, 342, 347, 356
  for teaching spaces 123
Technology Enabled Learning and Teaching 62
Technology Enhanced Learning and Teaching (TELT) 64
TELT 64
TEQSA 4–6, 15, 177
TESTA 84–86, 87, 91
thesis assessment 175–82, 298, 326
Thompson, R. 209, 231
threshold learning outcomes (TLOs) 199
Times Higher Education World University rankings 35
TLOs 199
T-MEX 221
training see staff development
Transforming the Experience of Students Through Assessment (TESTA) 84–86, 87, 91
Tranter, P. 273
Trowler, P.R. 48
Turnitin software 230, 274, 325
tutorial participation 107, 321–22
Undergraduate Studies Committee (UNSW) 28
United Kingdom 3–4
United States 3, 6–7, 8
Universities 21 25
University Librarian 28
University of New South Wales see UNSW
university places, demand-driven model 10
UNSW
  Advantage program 18–19
  annual budget 26
  central support for learning 30–32
  digital technology in education delivery 22–23
  drivers of change for 14–15, 20–22
  dual award programs 17–19
  equity initiatives 19–20
  faculty arrangements for governance 29–30
  General Education program 77
  governance of information technology 66–68
  growth in student numbers 35
  incorporation by Act 25
  institutional arrangements for governance 27–29
  management of teaching and learning 32–33
  organisational structure 26, 26–27
  Postgraduate Coursework Committee (UNSW) 28
  ranking of 34–35
  research versus teaching 35, 352
  setting of quality standards 5, 15, 23
  Student and Academic Administration 32
  use of key performance targets 32
  see also Assessment Project, UNSW; specific faculties
UNSW Canberra 263–79
  in the AACSB accreditation process 16
  audit of assessment practices 265
  conversation between faculty members 330
  courses offered 27
  curriculum mapping 268–72
  diversity of entrants 270–71
  institutional change through Assessment Project 276–78
  key performance targets 33
  laboratory-based assessment 273–76
  Learning and Teaching Advisory Committee 266, 277
  Learning and Teaching Group 267
  organisational arrangements for Assessment Project 289–90
  organisational structure 27
  outcomes of the Assessment Project 278–79
  overview 263–65
  policy development and enforcement 266
  response to the Assessment Project 284, 287, 307, 308, 311
  staff education and development 266–68
  use of technology 272, 311
UNSW Postgraduate Coursework Student Experience Survey 135
values-based assessment 206–7
Vice-Chancellor 26, 26, 32, 50–51
Vice-Presidents 26, 27
WebPA 182, 183, 324
whole-of-institution approach 40–41, 50, 339, 341, 357–58
World Bank 7
World University Rankings 35
