Technology to Support Children's Collaborative Interactions: Close Encounters of the Shared Kind (ISBN 3030750469, 9783030750466)

This book explores how technology can foster interaction between children and their peers, teachers and other adults. It covers in particular how technology can support children collaborating with each other.


English Pages 155 [148] Year 2021



Table of contents:
Acknowledgements
Abbreviations
Contents
About the Author
List of Figures
1 The Co-EnACT Collaboration Framework
1.1 What This Book Is About (and What It Isn’t About)
1.2 The Collaboration Framework
1.3 Interactions Through Technology: Physical, Cultural and Embodied
1.4 The Co-EnACT Framework of Mechanisms for Collaboration: Engagement, Attention, Contingency, and Control for Understanding Together
1.5 How Do Different Disciplines Characterise Collaboration?
1.6 Theoretical Perspective: Vygotsky
References
2 Engagement and Joint Attention
2.1 Levels of Engagement: Parten’s Play States
2.2 Transitions Between Modes of Engagement
2.2.1 Solitary to Onlooker Engagement
2.2.2 Onlooker to Parallel-Aware Engagement
2.2.3 Parallel-Aware to Associative and Cooperative Play
2.3 Evidence on Transitions Between States of Engagement
2.3.1 Disengagement and Technoference
2.4 Awareness
References
3 Contingency and Control
3.1 Contingency and Control: Introduction
3.2 Collaboration on Tabletops: Single Versus Shared Control
3.3 Managing Control in Collaborative Tasks: Enabling, Encouraging or Enforcing
3.4 Contingency Underlies Control: The Example of SCoSS
3.5 Control: Voice Assistance
References
4 Shared Understanding
4.1 Constructing Shared Meaning: Development Through Early Childhood
4.2 From Attention Through Contingency and Control to Shared Understanding: The Augmented Knights’ Castle
4.3 Negotiating Shared Understanding: SCoSS Again
4.4 Diversity and Individual Differences: Other Forms and Means of Collaboration
References
5 Collaborative Technology in the Classroom
5.1 Technology and Collaboration in Schools: A Pessimistic Picture
5.2 Interactive Whiteboards
5.3 Tablets
5.4 Dialogue: Collaborative Discussion Needs Scaffolding
5.5 Cross-Device Collaboration and Classroom Orchestration
5.6 Research Designs for Cross-Device Collaboration
References
6 Autism and Technology for Collaboration
6.1 Definitional Issues: Autism and Collaboration
6.2 Guiding Collaboration with Verbally-Expressive Autistic Children
6.3 Encouraging Collaboration Through Exploration, with Verbally-Expressive Autistic Children
6.4 Encouraging Collaboration Through Exploration, with Minimally-Verbal Autistic Children
6.5 Guiding Collaboration Through Constraint in Minimally-Verbal Autistic Children
6.6 Autism and Collaboration: Altering Perspectives
References
7 Conclusion
7.1 The Co-EnACT Framework
7.2 The Wider Context
7.3 Collaboration as a Relationship
7.4 Collaboration In-Person and Online
References
Index


Technology to Support Children’s Collaborative Interactions
Close Encounters of the Shared Kind

Nicola Yuill

Nicola Yuill Children & Technology Lab School of Psychology University of Sussex Brighton, East Sussex, UK

ISBN 978-3-030-75046-6    ISBN 978-3-030-75047-3 (eBook)
https://doi.org/10.1007/978-3-030-75047-3

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover illustration: © Melisa Hasan

This Palgrave Pivot imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

To Will

Acknowledgements

I was truly fortunate that my induction into the field of HCI was guided and inspired by Professor Yvonne Rogers, with the ShareIT project our first very productive joint venture. My research tends to be highly collaborative, as is obvious from the many co-authors I’ve worked with. A great many children and their teachers contributed to the studies described here: research couldn’t happen without their willing involvement. Much of my own research has been funded by various external agencies, to which I am very grateful, and carried out in the kind embrace of the School of Psychology at the University of Sussex. I derive much inspiration and momentum from the lively and collaborative environment of the Children & Technology Lab, provided by many excellent undergraduate and postgraduate students, and most recently, my co-researchers Devyn Glass and Samantha Holt as well as the stalwart members over many years of the Embodied Cognition reading group and the Educational Psychologists Research Interest Group (now superseded by ACoRNS, the Autism Community Research Network Sussex). The tech developers on each project I’ve worked on have been invaluable both for their creative technical skills and in their real understanding of the ‘human’ part of HCI. Thanks also to the other tech experts we’ve consulted, including ETH Zurich for keeping the AKC working for many years, and the excellent team at Mangold INTERACT® video analysis software. I couldn’t have hoped for a more astute or generous reader of drafts than Rosalind Merrick. It is also providential that my sister Deborah has a gift both for editing and for furnishing never a dull moment during the writing of this book. Of course I, rather than any of those listed, am responsible for any errors, omissions or faux pas herein.

Some of the research in which I was involved, that is reported here, was funded by bodies including the Engineering and Physical Sciences Research Council UK, the Economic and Social Research Council UK, the Baily Thomas Charitable Trust and the National Institute of Health Research. The views expressed are my own.

Abbreviations

(Terms italicised are also in the Index)

AKC                 Augmented Knights’ Castle
AR                  Augmented Reality
ASC                 autism spectrum condition
Co-EnACT framework  collaboration: engagement, attention, contingency, control for understanding together
COSPATIAL           Communication and Social Participation: Collaborative Technologies for Interaction and Learning
CPS                 Collaborative Problem-Solving
CSCL                Computer-Supported Collaborative Learning
CVE                 Collaborative Virtual Environment
HCI                 Human–Computer Interaction
ICT                 Information and Communication Technologies
IWB                 Interactive Whiteboard
KC                  Knights’ Castle (standard unaugmented version)
LDASC               Learning-Disabled Autistic Spectrum Condition
PD                  Participatory Design
PECS                Picture Exchange Communication System
PISA                Programme for International Student Assessment
RFID                Radio-Frequency Identification Tag
RMC                 Reading Multiple Classification
SCoSS               Separate Control of Shared Space
SLANT               Spoken Language and New Technology
SPRinG              Social Pedagogic Research into Group Work
TD                  Typically-Developing
TRAC                Talk, Reasoning and Computers (programme)
VIG                 Video Interaction Guidance
VR                  Virtual Reality


About the Author

Nicola Yuill is Professor of Developmental Psychology and director of the Children & Technology Lab in the School of Psychology, University of Sussex, UK. Following her training as a psychiatric nurse, she took her first degree, Social Psychology with Cognitive Studies, in the School of Social Sciences, University of Sussex, and her D.Phil. in children’s social cognition under the supervision of first, Professor Keith Oatley, and then Professor Josef Perner, in the Laboratory of Experimental Psychology, School of Biological Sciences, at the same institution. She worked alongside ethologists at the Medical Research Council Unit on the Development and Integration of Behaviour, Cambridge, and then held academic posts in the Schools of Cognitive and Computing Sciences and of Psychology at Sussex. She is co-director of the Autism Community Research Network Sussex. She has published many journal papers and other works on the topics of children’s social cognition and social behaviour, children’s difficulties in reading comprehension and children’s collaboration through technology.


List of Figures

Fig. 1.1  Different posture configurations reading from screens and books, with ‘vulture’ postures (upper) and ‘curling up’ postures (lower)
Fig. 1.2  Augmented Knights’ Castle (AKC) with sample sounds
Fig. 1.3  Cooperation (left) and collaboration (right)
Fig. 2.1  Different levels of engagement in relation to mechanisms of collaboration
Fig. 2.2  Likelihood of moving between play states in AKC (upper panel) vs KC (lower panel) for boys and girls
Fig. 2.3  Shared awareness on a multi-touch tabletop
Fig. 2.4  Patterns of touch by children positioned, respectively, on the left, centre and right side of a multi-touch table
Fig. 3.1  OurSpace tabletop design
Fig. 3.2  Two children completing the DigiTile fraction challenge on a multi-touch table
Fig. 3.3  Separate Control of Shared Space (SCoSS) compared to single representation
Fig. 4.1  Example INTERACT timelines (horizontal axis) for each child (1, 2, 3) and play type (onlooker, parallel, solitary, cooperative, respectively top to bottom on vertical axis) for a KC (upper) and AKC (lower) group
Fig. 4.2  The WordCat task showing one player’s task state
Fig. 5.1  Features of Talk Factory showing talk types (right-hand side), total for each type (upper left) and running timeline (lower left)
Fig. 5.2  Children’s depictions of tablet use in class with large screen (left) and snack time (right)
Fig. 5.3  Screens for the Comfy Birds app
Fig. 6.1  Chatlab Connect dual-tablet app, based on Shared Control of Separate Space (SCoSS)

CHAPTER 1

The Co-EnACT Collaboration Framework

Abstract When trying to understand how collaboration and technology work together, the Co-EnACT framework considers: physical properties of the technology and environment; cultural meanings of the technology tools involved; bodily movements; and other non-verbal behaviour. The psychological mechanisms that support collaboration include: shared engagement, joint attention, contingent action, control, shared understanding and background assumptions. Children’s collaboration is addressed by the disciplines of Psychology, Human–Computer Interaction, Education and Computer-Supported Collaborative Learning. Vygotsky’s theoretical approach underpins these with its focus on: cultural artefacts that support collaboration; the role of language and embodied cognition in shaping understanding; and how more competent others scaffold learning. Keywords Collaboration · Culture · Embodied cognition · Tools · Scaffolding

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 N. Yuill, Technology to Support Children’s Collaborative Interactions, https://doi.org/10.1007/978-3-030-75047-3_1


1.1 What This Book Is About (and What It Isn’t About)

This book is about how technology can foster the processes of interaction between children and their peers, teachers and other adults. It covers in particular how technology can support children collaborating with each other, so helping them to gain better understanding and engaged enjoyment of the world, in both work and play. Most of the work focuses on primary-age children, especially 7–11-year-olds, rather than young people, though the principles of supporting interaction are likely to apply to all life stages. Collaborative interaction involves both verbal and nonverbal behaviour: this book uses much evidence from closely analysing video observations of behaviour in relatively natural settings. Some of the software and tools used are experimental, but I aim where possible to extend the principles to technology that is more widely available, and to emerging technologies. I do not consider social media or online gaming. All the situations studied involve children interacting in the same place and at the same time. This book does not give recommendations for specific programs or software, or for explicitly training children to collaborate: it is much more focused on explaining principles behind using technology in ways that support and invite, rather than obstruct, social interaction, to enable better construction of settings that support collaboration. These principles are not just about the technology itself, but about the environment in which it is used. I have written with these audiences in mind: my fellow psychologists, who may not be familiar with other disciplines such as Human–Computer Interaction (HCI) that inform this area, researchers in HCI who are not so familiar with the social interaction aspects, particularly those designing with children in mind, and education and healthcare practitioners working with children, who want to extend their understanding of using technology for good.
I have worked extensively with each of these groups. In particular, there is a chapter focused on autism, reflecting my own interests and experiences in this area. I believe that chapter will also be of broader interest to enquiring special needs practitioners and anyone else responsible for the education and care of children, because work in autism has brought new insights into our thinking on technology and collaboration. During the course of the book’s writing, the perception of technology has moved from malign thief of childhood to sanity-saving means of connection with others in times of isolation and restriction.


The truth is somewhere in between: technology use is as much about context of use as about the technology or software itself, and understanding the principles and processes behind collaboration and technology helps people make better choices and adaptations. The book draws strongly on my own work, rather than being a systematic review of previous work, for two reasons: first, I’ve been fortunate to work across disciplines with talented researchers, to pursue the question of using technology to support collaboration, and second, I have a deep enough knowledge of this work to be able to explain in detail how evidence informs the collaboration framework I present, extending and developing from a joint paper with Yvonne Rogers (Yuill & Rogers, 2012). The Co-EnACT framework I develop is intended to act as a tool for other researchers to consider their technology designs and for reflective practitioners (I have met many) to arrange technology environments in ways that provide better opportunities for children to enjoy and learn through collaborative activities.

1.2 The Collaboration Framework

Collaboration is part of all interactions, whether work or play. It can be fulfilling and enjoyable in its own right, but is also a powerful driver of social cognitive development: children learn how to play, how to communicate, how to solve problems and how to interact equitably and sensitively with others through interacting with them. Collaboration is one means through which they develop all these talents. This book is about how technology can best be used to support collaboration in children, in face-to-face playful and learning settings. There might seem a fair-sized gulf between the intriguing, often playful, experimental technology developed and evaluated in some of the research studies described here and the everyday world of technology that exists in classrooms and homes. There is also much pessimism about the dangers of technology in replacing warm human interaction with solitary units lost in a screen, and about the prospects of teachers becoming experts in using technology to support collaboration. Neither of these pessimistic views has to be true. I start from the premise that technology can be as readily designed and used for collaboration as for isolated activity: indeed, people often end up subverting a designer’s aims by finding ways to share single-user ‘personal’ technology, as evidenced by anyone seeing two teenagers using an earphone each to share enjoyment of a song played on a phone, or
a group clustered round a small screen in landscape mode to compose a text message together. More experimental technology, such as a large multi-touch surface, may not be available to us all, but looking at how its design can increase collaborative interaction helps us to consider how to use the technology we do have to best effect—using technology together rather than in isolation. The fear of ‘screen time’, characterised as disrupting relationships, trapping people in digital bubbles and removing the opportunity for children’s social learning, is justified to the extent that technology is designed and seen only as an individual activity. Technology at home can seem very convenient to use as an ‘electronic babysitter’. Even much educational technology focuses on supporting the individual child, with games, quizzes and online exercises, or on supporting managers in administration and assessment. These are all valuable in their own way, but a balanced diet needs to include interaction. By this, I don’t mean interaction with technology, but interaction through technology. Technology can be designed and used to encourage collaboration rather than to inhibit it, to support shared discovery, storytelling and cooperative play, maybe even in ways that couldn’t easily exist before the technology did. This kind of support for human connection can best be planned if we understand how collaboration develops and how technology can be designed to support interaction. I draw on research across three disciplines. Developmental psychology addresses what experiences and abilities children need in order to become full partners in cooperative interactions. Human–Computer Interaction (HCI) provides inventive and theoretically-motivated ideas about design to support shared engagement. Work in education is particularly informative on how group conversation can support growth in individual understanding. 
I also look especially at technology in autism, as work in this area has been instrumental in developing my own understanding of what collaboration involves. Appreciating how collaboration might work differently, and how ‘typical’ collaboration can be challenging, is a potent way to shed light on the different ways that people can engage with each other and on how technology for collaboration can be made more comfortable for everyone. This is also highlighted by involving people who do not use spoken language, to understand how behavioural synchrony is also fundamental in supporting the creation of shared meaning. In this chapter, I aim to present briefly a sense of what digital technology is and why it might be different from the other tools that
have structured our interactions for hundreds of years. I then move on to interaction processes with the Co-EnACT framework: there are specific mechanisms, such as joint attention and construction of shared meaning, that collaborative interaction is driven by, and technology design can inhibit or support these. The chapter ends with a review of theoretical approaches in the three different disciplines just mentioned: psychology, HCI and education. Vygotsky’s theoretical perspective works very fittingly to tie together these fields and provides an underpinning for understanding the importance of collaboration in development. The next three chapters address how the different Co-EnACT mechanisms identified in this chapter can be supported through technology, looking at tangible technologies and screens large and small (tabletops and tablets), in pairs and small groups, and in work and play settings. In Chapter 5, I move on to looking at how different forms of technology might work with each other in larger groups, from a cross-device perspective that is being applied particularly to classroom learning and dialogue. Chapter 6 addresses the crucial question of collaboration in autism: this sharpens up our understanding of different forms that collaboration can take, and widens the scope of what counts as collaboration beyond the traditional literatures that focus heavily on spoken dialogue. This topic also enables a discussion of individual differences: we don’t expect everybody to be collaborating perfectly all the time, and different interactions work in different ways. A group of three children might work closely together in a classroom to solve a puzzle, a pair of non-verbal autistic children could share enjoyment of repeatedly replaying a video together, or a chance meeting of children in a playground might result in a new game being invented. 
By looking at processes such as shared understanding and synchrony, we can be open to considering different ways of collaborating, while still appreciating how design supports these processes, for children to be supported in technology-mediated collaborative interactions in ways that best suit their strengths and support their needs. Through most of the book, I consider collaborative technology in co-temporaneous, colocated interactions, rather than remote, asynchronous or solo experiences with technology, such as those commonly provided by social media, video production and consumption, gaming and solo virtual reality experiences. This chapter introduces the main themes of the book using two examples from my own work comparing interaction with and without digital technology: parent and child shared reading from books or screens, and small groups of children playing together with a standard or a technology-augmented playset.


1.3 Interactions Through Technology: Physical, Cultural and Embodied

A mother and child sharing a book: does it matter whether the book is on paper or on screen? Many people will tell you that they somehow prefer paper, though can’t always articulate why. Is it a sentimental attachment to a physical book or are there tangible properties of books and screens that alter our experience? What factors do we need to consider to answer this question? To investigate this, we (Yuill & Martin, 2016) compared 24 children between 7 and 9 years old reading an illustrated chapter book with their parents at home, with some of it read on paper and some on a tablet of similar size and format, using a very basic e-reading app, with the child and parent taking turns to read. We analysed the children’s recall and carefully examined video of the sessions, coding the interaction over time for changes in features, such as the warmth expressed between child and parent, and the degree of the child’s engagement with the story. There were differences between paper and screen, though they were quite subtle. In particular, interactions tended to be slightly lower in warmth with screen reading, and warmth dropped over time. So the differences were subtle but real, and other research supports these points (e.g. Ewin et al., 2020; Munzer et al., 2019). What is it about the two settings that could have made a difference? First, there are the physical properties of the two media: the flexibility of a book, the greater weight of the screen. Sharing a book can be encouraged by opening out the pages to invite joint engagement with pictures and text, and we know that beginning readers learn a lot from such experiences (Bus & van Ijzendoorn, 1997). Second, there are the cultural meanings and affordances of the two different technologies: a book is a book, mostly used for reading (perhaps occasionally to prop open a door or weigh down papers), whereas a tablet is a multi-function device, mostly designed for individual use and for a multitude of different activities.
We noticed that when we handed over the tablet, in contrast to the paper book, there was often an eager pair of child-size hands waiting to take possession of it. This implicit sense of ‘technology ownership’ by children was also clear in the interviews afterwards, with children expressing the idea that the tablet was particularly their domain of expertise (despite the fact that many of their parents worked daily with computers). The way people assign significance to physical artefacts alters how they interact
with them. Third, there was distinctive bodily engagement in relation to the book and to the other person. We noticed in our study that children tended to adopt a sort of ‘vulture’ posture hunched over the tablet, which tended to make it harder for the adult to come in close, but with the paper book, children seemed to sit with more relaxed postures, nestling in with the parent more closely, or stretching out (Fig. 1.1). It is these inbuilt structures and practices that we need to scrutinise to understand how technology can alter interaction. Shared reading, curling up with a book, is also a telling reminder of the recent flowering of embodied approaches to cognition. In reaction to the tendency in traditional information-processing approaches to focus exclusively on the individual, or even just an individual brain, there is increasing recognition of what is termed the Four Es (Menary, 2010):

Fig. 1.1 Different posture configurations reading from screens and books, with ‘vulture’ postures (upper) and ‘curling up’ postures (lower) (Source Yuill and Martin [2016], https://www.frontiersin.org/articles/10.3389/fpsyg.2016.01951/full)


embodied, embedded, extended and enacted cognition. Individual cognition is not just ‘in the brain’, but is part of a whole-body experience, including our emotions, as expressed and felt in our bodily movements. The body, and physical objects, can function to extend our thinking in different ways, for example in how we can use our fingers or external devices such as paper or calculator in helping us to count. Even more so for collaboration, our thinking processes are ‘out there’, embedded in the external world, physical and social. How we think and act is both visible to others in our movements, and constrained by the social norms and physical world we live with. When considering technology for collaboration, we need to be mindful of the possibilities and constraints imposed by the devices we use. This can involve comparing the different experiences for a pair of children, for example, in working together using a single mouse to control a computer screen, sharing access to a touchscreen, small or large, or playing together with a whole set of technology-augmented objects. The possibilities for this interaction will also be framed according to the setting, such as school, home or laboratory.

1.4 The Co-EnACT Framework of Mechanisms for Collaboration: Engagement, Attention, Contingency, and Control for Understanding Together

The shared reading study is an example of an adult supporting a child in a structured formal task using screen technology. I now move to a more unstructured play interaction between peers, not with screens, but using technology-augmented play equipment. This enables me to introduce the framework for understanding mechanisms of collaboration that I develop through this book, namely Collaboration through Engagement, Attention, Contingency and control to support understanding Together (Co-EnACT). Can technology change the way that children interact with each other in play? The Augmented Knights’ Castle (AKC) is a technology-augmented Playmobil® medieval playset designed by Steve Hinske to support collaboration, and in particular, cooperative play (Fig. 1.2). Augmented Reality (AR) bridges physical objects and digital effects. In the case of the AKC, the play figures are fitted with tiny radio-frequency


Fig. 1.2 Augmented Knights’ Castle (AKC) with sample sounds

identification tags (RFIDs) under their feet, which communicate the location of the figure to sensors in the base of the castle. The software enables appropriate speech and sounds to be played according to the location of the figure. For example, if a child moves a knight to approach the drawbridge, the knight might shout for the gates to be opened, or the dragon could roar as it senses a figure approaching. The research question that occupied us was: what difference could this make to children’s joint play? Surprisingly, quite a big difference: we compared 5–11-year-old children playing either with the augmented set or with the same set with the technology switched off (Yuill et al., 2014). In our studies, we found double the frequency of cooperative play with the augmented set compared with the plain one. What sorts of mechanisms did we have to consider to explain this striking finding? First, we had to consider how children achieved shared engagement in the toy with their peers (or the screen with their parent in the reading study). When the knight announces that he will storm the keep, the children experience this event together: audio is a powerful way of providing an inevitably shared moment of experience, with mutual awareness as all hear, and are aware of hearing, the same event at the same time, in contrast to each child in their own individual space experiencing it, for example if they wore headphones.
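The trigger logic just described (a tag read at a location, then a matching sound) can be sketched as a simple lookup. The figure names, zones and sound files below are invented for illustration; this is not the actual AKC software, whose implementation is not documented here.

```python
# Hypothetical sketch of RFID-triggered audio in the spirit of the AKC:
# a sensor read reports which figure is at which zone of the castle, and
# a lookup table maps that pairing to a sound. All names are invented.

SOUND_TABLE = {
    ("knight", "drawbridge"): "open_the_gates.wav",
    ("dragon", "keep"): "dragon_roar.wav",
}

def on_tag_read(figure, zone):
    """Called when the castle base detects a figure's RFID tag in a zone;
    returns the sound file to play aloud, or None for silent locations."""
    return SOUND_TABLE.get((figure, zone))

# A child moves the knight to the drawbridge:
print(on_tag_read("knight", "drawbridge"))  # open_the_gates.wav
```

Because any sound plays aloud from the castle itself, every child present shares the same audio event at the same moment, which is the property identified here as supporting mutual awareness.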


Second, we noticed the very important role of joint attention: with the AKC, the children spent plenty of time finding sound effects and creating their own, and were keen to display these to their peers. We saw many examples of bids for attention, with excited calls to ‘Look at this!’ while holding up a play figure. Joint attention underlies a flow of contingent action, in which one person’s actions are responsive to the actions of the other: one child makes the knight attack the castle wall, another picks up the dragon to chase away the knight and a third rushes over with the king to help the knight defend the castle.

The way this contingency works depends heavily on how different children can control what happens: in this case, control is highly distributed, since any of the play figures can be picked up by anyone and made to act. Underlying this interaction is children developing shared understanding together, in this case about what the characters’ motivations could be and how the joint play might evolve (see Chapter 4 for more detail). For the augmented castle, compared with the standard one, children tended to construct narratives that were more varied, spending less time negotiating what the story would be about and more time developing the story itself. In sum, they collaborated better when the castle set-up was augmented with the characters making sounds.

Creating a shared story involved meshing together different ideas: evidence from the castle study suggests that shared understanding was achieved more readily in the augmented condition. In more structured tasks that involve problem-solving, children may also have an externally-imposed shared goal to reach. All this happened against a setting of shared background assumptions about joint action, for example that one person won’t just get up and leave the activity without explanation, and that it’s not polite for one child to grab all the toys (Knoblich et al., 2011).
Many theorists argue that children collaborate naturally, as an evolutionary adaptation that helped drive the powerful sociability of humans. The Co-EnACT framework developed through Chapters 2–4 helps us define co-located collaboration: it involves shared engagement in an activity with mutual attentional focus, to co-create and dynamically maintain a shared understanding of the activity, pursued in ways that enable synchronised and contingent actions through shared control.

1.5 How Do Different Disciplines Characterise Collaboration?

Research on children’s collaboration usually involves dyads or small groups, and the ‘task’ can be something with a specific goal, such as ‘finish reading this chapter’, or something more open-ended, such as ‘playing together with the castle’. Relevant research into children and collaboration crosses several disciplines. These tend not to cross-refer, but here I synthesise these strands so as to develop a more rounded understanding: for example, when designing shareable technology in the field of HCI, it helps to have some understanding of the psychology of how collaboration develops, what supports it, and what collaboration in its turn enables children to do. Since so much of children’s collaboration happens in groups larger than the family—in the nursery or classroom, for example—we also need to draw lessons from educational research and computer-supported collaborative learning (CSCL) about how technology has been proposed to support co-working in school. Below, I look briefly at how each field tends to define and characterise collaboration, and what we can take from how each approaches the study of collaboration.

CSCL research

This field furnishes us with a valuable definition that has guided my own approach to children’s collaboration through technology. Roschelle and Teasley (1995, p. 70) specify collaboration as a “coordinated, synchronous activity that is the result of a continued attempt to construct and maintain a shared conception of a problem”. They contrast this with cooperative problem-solving involving the division of labour: in that case, partners need not refer to each other until the products of their individual work are combined (see Fig. 1.3). These authors’ analysis of the conversation of two young men solving a physics problem provides a classic example of how language and actions are analysed in the substantial CSCL literature.
The learners used language to check and create shared understanding, to resolve misunderstandings and disagreements, and to coordinate their physical activities. The computer simulation enabled them to try out ideas, provided a shared reference space and constrained what interpretations were possible.

Fig. 1.3 Cooperation (left) and collaboration (right)

Psychology

The psychology literature usually uses the term ‘cooperation’ rather than ‘collaboration’ and, unlike work in CSCL, attaches no significance to the distinction: the emphasis tends to be on skill acquisition rather than process development. Roschelle and Teasley’s definition of collaboration adds a crucial emphasis on the processes of continued coordination and shared understanding. Understanding these processes helps us think about what the mechanisms of development might be and how intervention can support them.

Two approaches dominate the developmental literature. The first asks what the prerequisites of cooperation are and how early they appear in individual development; I have drawn on this literature in proposing the factors of attention, engagement, and so on, listed above. The second derives from interest in the evolution of cooperation and the question of whether there are uniquely human attributes, deriving from our evolutionary history, that we can identify early in individual development. Evolutionary biologists such as Laland (2018) have written eloquently about how humans might have evolved collaboration as a particular specialism that fosters transmission of culture and hence creation of the complex features of human civilisations, with their physical structures—cities, pyramids, irrigation systems—and social institutions—world trading systems, legal structures, religions, supranational institutions. Moll and Tomasello (2007) see collaboration (or, in their term, ‘cooperation’) as having a special role in development, in their Vygotskian intelligence hypothesis: a proposed evolutionary shift that “transformed human cognition from a mainly individual enterprise into a mainly collective cultural enterprise involving shared beliefs and practices,
the foundation of cultural/institutional reality” (ibid., p. 646). These authors argue that “regular participation in cooperative, cultural interactions during ontogeny leads children to construct uniquely powerful forms of cognitive representation” (ibid., p. 639). This account informs my stance that collaboration is important both as a valuable activity in itself and as a crucial mechanism for supporting social development and understanding (although I am less certain about some claims for human uniqueness).

Research into when young children first show particular collaborative capacities, such as jointly attending or understanding shared goals, often uses highly-structured laboratory tasks, and does not directly help in addressing how well children can collaborate and how technology might support this. The focus on early, presumably pan-human, attributes of collaboration puts individual, contextual and cultural variability into the shade (see also Leavens et al., 2019). How do the technological, social and other environmental circumstances of a setting elicit or inhibit particular cooperative behaviours, and how can we design to maximise these? As we have seen, even simple tweaks, such as using a digital or paper book, or adding sound augmentation to a toy, can change interaction within dyads and small groups.

Human–Computer Interaction

This is where HCI is of special value. It is itself a multi-disciplinary field focusing in general on how humans interact with computers (and technology more broadly) and specifically on designing better systems for a whole range of purposes. This focus on design, and on the affordances of the environment and tools, is something that can often be invisible or unexamined in psychological research, especially research in the lab.
HCI has a strong focus on innovation and often playfulness, particularly in work with children: for example, the innovative Ambient Wood project (Rogers et al., 2002) equipped woodland in Sussex with technology-augmented objects to support children learning together about biology and ecology. A common HCI approach is to make qualitative, ethnographic and relatively open-ended observations, sometimes in everyday settings such as schools. The work by Benford and colleagues (Benford et al., 2000) co-designing collaborative storytelling technologies such as KidPad, used in UK and Swedish schools, is a classic and well-cited example. There is also a strong emphasis on ‘research in the wild’ (Rogers et al., 2013), deriving from anthropological work on ‘cognition in the wild’: studying how people use technology in authentic everyday settings, rather than in unfamiliar and artificially-controlled laboratories, given that technology needs to be designed to be closely embedded in everyday lives. The thrust of much of this work is how to design technology to improve collaboration in face-to-face settings. There is abundant work involving children of primary age, with consideration of the physical and cultural factors mentioned above, and I cover some of that here.

Education

The education literature on collaboration also provides rich accounts of contexts that promote collaboration in groups in classrooms. A primary focus has been on collaborative dialogue, as expounded by projects such as Thinking Together (Dawes et al., 2000). As we will see in Chapter 5, there is ample evidence that dialogue is a very important mechanism through which children develop their understanding of complex concepts, and there is also a good understanding of how teachers’ talk can best stimulate this sort of classroom talk. A particularly striking illustration of this is Talk Factory (Kerawalla et al., 2013), which enables classes to create and view a rolling graphic display of the sort of talk happening during a lesson. There is a focus on authentic settings, primarily the classroom, and a range of work across both primary and secondary education contexts. Methods include both quantitative evaluations of learning in whole-class interventions and more qualitative analyses and case studies tracing the ebb and flow of shared understanding as a process, rather than an outcome, as in Roschelle and Teasley’s work mentioned above. The focus in this work on analysis of dialogue means that factors in the physical environment and context have been less central than in HCI. However, the work in real classrooms is a salutary reminder of the limitations imposed by what technology is available.
Occasionally these different disciplines have been powerfully combined, as for example in the work by Howe, Tolmie and colleagues in the 1990s on how computer-mediated collaborative classroom dialogue shapes learning of scientific concepts (Howe & Tolmie, 1999), and Crook’s book in the same period on computers and collaborative learning (Crook, 1996). Since those times, a whole raft of new, widely-available technologies, notably touchscreens, have appeared, and continue to appear, meaning that collaboration can happen more fluidly and be made accessible to children across a much wider range of educational needs. Despite this, these technologies still tend to be designed, and viewed in everyday life, as devices for individuals.

1.6 Theoretical Perspective: Vygotsky

Different disciplines can complement each other, as identified above, but trying to coordinate different theories can sometimes be a challenge. Fortunately, there is a broad theoretical perspective that underlies much of the thinking in HCI, developmental psychology, education and CSCL: the work of Vygotsky (see Wertsch, 1988). His writings date from before any digital revolution, but they are especially well-suited to looking at children’s collaboration through technology, because of his emphasis on how the tools we use in everyday life shape thinking, and on the role of social interaction in children’s development. In this section I briefly describe how this approach underpins (i) the role of the design of cultural artefacts, in this case technology, as crucial tools shaping human interaction; (ii) psychological development as proceeding from inter- to intra-psychological (from social to individual); (iii) the role of pedagogical guidance and support by a more able other in the zone of proximal development; and (iv) the analysis of dialogue as an engine of individual cognitive development. Each is briefly explained below.

Well before the advent of digital technology in everyday life, Vygotsky understood that cultural artefacts shape our behaviour and thinking. Children are born into a designed world. For even the most basic functions, such as feeding, different cultures may use spoons, fingers or chopsticks, which the child has to master. Each of these practices involves a different set of physical objects and social practices. The way technology is designed powerfully frames the way we interact with it: a mobile phone, for example, being small, easily grasped in the hand and containing large amounts of personal information, conduces to solo activity.
Of course, this doesn’t completely prevent people from subverting this individual use: children in particular can be seen crowding closely together to share a phone image, but design makes some sorts of interaction easier than others. Even the orientation of a tool, whether work happens on a horizontal surface such as a smart table or on a vertical one such as a whiteboard, can make a striking difference to how people work together (Rogers & Lindley, 2004). Vygotsky was ahead of his time in understanding that development is intimately tied up with the tools involved in a task.

Underlying the roles of artefacts is the idea that development proceeds from inter- to intra-psychological: goals such as feeding oneself are first achieved socially, with child and caregiver operating together through pedagogical guidance, in the zone of proximal development (ZPD). The end result is that the child becomes able to perform the task independently. The process through which this happens is described by the powerful metaphor of scaffolding (Wood et al., 1976): more experienced people (such as parents, teachers or more able peers) provide a structure to support children learning a skill, giving just enough timely help and gradually withdrawing support so that the ZPD is extended, until the child can complete the task alone. Clearly, this is in itself a collaborative process according to our definition. Scaffolding is dynamic, relying on constantly shifting shared understanding, which makes it hard to develop technology that provides and withdraws support sensitively (Sharma & Hannafin, 2007) or that lets students choose their own scaffolding through hints (Harris et al., 2009). However, technology can provide an environment that helps children to work or play together through the participants scaffolding each other, by supporting the mechanisms identified earlier: engagement, attention, control, contingency and shared understanding, as the following chapters describe. In the examples I describe, technology is there to support interaction between humans, rather than to replace it: interacting with each other through technology rather than interacting with technology.

The crucial role of language has already been mentioned: as Roschelle and Teasley say, “the most important resource for collaboration is talk” (Roschelle & Teasley, 1995, p. 94). For Vygotsky, language is a tool that supports symbolic thinking. Articulating and debating ideas with other people doesn’t just express our internal thought processes: it is part of the process of developing understanding. Inner speech also plays a central role in Vygotsky’s idea of how we regulate our behaviour. Does this mean that people who do not have expressive language can’t collaborate?
Humans are not the only species to collaborate: many social species, such as ants and termites, not to mention our primate cousins, excel in working together, apparently without using symbol systems. Many proponents of embodied cognition (see Chapter 6) would reject the idea that language is crucial for shared understanding. De Jaegher and colleagues (e.g. Fantasia et al., 2014) point to the fact that from birth, infants coordinate their activity with their caregivers. In these authors’ enactive approach to cognition, we make sense of the world by moving within it. Coordinating action therefore amounts to generating a shared understanding that does not need to be created through language or symbols. Research into synchrony of action, and work using technology to support collaboration in minimally-verbal autistic children (discussed in Chapter 6), arguably illustrates this.

In sum, this extended Vygotskian approach leads us to address the question of how technology can support children’s collaboration by asking:

• How does the technology set-up support the mechanisms of collaboration we identified?
• How does the interaction between participants shape their use of the technology?
• What do conversations tell us about how shared understanding supports individual learning?
• How might technology influence coordination of movement to support shared understanding?

References

Benford, S., Bederson, B. B., Åkesson, K. P., Bayon, V., Druin, A., Hansson, P., Hourcade, J.-P., Ingram, R., Neale, H., O’Malley, C., Simsarian, K., Stanton, D., Sundblad, Y., & Taxén, G. (2000). Designing storytelling technologies to encourage collaboration between young children. Conference on Human Factors in Computing Systems—Proceedings, 28(99), 556–563.

Bus, A. G., & van Ijzendoorn, M. H. (1997). Affective dimension of mother–infant picturebook reading. Journal of School Psychology, 35(1), 47–60.

Crook, C. (1996). Computers and the collaborative experience of learning. Psychology Press.

Dawes, L., Mercer, N., & Wegerif, R. (2000). Thinking together: A programme of activities for developing thinking skills at KS2. Questions Publishing Company.

Ewin, C. A., Reupert, A. E., McLean, L. A., & Ewin, C. J. (2020). The impact of joint media engagement on parent–child interactions: A systematic review. Human Behavior and Emerging Technologies, 3(2), 230–254.

Fantasia, V., De Jaegher, H., & Fasulo, A. (2014). We can work it out: An enactive look at cooperation. Frontiers in Psychology, 5, 874.

Harris, A., Bonnett, V., Luckin, R., Yuill, N., & Avramides, K. (2009). Scaffolding effective help-seeking behaviour in mastery and performance oriented learners. In 14th International Conference on Artificial Intelligence in Education (pp. 425–432). IOS Press.


Howe, C., & Tolmie, A. (1999). Productive interaction in the context of computer-supported collaborative learning in science. Learning with Computers: Analysing Productive Interaction, 24–45.

Kerawalla, L., Petrou, M., & Scanlon, E. (2013). Talk Factory: Supporting ‘exploratory talk’ around an interactive whiteboard in primary school science plenaries. Technology, Pedagogy and Education, 22(1), 89–102.

Knoblich, G., Butterfill, S., & Sebanz, N. (2011). Psychological research on joint action: Theory and data. Psychology of Learning and Motivation, 54, 59–101.

Laland, K. N. (2018). Darwin’s unfinished symphony: How culture made the human mind. Princeton University Press.

Leavens, D. A., Bard, K. A., & Hopkins, W. D. (2019). The mismeasure of ape social cognition. Animal Cognition, 22(4), 487–504.

Menary, R. (2010). Introduction to the special issue on 4E cognition. Phenomenology and the Cognitive Sciences, 9(4), 459–463.

Moll, H., & Tomasello, M. (2007). Cooperation and human cognition: The Vygotskian intelligence hypothesis. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 362(1480), 639–648.

Munzer, T. G., Miller, A. L., Weeks, H. M., Kaciroti, N., & Radesky, J. (2019). Parent–toddler social reciprocity during reading from electronic tablets vs print books. JAMA Pediatrics, 173(11), 1076–1083.

Rogers, Y., & Lindley, S. (2004). Collaborating around vertical and horizontal large interactive displays: Which way is best? Interacting with Computers, 16(6), 1133–1152.

Rogers, Y., Price, S., Harris, E., Phelps, T., Underwood, M., Wilde, D., Smith, H., Muller, H., Randell, C., Stanton, D., Neale, H., Thompson, M., Weal, M., & Danius, T. (2002). Learning through digitally-augmented physical experiences: Reflections on the Ambient Wood project. https://eprints.soton.ac.uk/428865/

Rogers, Y., Yuill, N., & Marshall, P. (2013). Contrasting lab-based and in-the-wild studies for evaluating multi-user technologies. In The SAGE Handbook of Digital Technology Research (pp. 359–373). Sage.

Roschelle, J., & Teasley, S. D. (1995). The construction of shared knowledge in collaborative problem solving. In Computer Supported Collaborative Learning (pp. 69–97). Springer.

Sharma, P., & Hannafin, M. J. (2007). Scaffolding in technology-enhanced learning environments. Interactive Learning Environments, 15(1), 27–46.

Wertsch, J. V. (1988). Vygotsky and the social formation of mind. Harvard University Press.

Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17(2), 89–100.


Yuill, N., Hinske, S., Williams, S. E., & Leith, G. (2014). How getting noticed helps getting on: Successful attention capture doubles children’s cooperative play. Frontiers in Psychology, 5, 418.

Yuill, N., & Martin, A. (2016). Curling up with a good e-book: Mother–child shared story reading on screen or paper affects embodied interaction and warmth. Frontiers in Psychology, 7, 1951.

Yuill, N., & Rogers, Y. (2012). Mechanisms for collaboration: A design and evaluation framework for multi-user interfaces. ACM Transactions on Computer-Human Interaction, 19(1), 1–25.

CHAPTER 2

Engagement and Joint Attention

Abstract Children’s engagement is identified at four main levels: solitary, onlooking, parallel-aware and cooperative. These levels differ in the degree of shared attention, contingency, control and shared understanding involved. Design influences how children transition between different levels. Initial engagement involves creating potential entry points to draw children in. Augmented objects can increase the likelihood of transition to more collaborative interaction through sound and vision, yet technology may also interfere with engagement. Tabletops and digitally-augmented objects can work well for collaboration through supporting shared awareness. The same factors in engagement can be considered for using technologies that are more widely available.

Keywords Engagement · Entry points · State transitions · Interference · Awareness

In this chapter, I focus on how technology can support children into the initial stages of collaborative interaction: engagement and attention. To help in thinking about different levels of engagement, I use a categorisation of play patterns that maps quite neatly onto the Co-EnACT framework of collaboration presented in Chapter 1. I then introduce ideas from HCI about design principles for getting people involved with technological artefacts. The Augmented Knights’ Castle (AKC), introduced in Chapter 1, provides a perfect example of how design features can draw children in, and helps in thinking about transitions in the ebb and flow of collaboration: how children move from one sort of engagement to another, whether towards closer collaboration or towards disengagement. I also discuss how technology can interfere with engagement.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. N. Yuill, Technology to Support Children’s Collaborative Interactions, https://doi.org/10.1007/978-3-030-75047-3_2

2.1 Levels of Engagement: Parten’s Play States

‘Engagement’ can mean anything from the very first stepping over a threshold to an all-encompassing immersion with another person. The term is used very differently in different literatures: in HCI, it is often viewed as the initial capture of interest that gets a ‘user’ to approach the technology, while in education and psychology, it more often means the sense of deep absorption in a task. In all three fields, engagement can involve solo absorption with technology: sometimes that is the purpose. For our current purposes, though, the focus is on shared engagement (hence, social participation) with other people through the technology rather than with the technology: the technology is intended as a facilitator of collaboration rather than the focus of attention itself.

A long-standing taxonomy of levels of participation, developed by Parten (1932), as modified by Robinson et al. (2003), is useful for our purposes for two reasons: it was derived from careful direct observation over several months of young children’s free play in a nursery setting, and it focuses on the degree of social engagement. Further, the shifts between increasingly social levels mirror fairly closely the factors identified in collaboration in Chapter 1, as shown in Fig. 2.1. Parten expressed very clearly the need to distinguish between a group of playing children who are merely a ‘congregation of individuals’ (ibid., p. 248) and children being active members of a social group with certain obligations to it.

It’s important to remember that the different states are not a hierarchy: children move fluidly back and forth between the different states and no one would be expected to be in a state of cooperative play 100% of the time. I have kept the term ‘cooperative play’ here because that is so commonly used in this literature, but such play generally involves the same mechanisms as collaboration, in the sense of involving the continued negotiation of shared meanings.
Fig. 2.1 Different levels of engagement in relation to mechanisms of collaboration

                        Level of participation
Mechanism               solitary   onlooker   parallel-aware¹   associative   cooperative
Engagement?             no         yes        yes               yes           yes
Shared attention?       no         no*        yes               yes           yes
Contingency?            no         no*        no*               yes           yes
Shared understanding?   no         no*        no*               no*           yes

(Note: * These features might be fleetingly available at this level, as play ebbs and flows, but are not needed for the level of engagement specified. ¹ I use the Robinson et al. [2003] definition of parallel-aware play, where children do not play with each other but there is clear mutual awareness, in place of Parten’s simple ‘parallel’ label.)

Cooperative play is important for the same reasons that collaboration is, in supporting the
emotional and cognitive benefits of experiencing shared understanding, and also for its longer-term consequences in supporting the growth of other social–cognitive proficiencies.

The least engaged state is solitary play, where a child acts alone with no acknowledgement of others. Clearly, this involves neither engagement with nor attention to others. Moving to onlooker behaviour, a child is certainly engaged, simply through passive observation of others’ activity, though without taking any part: there is no ‘jointness’ in this attention. It is quite common in the play of young children of around 2–3 years (Barbu et al., 2011), and there is good reason to think that this sort of watchfulness from the sidelines can be valuable, particularly for more inhibited children (Bakeman & Brownlee, 1980), for example as a way of scoping out and reflecting on what’s happening and getting comfortable with a situation.

This onlooking contrasts with parallel-aware play, where activities are pursued side by side, though with no clear mutual influence. This adds the factor of shared attention to engagement: each player is aware of the activity of other players, and likely also to be aware of the others’ awareness. This awareness is often thought of as visual, but it could be in other modalities, such as auditory. Awareness has more general aspects beyond joint attention, and so has its own section (Sect. 2.4).

Parten’s next state, associative play, involves children playing together without obvious structure or coordination. This kind of play will certainly involve contingent actions, such as one child offering a toy to another, exchanging toys and imitating actions. The addition of ‘togetherness’, a broad shared understanding of what the shared play is about, brings truly cooperative play, where there might be shared norms, as in a pretend scenario with expectations about roles, or rules, as in a structured game of chase. This cooperative play involves the same features used in the definition of collaboration in Chapter 1.

We’ve already seen that the AKC supports twice as much cooperative play as a non-augmented version of the same toy, and that AKC groups created imaginative and well-connected narratives. That makes it an important quest to discover what factors bring children to these different levels of engagement: the mechanisms supporting transition. Much of the psychological research in this area has focused on macro-developmental sequences over years and on individual differences in engagement states. HCI research, though, provides useful models, at a more micro level, of how the design of technology can draw users together in collaborative activities within the course of an interaction. In particular, Hornecker et al. (2007) provide a useful set of concepts for understanding how the shareability of technology comes about. These concepts apply well to understanding how we can use technology to support children’s engagement. Although transition is not always a straight pathway up and down the adjacent levels of engagement shown in Fig. 2.1, the levels provide a useful means of considering what prompts shifts between the different states, with the aim of seeing how technology design can draw children in to collaboration.
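Read column by column, Fig. 2.1 behaves like a simple decision procedure: each successive level of participation requires one further mechanism. The sketch below is a hypothetical coding aid, not part of the framework itself: the level names follow the figure, but the function and its cumulative-mechanism assumption are mine (it ignores the fleeting ‘no*’ cases):

```python
from dataclasses import dataclass

@dataclass
class Observation:
    engaged: bool               # attends to others' activity at all
    shared_attention: bool      # mutual attentional focus on the activity
    contingency: bool           # actions responsive to a partner's actions
    shared_understanding: bool  # negotiated sense of what the play is about

def participation_level(obs: Observation) -> str:
    """Most social Parten/Robinson level licensed by the mechanisms present."""
    if obs.shared_understanding:
        return "cooperative"
    if obs.contingency:
        return "associative"
    if obs.shared_attention:
        return "parallel-aware"
    if obs.engaged:
        return "onlooker"
    return "solitary"

# A child watching peers without joining in is an onlooker;
# contingent toy exchange without a shared narrative is associative play.
print(participation_level(Observation(True, False, False, False)))  # onlooker
print(participation_level(Observation(True, True, True, False)))    # associative
```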

2.2 Transitions Between Modes of Engagement

2.2.1 Solitary to Onlooker Engagement

Entry points: To get a passer-by to notice and engage with a piece of ambient technology in the first place, there need to be entry points. Hornecker et al. identify three types. First is the ‘point of prospect’: users of a technology need time and space to work out what is involved, and whether it might be something they want, and are able, to engage with. Having a way to scope out what a technology involves helps users step over the initial threshold. This step can be particularly important for children who are inhibited, wary or unfamiliar with the technology: being an onlooker has its natural place in progressing towards collaboration.


A benefit of the AKC is that it was a familiar type of object to the children—action figures and a model castle—and in the setting we used, a school library, it was clear that the object was for public use, with lots of different figures, so children could feel they were allowed to go over and pick up a figure without needing to ask permission. Children are cued into rules about ownership from an early age (Blake & Harris, 2009), even if ownership is not always respected.

Quite minor cues can signal what’s permitted, and assumptions that technology is ‘personal’ can interfere with collaborative use. For example, in one study where we tried to use tablets as shareable and swappable devices for groups of four, we used a distinctive-coloured protective cover for each one, to help us in later video analysis. Unfortunately, with the very first set of participants, one child grasped a tablet firmly, announcing ‘mine’s the red one’, meaning the other children turned away to find their ‘own’ personal colour—not at all what we had intended. Similarly, younger people might see a specific technology as their domain rather than that of adults: we noticed this in our shared reading study, with some children feeling comfortable to take charge of the tablet in a way they did not do with the paper book. In contrast, it’s not always clear what ownership children have of space on a classroom whiteboard.

Norms define quite powerfully how shared access works: for example, one school might identify and label specific chairs or coat pegs, and even young children call out violations of ownership, but in other schools these items are public goods (Huh & Friedman, 2017). Schools are often the first institution a child attends, and months of experience of nursery, rather than age, seem to predict how well children understand such conventions: in a neat study by Siegal and Storey (1985), nursery ‘veterans’ understood the rules better than older children who had just joined.
So permission, ownership and feeling welcome are important first steps towards initial engagement. Once this initial prospect is in place, the most engaging designs produce what Hornecker et al. call a ‘honeypot effect’: if other people are clustering round, that suggests there is something interesting to be experienced. The power of visual attention can be particularly strong. Literature on conformity shows that there is a strong motivation to engage, especially bodily, with what other people are attending to, and to have a sense of ‘being in this thing together’: we only need to see a few other people looking up at a building to be unthinkingly moved to follow their gaze (Coultas & van Leeuwen, 2015).


N. YUILL

Honeypot effects can be produced by different sensory modes (such as an enticing smell of fried onions leading to a food stall, or hearing the chimes of an ice-cream van), but also by people’s prior knowledge. For example, we had no trouble recruiting 262 family groups in just a few hours at a science festival, because we had acquired some of the first iPads to appear in the UK, and people were keen to have a go with what was then a novelty (Yuill et al., 2013). On the other hand, there need to be minimal barriers to engagement: if something fails to attract, or, at further stages of engagement, involves the need to log in, to wear bulky items such as headsets, to wait for batteries to charge or to figure out a complex set of unnatural actions, then even initial engagement becomes less likely. A particularly fine example of a progressive ‘honeypot’ is the full-body interaction game Lands of Fog (Mora-Guiard et al., 2017), designed with autistic children in mind (see Chapter 6). Participants wander round a foggy virtual land exploring what might be there. They carry a glowing ‘mosquito net’ that allows them to interact with the virtual world by catching bright virtual fireflies, which ‘dance’ in a greeting pattern when they approach fireflies caught by another player. As a player approaches the fireflies, the lights can move tantalisingly away, towards another player’s fireflies, so that players are supported to move towards each other.

2.2.2 Onlooker to Parallel-Aware Engagement

Shifting from being an onlooker to working or playing in parallel involves attention to other participants. This could involve one person attending to another who is oblivious to them, or a more interactive joint attention where each participant becomes aware of the other. This shift can be supported by access points, the next step in Hornecker et al.’s model of engagement. These are factors that move people to get involved after their initial approach to a device, display or task. Access points include perception and manipulation: what the potential user can see and hear, and what they feel able to do. Again, the AKC provides an example of inviting access: the children involved knew very well that the action figures were for children, and there were many figures available, so there was no scarcity of resources and little chance of conflict. The figures also suggest actions and potential narratives that the children knew about, even if they had only a shaky grasp of history. One original vision of the AKC was that
it could also act as an educational device, as the figures could provide spoken information about life in medieval times. Hinske et al. (2009) completed a study comparing the AKC to an unaugmented version with over 100 children, to see whether children would learn about the Middle Ages just through being incidentally exposed to some of the characters mentioning facts about medieval life (e.g. ‘My sword is worth as much as seven cows’). Children playing with the augmented set, which included such information, showed more learning on post-assessment questions than the children with the plain set, and maintained an advantage on re-assessment even two months later, with almost no drop in performance from the immediate post-play assessment. It’s worth noting that, based on the experience of many informal studies with the playset, we found that there is an art in having enough figures, and enough sounds, but not too many. Large numbers of sounds and objects run the danger of silencing the children’s voices or providing too much conflicting noise. It’s easy to overdo attention-grabbing factors, especially if they are not related to the activity the designer wants to encourage. A classic example is the ‘fun’ games that sometimes come with e-books for children: one study found that children spent 43% of their time playing the games and little time actually engaging with the text (De Jong & Bus, 2002). Working out what helps parallel engagement is not an exact science: some imagination, trial and error, and reflection on close observation of what children do in a specific situation are all tools for designing set-ups to support it.

2.2.3 Parallel-Aware to Associative and Cooperative Play

This shift involves a move towards actions that are more contingent on each other than in parallel activity: in other words, closer contingency (one person’s action or speech relating to another’s action), moving towards the development of shared understanding. This relates to the final access point in the Hornecker et al. (2007) model, of fluidity: “how easily people engaged in shared interaction with a system can switch roles or interleave their actions, handing over control, continuing somebody else’s action at mid-point or inserting something into it” (ibid., p. 336). There will be more to say about control specifically in Chapter 3, but fluidity in movement and action can be achieved in many different ways, depending on how structured a setting is. Every teacher who sets up classroom furniture in a circle or in rows, or wants to do groupwork in a lecture theatre with
fixed bench seating, understands how fluidity of movement changes the ways people engage with each other. In the AKC study, we used two different versions of the set that differed in how they supported fluidity of movement. The first study, carried out in Germany with the first prototype set, involved a single, large, rectangular base and the children, in pairs or threes, could move round the edge, though the size meant that they were sometimes a fair distance from the kit and from each other, and could not reach across the whole base. The second study, in the UK, used a set with three smaller ‘island’ bases, placed with enough room to move round and between them but close enough that children were very easily able to show, exchange and coordinate movement of objects. Fluidity on its own doesn’t guarantee coordination. The layout did not prevent children from playing separately: one triad in the non-augmented group ended up playing with their backs to each other, each on a separate island. This is where the role of other features of entry and access is important: having the interesting sound effects in the augmented set appeared to be a powerful way of getting engagement and attention between children, and the islands helped them to be closer together so that visuals and sounds were easy to share. I look in more detail at the role of sounds in suggesting play themes and actions in Chapter 4.

2.3 Evidence on Transitions Between States of Engagement

Obviously transitions are not always made neatly between neighbouring levels of engagement, but it’s possible to look empirically at interaction sequences to map the frequency of transitions between different play states. This provides clues about how technology design supports movement towards more collaborative engagement. Robinson et al. (2003) provided a very effective method of assessing this question by filming pre-schoolers’ play, coding play states and then computing the probability of each play state being followed by any other state. We used a similar method, and employed contingency analysis, to analyse children’s play in our AKC data (Yuill et al., 2014: note the transition data below is not reported in that paper). Figure 2.2 shows the transitional probability of children in one state moving to another state, for the augmented and non-augmented toy, separately for boys and for girls. The patterns for the AKC were very similar across children: there was a strong two-way flow between parallel and cooperative play, with parallel play leading to cooperative play around 25% of the time. The KC patterns were strikingly different: the link between parallel and cooperative play was only around half as common (11–14%) in the unaugmented set: there, parallel play was linked in a cycle with solitary play (20–26% of cases). For the girls in the KC condition, solitary play did sometimes lead to parallel play, but for the boys, solitary play more commonly led to being an onlooker.

Fig. 2.2 Likelihood of moving between play states in AKC (upper panel) vs KC (lower panel) for boys and girls (Note Percentages show the likelihood of moving from one state to another: arrows are thicker where this exceeds 15%. Likelihoods below 5% are not shown)
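This kind of transition analysis can be sketched in a few lines: given a sequence of coded play states sampled from video, count how often each state is immediately followed by each other state, and normalise to probabilities. The state labels and the sample sequence below are purely illustrative, not the actual study data.

```python
from collections import Counter, defaultdict

# Hypothetical coded sequence of play states for one group,
# sampled at fixed intervals from video (labels are illustrative).
states = ["solitary", "solitary", "parallel", "parallel", "cooperative",
          "parallel", "cooperative", "cooperative", "parallel", "solitary"]

def transition_probabilities(sequence):
    """For each state, the probability of each state that follows it."""
    pair_counts = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        pair_counts[current][nxt] += 1
    return {
        state: {nxt: n / sum(followers.values())
                for nxt, n in followers.items()}
        for state, followers in pair_counts.items()
    }

probs = transition_probabilities(states)
# e.g. probs["parallel"]["cooperative"] is the share of parallel-play
# moments that were immediately followed by cooperative play.
```

In a real analysis, the sequence would come from frame-by-frame or interval coding of video, and the resulting matrix is what diagrams like Fig. 2.2 visualise with arrows of varying thickness.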


Obviously these are small numbers of participants, and there will be individual likes and dislikes affecting children’s play, but this sort of analysis helps in thinking about what sorts of factors might be affecting engagement. The most striking transitions involve shared attention: in the AKC, solitary play readily shifted to parallel play (i.e. with the addition of shared attention), in contrast to the KC, where parallel play often slipped children back into solitary play, or solitary play led to onlooker play, hence providing evidence of engagement but not shared attention. To unpick this, we undertook a fine-grained analysis of the video to see how shared attention was achieved. We did so by coding a small section of the play for each occasion when any child made a bid for the attention of others in the group. This could be holding up a toy and saying ‘Look at this!’, demonstrating a sound, or any other action aimed at gaining the attention of other players. We coded whether these bids were successful or unsuccessful in getting others’ attention. This analysis showed that children had a slightly higher chance of succeeding in their bid when the augmented castle was involved, rather than the ordinary playset. This might seem obvious: if a toy makes a noise, then other children will look. But this was not always what happened: often children held a toy up high to show other children, and for the technology we were using (limited-range RFID antennae in the base of the set), the distance made it unlikely that the sound would be played. It looks as if the better attention-getting involved the added potential for noises rather than the noises themselves. We think that this small boost in gaining others’ attention helps to account for why children using the AKC could move more easily into cooperative play (see Chapter 4, and Yuill et al., 2014). How a technology supports children’s attempts to get attention is therefore a crucial consideration.
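The bid-coding analysis reduces to a simple proportion per condition: each attention bid is recorded with its condition and whether it succeeded. This is an illustrative sketch only; the records and numbers below are invented for the example, not the study’s data.

```python
# Each coded attention bid: (condition, succeeded?). Invented data.
bids = [
    ("AKC", True), ("AKC", True), ("AKC", False), ("AKC", True),
    ("KC", True), ("KC", False), ("KC", False), ("KC", True),
]

def success_rate(coded_bids, condition):
    """Proportion of attention bids in a condition that gained attention."""
    outcomes = [ok for cond, ok in coded_bids if cond == condition]
    return sum(outcomes) / len(outcomes)

akc_rate = success_rate(bids, "AKC")  # 0.75 on this toy data
kc_rate = success_rate(bids, "KC")    # 0.5 on this toy data
```

Comparing the two rates (with an appropriate statistical test over the real coded data) is what supports the claim that bids were slightly more likely to succeed with the augmented set.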
The final two sections of this chapter consider, first, ways that technology can sometimes impede engagement with, and attention to, others, and second, how technology influences our general awareness of others in the group.

2.3.1 Disengagement and Technoference

Looking for the converse of factors that encourage engagement involves understanding what leads children to disengage from collaborative uses of technology. Everyday technology in particular, because it is so often designed to be ‘personal’, has a particular, sometimes irritating, tendency to disrupt joint engagement, causing ‘technoference’. Our normative
responses to technology interruption tend to develop rapidly as new technological affordances appear. For example, when people are engaged in a joint activity, such as a conversation, but also have personal devices, such as a mobile phone or smart watch, what should happen when they receive a call or text? This question becomes very public when the interruption is accompanied by sound, which everyone can hear. Because these devices are personal, the situation is different from a shared, non-personal interruption, such as a fire alarm or doorbell ringing. There is clear evidence that an interrupting personal phone call disrupts interaction, and specifically, learning: Reed et al. (2017) studied what happened when parents aimed to teach their child a word, and were interrupted by a phone call. Comparing 30 seconds of teaching, interrupted by a 30-second phone call, then another 30 seconds of teaching, with a continuous 60-second teaching session, they reached a clear conclusion: the child did not learn the word when the parent was interrupted by a call but did so if there was no interruption. The authors explain that the interruption broke the contingent responsiveness in the interaction: momentary disengagement by the parent was enough to break the shared engagement. More generally, personal digital devices can get in the way of collaborative interaction, for example in face-to-face meetings where people are looking down at phone screens, or in online meetings where the ready availability of your own computer might tempt you to check email while others are talking. This is an embodied effect: in the shared reading study mentioned in Chapter 1, small screens on individually-owned devices tended to involve children taking a ‘vulture’ posture that reduces awareness of what is going on around you, impairs others’ ability to see what you are attending to and makes it harder to catch someone’s eye to confirm shared awareness.
If two or more people try to work together on a single device, this can provide shared awareness, but often means that only one person has control over the device, given that these devices are generally not designed for multiple users. This contrasts with the shared awareness readily supported with a multi-touch tabletop (see Fig. 2.3, and Chapter 3).


Fig. 2.3 Shared awareness on a multi-touch tabletop (Source http://www.sussex.ac.uk/psychology/chatlab/)

2.4 Awareness

Awareness is a widely used concept in the HCI literature, referring to the ways in which people acting together have ongoing awareness of the actions, plans, feelings and other internal states of others, as well as of what people do to make visible their intentions and actions. It is a more general concept than the specific, focused act of seeking and gaining a moment of joint attention with another, in the way discussed in the AKC example. Small actions, movements and states are part of the micro-level flow of action that supports shared understanding. Influential work by Heath and Luff (1992) highlights the importance of ‘situational awareness’. A clear example of this is the different way that a car driver converses with a passenger in their car, compared with talking to someone on the other end of a phone: conversation is much smoother when both are in the same physical setting, with shared awareness of factors such as traffic conditions and distractions affecting the driver (Haigney & Westerman, 2001). Talking with a passenger in the car seems to distract the driver less than conversing with someone remotely, who does not have access to shared information about the environment. The different means through which we control our actions through technology make a real difference to others’ awareness of our actions and intentions. Touchscreens and augmented objects are widely available, and usually provide accurate cues about what other people are doing, and intending, in ways very similar to any other embodied interaction,
such as a game of catch or a shared picture-sorting activity. This contrasts with indirect input devices such as a pen or mouse, where the cursor (controlled from a separate mat area) replaces the embodied agent with a small moving marker on the screen. Pinelle et al. (2008) found that adults showed more awareness on a tabletop task with direct touch than with various indirect methods of control, although they preferred the greater reach they had with indirect objects: that is, the table was too large for them to reach some parts using their arms. However, their adult participants were seated: in most published tabletop studies, judging from photos, children stand round the table and can be very mobile, happy to stretch and move, even while generally keeping to their ‘side’ of the table. Figure 2.4 shows all the touches for each child in a typical group of three, standing round three sides of a multi-touch table, completing a classroom design task. Each child focuses mostly on the area closest to them, but also ranges into other areas, with plenty of overlap. In Chapter 3, I will consider in more detail what happens with control of these overlapping spaces.

Fig. 2.4 Patterns of touch by children positioned, respectively, on the left, centre and right side of a multi-touch table (Source Rick et al. [2009], https://dl.acm.org/doi/pdf/10.1145/1551788.1551807)

For awareness in particular, it’s important to consider the way that bodily movements form an important part of our understanding of what others are doing: collaborative activity is embodied in movement, and free movement helps awareness. In our studies of groups of three working together on large multi-touch surfaces, we found many ways that children act to support awareness, in addition to their explicit attempts to create shared attention. They often thought aloud, made running commentaries on their own actions, anticipated hand collisions by adjusting their position, and sometimes took more forcible action such as elbowing someone out of the way (Fleck et al., 2009). For example, in the classroom design task shown above, note how these children, Amy, Billie and Clare, act out their suggestions while explaining them in words, to reach real consensus on their approach: Billie and Clare start undoing Amy’s placement of desks and pupils so as to propose their own ideas, hence causing Amy to block Clare’s moves:

Clare: “Why don’t you put that on there so this one can go there…?” [Clare moves one of the people on Amy’s desk to a different place at the desk]
Amy: “No because it needs to go…” [Amy realises her placement has been altered, and returns the figure Clare had moved out of the way and replaces a piece Billie had moved]
Clare: “…with his friends and then this one can go with his friends.”
Amy: “Yeah, wait a sec.” [Billie is trying to move the desk away again, but Amy prevents this by holding her finger down on it]

As Amy prevents Billie from physically moving the desk icon, Billie has to explain verbally why she had wanted to move it, leading Amy to understand and concede:

Billie: “No, but that doesn’t go there because then they will be…won’t they? So…” [Billie begins to explain why she’s trying to alter Amy’s desk placement]
Amy: “But there needs to be chatties.” [Amy puts forward her own point of view]
Billie: “Yeah. Right I think we shouldn’t have all the chatty ones together because then it’s just…” [Billie is pushed to articulate clearly her rationale]
Amy: “Oh yeah! Because they’re all, they will chat!” [Amy finally sees Billie’s intention, and allows the alteration of the desk position to stand]
Billie: “Yeah then they could all chat so…”
Clare: “So don’t put them with their friends!” [a shared principle for the task is agreed by all three]

Being able to move digital objects in visible ways provides a means of thinking through the consequences of actions, by demonstrating concretely for the others in the group the advantages or problems with a suggestion. Having a shared space in which to do this supports the articulation and demonstration of different viewpoints. Large, shared surfaces with direct touch can thus provide high levels of awareness, as all participants can act together and be aware, through
sound, vision and body positioning, of what others are up to. Children will stop what they are doing and watch intently what their peers are doing, to understand their intentions: this is particularly valuable in supplementing limited verbal skills. The size constraints of tablets and phone screens make this awareness of joint working less likely (though joint viewing is possible), and these devices have not been built with the potential for the system to detect who is doing the touching: the implications of this restriction for sharing control are addressed more fully in Chapter 3. Large multi-touch tables unfortunately did not catch on in schools, though they sometimes appear in public displays and museums; they have real advantages for engendering shared awareness. More widely available technologies can still be powerful tools to support awareness. Digital tangibles are increasingly available, and their power is in their potential to show what users are thinking, thus externalising ideas. Even simple non-digital objects such as counters for learning early maths skills allow a teacher to see whether a child understands adding up, and more sophisticated blocks with embedded technology provide further ways to support collaboration. For example, Manches (2013) developed Digicubes (a screen-based app) and Numbuko (digitally-augmented physical rods), based on the traditional Cuisenaire rods used for early maths learning in schools. The rods represent different lengths with different colours, supporting children in finding different ways of combining them to make totals. In these augmented versions, the rods can be broken apart and recombined, and change colour accordingly. The way people use their bodies and movements also helps others to understand their plans and thinking. For example, people can work shoulder-to-shoulder, leaning over to share information via a small device, or can tilt a device towards someone, or gesture towards it, to share attention.
Larger surfaces can help, but bigger is not always better: Zagermann et al. (2016) compared pairs of adults, each with a personal tablet, working together with information on a horizontal shared display (a tabletop) in three different sizes. With the largest tabletop (55-inch diagonal), pairs paid attention to each other only rarely, as they became focused on the tabletop. With a medium-size display (27-inch diagonal), the balance between attention to the other person and to the tabletop was more equal. As the authors point out, different balances prioritise different things. Favouring attention to a partner can be useful to build a sense of a team and to establish trust, while focusing more on the display is
useful to support shared understanding, enabling talking while demonstrating, pointing and manipulating objects. Larger shared areas seemed to support more playful interactions and serendipitous discoveries. Even so, these adult users adapted their behaviour to the tools available, and there were no significant differences in quality of outcome between the different set-ups. For children especially, when collaborating around surfaces, we need to bear in mind the importance of process, not just outcome: working together helps not only in reaching a good solution for a task, but also in developing the behaviours that make up good collaboration. Shared spaces help make actions and plans more apparent to others, unlike, say, the click of a mouse, and provide different ways of getting involved. Augmented objects and tangibles generally go even further in giving external indicators of how others are thinking.

References

Bakeman, R., & Brownlee, J. R. (1980). The strategic use of parallel play: A sequential analysis. Child Development, 51, 873–878.
Barbu, S., Cabanes, G., & Le Maner-Idrissi, G. (2011). Boys and girls on the playground: Sex differences in social development are not stable across early childhood. PLoS One, 6(1), e16407.
Blake, P. R., & Harris, P. L. (2009). Children’s understanding of ownership transfers. Cognitive Development, 24(2), 133–145.
Coultas, J. C., & van Leeuwen, E. J. C. (2015). Conformity: Definitions, types, and evolutionary grounding. In Evolutionary perspectives on social psychology (pp. 189–202). Springer.
De Jong, M. T., & Bus, A. G. (2002). Quality of book-reading matters for emergent readers: An experiment with the same book in a regular or electronic format. Journal of Educational Psychology, 94(1), 145.
Fleck, R., Rogers, Y., Yuill, N., Marshall, P., Carr, A., Rick, J., & Bonnett, V. (2009). Actions speak loudly with words: Unpacking collaboration around the table. In Proceedings of ITS 2009—The ACM International Conference on Interactive Tabletops and Surfaces (pp. 189–196).
Haigney, D., & Westerman, S. J. (2001). Mobile (cellular) phone use and driving: A critical review of research methodology. Ergonomics, 44(2), 132–143.
Heath, C., & Luff, P. (1992). Collaboration and control. Computer Supported Cooperative Work, 1, 65–80. Kluwer.
Hinske, S., Lampe, M., Yuill, N., Price, S., & Langheinrich, M. (2009). Kingdom of the knights: Evaluation of a seamlessly augmented toy environment for playful learning. In Proceedings of the 8th International Conference on Interaction Design and Children (pp. 202–205). ACM.
Hornecker, E., Marshall, P., & Rogers, Y. (2007). From entry to access: How shareability comes about. In Proceedings of the 2007 Conference on Designing Pleasurable Products and Interfaces (pp. 328–342). ACM.
Huh, M., & Friedman, O. (2017). Young children’s understanding of the limits and benefits of group ownership. Developmental Psychology, 53(4), 686–697.
Manches, A. (2013). Emerging technologies for young children. In Handbook of design in educational technology (pp. 425–438).
Mora-Guiard, J., Crowell, C., Pares, N., & Heaton, P. (2017). Sparking social initiation behaviors in children with autism through full-body interaction. International Journal of Child-Computer Interaction, 11, 62–71.
Parten, M. B. (1932). Social participation among pre-school children. Journal of Abnormal and Social Psychology, 27(3), 243–269.
Pinelle, D., Nacenta, M., Gutwin, C., & Stach, T. (2008). The effects of co-present embodiments on awareness and collaboration in tabletop groupware. In Proceedings of Graphics Interface 2008 (pp. 1–8).
Reed, J., Hirsh-Pasek, K., & Golinkoff, R. M. (2017). Learning on hold: Cell phones sidetrack parent–child interactions. Developmental Psychology, 53(8), 1428–1436.
Rick, J., Harris, A., Marshall, P., Fleck, R., Yuill, N., & Rogers, Y. (2009). Children designing together on a multi-touch tabletop: An analysis of spatial orientation and user interactions. In Proceedings of the 8th International Conference on Interaction Design and Children (pp. 106–114). ACM.
Robinson, C. C., Anderson, G. T., Porter, C. L., Hart, C. H., & Wouden-Miller, M. (2003). Sequential transition patterns of preschoolers’ social interactions during child-initiated play: Is parallel-aware play a bidirectional bridge to other play states? Early Childhood Research Quarterly, 18(1), 3–21.
Siegal, M., & Storey, R. M. (1985). Day care and children’s conceptions of moral and social rules. Child Development, 56, 1001–1008.
Yuill, N., Hinske, S., Williams, S. E., & Leith, G. (2014). How getting noticed helps getting on: Successful attention capture doubles children’s cooperative play. Frontiers in Psychology, 5, 1–10.
Yuill, N., Rogers, Y., & Rick, J. (2013). Pass the iPad: Collaborative creating and sharing in family groups. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 941–950). ACM.
Zagermann, J., Pfeil, U., Rädle, R., Jetter, H.-C., Klokmose, C., & Reiterer, H. (2016). When tablets meet tabletops: The effect of tabletop size on around-the-table collaboration with personal tablets. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 5470–5481).

CHAPTER 3

Contingency and Control

Abstract  Control influences how collaboration around technology works, through its means (e.g. mouse, touch) and distribution (simultaneous or sequential, shared or single). Single control can produce dominance and disengagement, but shared control needs support for awareness. Awareness, such as through gesture, bodily movement and talking, affects control. Collaboration requires actions controlled by one individual to be contingent on the actions of others, so as to create a connected flow of joint action, rather than a series of unconnected moves. Contingent behaviour can be supported through constraints, for example the SCoSS paradigm.

Keywords  Control (single or shared) · Contingency · Joint action · Constraints

In Chapter 2, we looked at how children become engaged in interaction and how attentional engagement works, with particular emphasis on how technology can be designed to support closer engagement and awareness. This chapter looks at the to-and-fro of interaction at the heart of collaboration: how are the actions and speech of each individual participant coordinated with those of others, and how can technology be designed to support shared control for more collaborative working?

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
N. Yuill, Technology to Support Children’s Collaborative Interactions, https://doi.org/10.1007/978-3-030-75047-3_3


3.1 Contingency and Control: Introduction

Control means the ways in which children can create an effect in a technology environment. In hierarchical control, the leader’s orders directly regulate the actions of the subordinates, who, if obedient, don’t have much choice about how their actions are contingent: they just need to obey. Distributed control, where each person takes on a sub-task, fits the definition of cooperation, but is not collaboration. Collaboration in our sense therefore requires sharing control. That means each user needs to have some way of controlling actions, and control, like collaboration, is an ongoing process: it can involve periods of conflict, as group members with varying skills negotiate how to work together. When control is shared, group members have to find ways to coordinate their actions with each other, and this is often not as straightforward as in a more cooperative sub-task model of working. This is what I discuss as contingency: how one person’s action links to others’ actions, as happens in a connected conversation or a sequence of interleaved behaviours. Contingent action through shared control is a crucial part of the model of collaboration described in Chapter 1. The image of the cogwheels (Fig. 1.3) makes clear how very closely meshed these actions often need to be. Any group behaviour involves some contingency: as we saw in Chapter 2, parallel engagement involves contingency at a fairly minimal level: there might be some imitation, but no close connection between different people’s actions. Cooperation involves contingency at some point, because each sub-task needs to be carried out in a way that enables sub-tasks to be combined later on (if constructing a railway line starting from opposite ends, we hope that when we meet in the middle, we have used the same gauge). Collaboration involves closer contingency that can happen in many different ways.
Imitation, in typical development at least, is a powerful factor that often comes free in social interaction, in that children will readily watch and copy what they see others doing: technology that enables clear visibility of others’ actions supports this. Turn-taking is another simple contingent structure that provides an obvious external criterion helping children know when to act and when to restrain action. It is rightly valued as an important skill for working with others, but can sometimes risk children sticking rigidly to turns and losing interest while others take their turn, so that a sense of contingency is reduced. This is where awareness and engagement need to be maintained, with

3 CONTINGENCY AND CONTROL

all children having a sense of being part of others’ actions. Having clear complementary roles is another common form of contingency that can support collaboration: this is the main feature of LEGO® therapy (Legoff & Sherman, 2006), commonly used to support social participation in autistic children, where children are invited to take corresponding roles, such as architect (who has the instructions) and builder (who constructs the blocks under guidance of the architect). Of course these roles might also be conflicting, such as when one person undoes the action of another. That is still contingent, and there is ample literature on the role of conflict and disagreement in developing children’s understanding (Druyan, 2001). However, it is important that children are expected to reach consensus at some point (Tenenbaum et al., 2020). Most generally, contingency happens when one person’s actions (including speech) depend on previous actions of another person. Clearly, the means through which children can control events through technology will influence how closely contingent their actions are: for example, if there’s only one means of direct control for a device (such as a single mouse), groups need to negotiate how each person can contribute, and there is a danger of other participants becoming passive and tuning out, rather than all being actively engaged. This sometimes occurred when desktop computers first came into classrooms, where two children often shared a device designed with single-use in mind, and took separate roles on keyboard and mouse: ‘I’m the thinkist, you’re the typist’ (Sheingold et al., 1984). Conversely, if there are many means of control, there is a danger of losing contingency, potentially resulting in chaos. Multi-touch tables, featuring strongly in this chapter, provide a very effective way of examining what supports shared control, through tweaking their design and looking at the effects on patterns of collaborative behaviour. 
That will bring us to questions about equity: do participants have an equal voice, does each person have agency, how is shared control managed and how is contingency achieved? The chapter has three main sections. First, studies with large surfaces address questions about managing control, especially the contrast between single and shared control. Then, looking more broadly at other technologies, I ask whether it’s better to design for collaboration to be enabled, encouraged or enforced, and how the available means of control affect this. This is followed by a closer look at contingent actions, and a description of the SCoSS software design we developed to link contingency and control.

N. YUILL

3.2 Collaboration on Tabletops: Single Versus Shared Control

Many of the screens we use are designed for ‘personal’ use and single control, so aren’t well suited to sharing. Sometimes this is simply a matter of size: sharing a small screen restricts awareness, and makes sharing control difficult, unless the task has clearly defined roles such as in shared reading. Even with larger surfaces, though, there are big differences depending on who has control and how it is shared. ‘Single display groupware’ such as large multi-touch table surfaces became readily available, at least for research purposes, early this century. These systems have failed to become widespread in schools, or indeed, anywhere else (Brudy & Marquardt, n.d.), but the research with such devices brought lasting insights to questions about how to support collaborative interaction when, potentially, everyone has a finger in the pie. Multi-touch tables are large, horizontal, touch surfaces that allow all users to act simultaneously on digital objects displayed. Most commercially available tables tend not to have a user-identity feature, so the ability to constrain control is much reduced. The table we used in our studies on the ShareIT project (http://shareitproject.org/) was a Mitsubishi DiamondTouch, later distributed through CircleTwelve, with a diagonal size of 80 cm. This surface works using conductive mats that each child stands on, which enables us to track and log which child moves what digital object and when. The table can be set to allow only one user to act at a time (single-touch), or all users to move objects at once (multi-touch). Our aim on the child-focused part of the project was to develop and study tasks that would support children to work collaboratively, by ensuring that all children were engaged, attending to the actions of others, acting contingently and together negotiating shared understandings of how to solve the task.
This would generally involve debate, conflict and negotiation: the important thing was to see how children’s verbal and non-verbal behaviour with varying technology setups supported the process of reaching shared understanding. This process needs to involve all group members as a way of ensuring the growth of each child’s individual grasp of the task, following the Vygotskian notion of development working from inter-psychological to intra-psychological. Our studies mostly involved primary-age children, generally in groups of three, in school settings or at events such as science festivals.
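The single-touch/multi-touch setting described above can be thought of as an arbitration policy over touch events. Here is a minimal sketch of that policy; the class and method names are invented for illustration and are not the DiamondTouch API:

```python
class TouchArbiter:
    """Decides which users' touches are allowed to act on the table.

    In single-touch mode, the first user to touch holds control until
    they lift their finger; everyone else is locked out meanwhile.
    In multi-touch mode, every user's touch is accepted.
    """

    def __init__(self, multi_touch: bool):
        self.multi_touch = multi_touch
        self.holder = None  # user currently holding single-touch control

    def touch_down(self, user: str) -> bool:
        """Return True if this user's touch is allowed to act."""
        if self.multi_touch:
            return True
        if self.holder is None:
            self.holder = user          # first finger down wins
        return self.holder == user      # others are locked out

    def touch_up(self, user: str) -> None:
        if self.holder == user:
            self.holder = None          # control is released

# Single-touch: whoever keeps a finger down keeps control
table = TouchArbiter(multi_touch=False)
print(table.touch_down("Amy"))   # True: Amy acted first
print(table.touch_down("Beth"))  # False: Beth is locked out
table.touch_up("Amy")
print(table.touch_down("Beth"))  # True: control was released
```

Note how the policy itself explains the ‘surreptitious finger’ trick described later in this chapter: in single-touch mode, never lifting your finger means control is never released.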


Consider a group of three children working together to design how their classroom should be organised, as in the OurSpace task we designed (Fig. 3.1). Doing this task on a large, shared surface means that each child can quite easily detect the others’ actions and intentions. These indicators might involve hovering the hand over an object while reflecting on a possible move, rearranging the position of the desks or moving particular pupils to different desks. We used a design of the children’s own classroom with the same number of desks and pupils. We did not have named pupils from the class (to avoid the possibility of personal comments about individuals), but each of the pupil icons had particular features that might affect where they could sit: different colours, representing that they were in a particular friendship group, a speech bubble indicating a particularly talkative person, and some people wearing glasses. We did not dictate how this information should be used, but provided it to enable children to give reasons for their choices: for example, “He’s really chatty so I’m putting him at the front for the teacher to keep an eye on”, “She’s wearing glasses so she can sit at the front near the whiteboard”, “I’m

Fig. 3.1 OurSpace tabletop design (Note Pupil desks are white, teacher desk is central here, pupil friendship groups are shown by colour. The three children stand on mats by each short side and one long side of the multi-touch table. Source Rick et al. [2009], https://dl.acm.org/doi/pdf/10.1145/1551788.1551807)


putting them together because they are friends” (or indeed, separating them for the same reason). We filmed 15 triads of children, either Year 3 (aged 7–8) or Year 4 (aged 8–9), spending two half-hour sessions planning their classroom spaces. In one session, the table worked as a single-touch device: only one person could move an object at any one moment. In the other session, we set the table to be multi-touch: this means that all three children could move objects simultaneously. We measured each child’s level of participation, both verbal (by transcribing all speech) and physical (using the program’s system logs of who moved what), and we coded what the children talked about. From what we already know about collaboration, we can expect that levels of awareness and control will differ in the two conditions. In single-touch, it’s easy for children to be aware of what is happening, because only one move can be made at a time. However, control has to be negotiated: the first person to put their hand on the table has control, and it is up to the group to work out how control is negotiated equitably, for example by establishing some system of turn-taking. In multi-touch, there is no problem of control, in that everyone can work simultaneously. However, this can make awareness difficult: moving objects while trying to keep an eye on what your partners are up to can be tricky, and it’s easy for one person to undo what another has done without being noticed. Watching the ‘clash of arms’ around a multi-touch table provides a highly visible indicator of your partners’ intentions (Marshall et al., 2009). Children can be quite robust in their efforts to gain control of digital objects on a touch surface when they feel this is needed: we saw children move others’ hands out of the way, shield areas of the table to prevent access and use body positioning to control space.
For a non-digital cardboard version of the same task, children found it was sufficient to move the objects out of reach or close a hand round them. In our studies, we found that the apps we designed for the multi-touch table generally supported even participation as they were intended to: we could measure this by computing a Gini coefficient, a figure ranging from 0 to 1, with lower scores showing more even participation. None of our groups’ scores in OurSpace, for verbal or physical equity, was over .21, showing that generally speaking, the groups worked with high levels of equity. Similar levels of equity were found with triads of adults using a tabletop for a garden design task (Rogers et al., 2009), and in that study, equity was significantly lower when working with a laptop and single


mouse, producing Gini coefficients as high as .48: in three of their five groups, only a single person carried out all the actions. We know from a study by Wallace et al. (2013) that more equity tends to predict better performance on collaborative tabletop tasks, measured in their study by the number of key facts each group mentioned in discussion and the number of new insights they co-created. In a further condition of the garden design study by Rogers et al., with groups of four adults using one tablet each, equity of participation did not predict success: these groups apparently tried to fix the difficulty of lacking shared awareness by passing one tablet round the group, or by trying to squeeze all the task information onto a single tablet to share—a strategy that was not very successful. Equity was low in this condition, around .40. This suggests that equity is tied up with shared control and awareness: just enforcing equity in some way without considering each of these other factors in collaboration seems unlikely to succeed. In the OurSpace study, we deliberately challenged children by using groups of three: in our experience, this made it harder to coordinate than in a pair, and possible for one child to be left out. Our analysis of children’s conversations shows how children managed this challenge. This is the one tabletop study we did where we investigated differences in collaborative experience (as indexed by age), sensing that there would be real differences between Year 3 and Year 4 (7–8 and 8–9 years of age respectively). We found an intriguing interaction of factors. Just for the single-touch condition, the younger children tended to show less verbal equity than older children: this did not happen in the multi-touch condition (Harris et al., 2009). The single-touch condition raised a real challenge for these young children of managing control: they spent a large amount of time talking (and sometimes arguing) about how to manage turn-taking. 
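The Gini coefficient used as the equity measure in these studies can be computed directly from per-child participation tallies (counts of utterances or object moves). Here is a minimal sketch using the standard mean-absolute-difference form; the data are invented for illustration:

```python
def gini(counts):
    """Gini coefficient of a list of participation counts.

    0 = perfectly even participation; values nearer 1 = one
    person dominating. Computed as the mean absolute difference
    over all pairs, normalised by twice the mean.
    """
    n = len(counts)
    total = sum(counts)
    if n < 2 or total == 0:
        return 0.0
    # Sum of absolute differences over all ordered pairs
    diff_sum = sum(abs(x - y) for x in counts for y in counts)
    return diff_sum / (2 * n * total)

# A triad participating almost evenly: low Gini, like the OurSpace groups
print(round(gini([30, 33, 31]), 2))
# One person doing nearly everything: high Gini, like the single-mouse groups
print(round(gini([90, 5, 5]), 2))
```

The two example tallies give roughly .02 and .57 respectively, bracketing the .21 and .48 figures reported above.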
In contrast, the multi-touch condition supported children to spend more time discussing the task itself. Here’s a flavour of one of the more heated Year 3 discussions in single-touch, between Amy, Beth and Carol:

Amy: Yeah like that Beth.
Carol: Amy get your finger off the board!
Beth: It was there and you put your finger like there.
Carol: Beth get off! Get off! Beth you already had so many turns.
Beth: Last time you did it.
Carol: I think you should let Amy have a go.
Beth: Last time you did most of it.
Carol: No, not lots of it.
Beth: But you did though!


Older age groups tended to be better able to manage turn-taking in single-touch, though they did have to spend time discussing it, as with Dan, Ed and Fahad in Year 4:

Ed: Shall I go first or you Dan?
Dan: Erm…
Ed: Have a vote. Oh Fahad, it’s not your go!
Dan: Ok let’s let Fahad go first, like that. We can take it in turns to say ideas then we can do it one at a time.

The challenge of single-touch did mean that groups tended to fall into cooperative working (dividing into sub-tasks), as in the Year 4 group just quoted:

Ed: Ok, I’ll do the people. Somebody does the tables, I’ll do the people and somebody turns.
Dan: I’m doing the tables.
Ed: Ok do you wanna turn, no, no he’s doing the tables, Fahad is doing the tables (moves Dan’s hand away).
Dan: Ok I’ll do the people.

Managing control by enabling only one-at-a-time is a common strategy, even with the simplest of technology such as a ‘talking stick’: the person holding the stick is the only one entitled to speak. This can be frustrating if someone hangs onto the power too long: occasionally we had children who discovered that surreptitiously keeping a finger on the single-touch surface could extend their turn while their baffled peers tried to work out why they couldn’t move any objects. Single control can lead to turn-taking and cooperative working, and sometimes just tuning out while waiting for a turn. That can be reduced if those waiting their turn can be supported to watch carefully and to be interested in what their partner is doing, or in multi-touch, if action is slowed down to support awareness. One experimental way we tried to support this was to add a tracking feature, so that children’s movement of any object left a coloured trail for a few moments afterwards. The Year 4 group quoted above showed a markedly more interwoven collaborative working pattern in their multi-touch condition:

Fahad: OK now shall we put the people on?
Ed: Yeah the chatty people at the back.


Fahad: OK.
Dan: But there’s one with glasses and that’s chatty!
Ed: Where?
Dan: There and there.
Fahad: Then just put them at the front.
Ed: Well then put them still near the front because it’s hard to see.
Dan: And they’re also friends.
Ed: Look, no, oh yeah chatty people go on the back with their…no wouldn’t they need to go on the front so the teacher can see them?

This exchange is a good illustration of what Roschelle and Teasley mean, in their definition of collaboration, by the continued negotiation of shared meaning. So shared control generally worked better for collaboration, but it’s no panacea: there are many different ways of sharing control, and children of different ages, experiences and capacities will need different support. Planning an effective collaborative set-up involves thinking about the possibilities and constraints for control (source, number, visibility of effects) afforded by a particular technology configuration, in relation to the capacities of the intended users (a group of friends? a mix of ages?). Clues as to whether this is working come from observing the results in terms of levels of participation, degrees of equity, sharing of attention and the extent to which groups negotiate shared meaning. The next section looks at different ways of supporting shared control.

3.3 Managing Control in Collaborative Tasks: Enabling, Encouraging or Enforcing

A useful way of thinking about how collaboration can be managed through shared control is to compare technology that enables, encourages or enforces it (Benford et al., 2000). An example of enforcing collaboration is the ‘talking stick’ model, where there is just one means of control, as might happen in single-touch or with a single mouse on a screen. It seems to matter how such rules are enforced, though. Piper et al. (2006) constructed a game played on a multi-touch tabletop, where turn-taking was controlled either by the teacher telling students when it was their turn, or by the game mechanism itself disabling touch for all players except the one whose turn it was. Piper et al. suggest that the technology-mediated turn-taking was better


received by the students, presumably because it was a more impersonal means of control with less scope for debate. Enabling collaboration can be accomplished by providing multiple means of acting: having multiple mice, multi-touch (as in most of the touchscreens people use in daily life) or multiple tangible objects, as in the Augmented Knights’ Castle described in Chapter 1. Benford et al. used such a model to design KidPad, a shared drawing app that could involve several mice. Children could take a mouse and use any of the shared drawing tools that were available, to draw in the same shared space. This enabled children to work together, although they were also free to work alone, just drawing something in an area of the screen close to their position, in a solitary or parallel way. However, just enabling collaboration is no guarantee that it will take place. Technology to encourage collaboration might seem to be a better answer, and there are different ways this might be achieved. Benford et al. provided new features for KidPad that could only be accessed when children worked together: for example, two children drawing with different colours close together in space would yield a new colour between them. It’s a powerful idea, to enable the creation of something when acting together that can’t be achieved alone, and it mirrors what can happen in the most constructive forms of peer learning. Many tasks encourage collaboration by being too difficult to solve alone, and by providing separate control in ways where one person’s actions necessarily support or constrain another’s. We used such an approach in DigiTile, an app for multi-touch tables that gave pairs of children a palette of different-coloured tiles, one on each side of the table, and a central shared grid in which they had to place tiles to make a shape with specified fractions of colour (Rick et al., 2009). Figure 3.2 shows two children working on a half-black, half-red grid. 
Each child can move tiles as they wish from their palette into the shared area. Importantly, the effects of each child’s actions towards the goal are constantly updated in a display showing how much of the grid is filled with each colour. That means it’s clear if there is too much, or not enough, of a particular colour, and it’s apparent very quickly that the pair need to coordinate their actions, much of this being done by having to talk through what is needed. The display was particularly useful in allowing children to see the results expressed as fractions, percentages or in a visual pie chart, as a resource for understanding the relation between different representations of amount. We intended that manipulating the pieces to create


Fig. 3.2 Two children completing the DigiTile fraction challenge on a multi-touch table (Source Rick et al. [2009], http://oro.open.ac.uk/19511/1/dtcscl2009.pdf)

set proportions would help children understand how different fractions were equivalent (e.g. four-eighths equals a half), the different operations of numerators and denominators and the independence of shape and amount. We compared seven pairs of 9–11-year-old children who had a half-hour session with DigiTile against five children taking the ordinary classroom lessons, using a pre- and post-test of fraction understanding. The DigiTile group showed significantly higher learning gains than the non-participating children. We found similar learning gains in a very small study with children in a special school, with no difference in gains between children using a tabletop or a desktop computer version, although the children using the tabletop tended to stay more on task and seemed more collaborative (Aytac & Yuill, 2009). Although our main study was also small scale, by looking qualitatively at the children’s conversations we can get a flavour of the sort of collaborative mechanisms that seem to support learning. It appeared that pairs who improved their understanding more (greater gains on the post-test) were likely to spend less time on designating roles and more time offering suggestions and support, as in this example of a pair trying to create three-eighths:


Yusuf: Shall I do ‘em like that?
Ailsa: Yeah. How many we got? Nine thirty-twos. And if I put that there. I’ve got ten thirty-twos…. Shall we take it out because it will be more?
Yusuf: Hang on, three!
Ailsa: Oooh!
Yusuf: So do another half! Yeey!
Ailsa: Three-eights!

We suggested that the requirement to work together—engaging in dialogue about the task, using the fraction display and trying out the effects of different actions—all helped children’s understanding. The crucial role of gesturing to objects on the table is very apparent in this excited explanation, which was accompanied by much waving of arms, indicating shapes on the display, close shared attention and accompanying actions by both partners:

Ethan: Coz I know what to do. See that there, that there and that there. And then we got that. That’s two-fifths. No! Ah! I know how to do it. Half, half, half, half, half. Then half on the other side. That’s one-tenth! We’ve done it! That is one-tenth! We’ve done the red one! Wait, then blue is just that one then that one. Then do it down there. I think we’ve figured it out. Yeah. And then that goes down there and then that goes down there. Two-tenths! Yes!
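The constantly-updated display at the heart of DigiTile amounts to recomputing the colour proportions of the shared grid after every tile move and comparing them with the target. Here is a minimal sketch of that idea; the function names and data are invented for illustration and this is not the DigiTile source:

```python
from collections import Counter
from fractions import Fraction

def colour_fractions(grid):
    """Proportion of the grid occupied by each colour, as exact fractions.

    `grid` is a flat list of cell colours; None marks an empty cell.
    """
    counts = Counter(c for c in grid if c is not None)
    total = len(grid)
    return {c: Fraction(n, total) for c, n in counts.items()}

def feedback(grid, targets):
    """'too much' / 'not enough' / 'done' per target colour,
    the kind of signal shown to the pair after each move."""
    actual = colour_fractions(grid)
    out = {}
    for colour, target in targets.items():
        got = actual.get(colour, Fraction(0))
        out[colour] = ("done" if got == target
                       else "too much" if got > target
                       else "not enough")
    return out

# A 2x4 grid where the pair is aiming for three-eighths red, half black
grid = ["red", "red", "black", "black", "black", "black", None, None]
targets = {"red": Fraction(3, 8), "black": Fraction(1, 2)}
print(colour_fractions(grid))   # red is 2/8 = 1/4 of the grid so far
print(feedback(grid, targets))  # red needs more; black is already done
```

Using exact `Fraction` arithmetic mirrors the pedagogical point: 2/8 reduces to 1/4, making equivalences like four-eighths equals a half visible in the display itself.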

In summary of this work, single control can start with orderly action: if children can manage it, they establish rules for turn-taking and generate reasonably equitable interactions, but fall into conflict when they can’t. However, turn-taking can lead to a more parallel form of working. Shared control therefore offers the opportunity for collaboration, as everyone can be engaged at once, but this comes with other requirements: each person needs to be aware of what other people are doing, and actions need to be contingent on each other, otherwise groups can use a ‘divide and rule’ strategy that produces cooperative working, reducing the possibilities of negotiating and developing new shared understandings. This brings us to the topic of contingency, and how it needs to work alongside control.


3.4 Contingency Underlies Control: The Example of SCoSS

Two children on a ‘hunt the Snark’ exercise enter a ‘flying space’, armed with a large key that opens a box. The box contains two ‘magic flying jackets’, coats with acceleration sensors stitched into the lining of the arms, linked to a wireless network and a large screen displaying glimpses of the mysterious creature. The animation changes contingent on the children’s movements. Little happens if the children are still, but if one child waves and banks their arms in one direction, the Snark ‘wakes up’, showing colour and movement. If children act together, and both jackets swerve the same way, the Snark comes close and laughs, while children flapping together in synchrony has the Snark soaring, swinging and gliding (Rogers et al., 2002). For shared control to work well, there need to be clear pathways for one child’s action to be contingent on the actions of others, and for that contingency to be clearly demonstrated, as in the way the Snark moves depending on how the children move. It’s clear from the work with tabletops that managing control is crucial for technology to support collaboration. Limiting control to one person risks them dominating while others tune out, and can lead to extended debate on how to manage turn-taking, sometimes ending in splitting work into sub-tasks for children to work on in parallel. That is fine if the aim is cooperation, but not if it’s collaboration we seek—and the next chapter on shared understanding explains more about the benefits collaboration brings over cooperation. Fortunately, seeing and acting contingently seems to be firmly wired into us. Schilbach and colleagues (2013) write articulately about a ‘second-person neuroscience’: the idea that ‘minds are made for sharing’.
When we interact with another person, there are not two independent processing units cogitating by themselves: instead, each agent is involved in the actions of the other person, using capacities similar to those we draw on when monitoring how our own actions are progressing. For example, when asked to lift an index finger, people act more promptly when watching someone else move the same finger, and more slowly when seeing the other move the middle finger: the other person’s mismatching action interferes with one’s own performance, much as incompatible responses interfere when a person acts alone. It seems that we enter


into others’ actions imaginatively: some have described this as ‘shared task representations’, in that we represent not just our own part in the task, but our partner’s, as well. These effects are very easily created. Similar effects occur just when we are told that we are performing a task with a partner, or when the partner is in close proximity (Knoblich et al., 2011). These effects may be even more prevalent in younger children, as a fascinating literature on the ‘I did it’ bias suggests (Foley & Ratner, 1996; Sommerville & Hammond, 2007). During collaborative tasks, children mistakenly recall that they performed an action that was actually completed by their partner, possibly because their close collaboration has them enter so fully into what their partner is doing. We need only the tiniest indication of action dependency to perceive contingency: this is shown in the fascinating perceptual crossing task studied by Froese and Di Paolo (2010). You sit alone in a booth and are asked to move a point along a line. The experimenter asks you to note when you think there is another person there (represented by a small bump to the hand holding the controller). All that’s needed for you to experience that as an agent is for there to be the most minimal contingency in the timings of each actor’s movements. Even when people just observe contingent movements of three animated geometrical shapes filmed moving round a rectangle, they tend to attribute whole narratives of jealousy, betrayal and reconciliation, based on one object moving in synch just behind another, followed shortly after by a third object moving behind the other two (Oatley & Yuill, 1985), showing just how minimal a contingency needs to be for us to imbue action with meaning. 
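One simple way to quantify such minimal contingency is to ask how often one person's actions follow closely on another's. Here is a hedged sketch of such an index; it is not the measure used in the studies cited, and the response window and timings are invented for illustration:

```python
def contingency_score(actor_times, partner_times, window=1.0):
    """Fraction of the partner's actions that occur within `window`
    seconds AFTER an action by the actor: a crude index of how
    contingent the partner's behaviour is on the actor's.

    Both arguments are lists of event timestamps in seconds.
    """
    if not partner_times:
        return 0.0
    hits = sum(
        any(0 < p - a <= window for a in actor_times)
        for p in partner_times
    )
    return hits / len(partner_times)

# A partner responding promptly to most of the actor's moves
print(contingency_score([1.0, 3.0, 5.0], [1.4, 3.2, 8.0]))
# Same number of actions, but with unrelated timings
print(contingency_score([1.0, 3.0, 5.0], [0.1, 2.9, 9.0]))
```

Even a coarse index like this captures the asymmetry of contingency: swapping actor and partner can give quite different scores, which is exactly the difference between leading and following in an interaction.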
Quite independently of Froese’s work, but in a similar spirit, we developed a software architecture (Separate Control of Shared Space, or SCoSS) designed to support users towards contingent action through supporting control and awareness, and based on a ‘task-sharing framework’ (Pearce et al., 2005). We have seen the difficulties caused in sharing control by allowing everyone to act at the same time, for example with multiple mice (Scott et al., 2000). Sometimes these difficulties are addressed by giving children their own private space (e.g. on a table) within a commonly-shared space, as used by Constantino-Gonzalez et al. (2003). This can easily support cooperation, but not collaboration: children’s work in their private space tends not to be easily visible to others and can’t be influenced by them either. We took a different tack, by giving each child their own space, with individual control over that space, but linking these spaces contingently,


in three ways (Fig. 3.3, left panel). First, each of the two grids shows the whole task (sorting successively-appearing pictures from the small grey image box into two categories, to be placed in the left (yellow) or right (purple) panels), so each player has their own identical but independent copy of the task. Second, having the two spaces side by side clearly illustrates where the players agree or disagree in their choices, and this can be further highlighted using coloured outlines marking agreement (green) or disagreement (red). Third, although control is separate, with each player having control over their own space and none over the other’s space, actions can be constrained at specific points. Thus, children may be able to carry on with differing choices up to a certain point, but then their ability to move further is blocked by the software until the two have reached agreement, at which point they can each click on a ‘we agree’ icon to release the next item for sorting. Note that a single person clicking ‘we agree’ won’t move things on. This means that their actions, although controlled separately and individually, are contingent on each other, and this contingency is represented explicitly. Having the visual indicators of where they agree and disagree acts as a resource for discussion. This ties actions together in a way that the independent controllers on the KidPad

Fig. 3.3 Separate Control of Shared Space (SCoSS) compared to single representation (Source Holt and Yuill [2014], https://link.springer.com/content/pdf/10.1007/s10803-013-1868-x.pdf)


app didn’t: those gave independent control, but did not push for contingency of action or protect from domination. We conceived of this not as ‘enforced’ collaboration, but as providing constraints on action that support working together. The right-hand panel of Fig. 3.3 shows the task with two mice and a single-task representation, where either mouse controls any object. We implemented and tested several versions of SCoSS designs for pairs of children, with dual mice on desktop computers (Holt & Yuill, 2014; Kerawalla et al., 2008; Yuill et al., 2009), shared spaces on a multi-touch tabletop (Holt & Yuill, in preparation) and most recently, using a web app, Chatlab Connect, with two tablets (Holt & Yuill, 2017). All these studies showed advantages for collaboration, in a range of tasks and populations. One study is introduced briefly here, and I will have more to say about how SCoSS supports shared understanding in Chapter 4, and how it can support autistic children with learning disabilities, in Chapter 6. Our work illuminated how SCoSS supported shared understanding by using fine analysis of video. In one study (Holt & Yuill, 2014), typically-developing pre-schoolers (2–4-year-olds) worked in pairs on the picture-sorting task, with either a peer or an adult, and in either a SCoSS condition, with two mice and two representations of the task on a desktop computer, as on the left of Fig. 3.3, or a non-SCoSS condition (right side of Fig. 3.3), where each of the two mice controlled anything on the screen, and there was a single-task representation and single ‘we agree’ button. We filmed children’s interactions and compared behaviour in each condition. In particular, we coded what we termed ‘other-awareness’, which relates directly to the ‘awareness’ and ‘contingency’ features of the Co-EnACT collaboration framework introduced in Chapter 1. Other-awareness was of two types: attentional other-awareness is attending to, watching and waiting for the partner.
Active other-awareness is defined by being demonstrably contingent: for example, one child might place a picture on her grid and wait for the other child to do the same before clicking to get the next picture, or might move her picture to match the partner’s placement. This contingency can be clearly identified and agreed on by different people rating the video, by noting child behaviours such as waiting, glancing at the other, nodding, gesturing or using speech. Other-awareness, attentional and active, was significantly higher in the SCoSS


condition than in the more typical shared-device, single-task-representation condition, with almost four times as many such behaviours with SCoSS. This example provides a clear illustration of how tweaks to technology can support higher levels of collaboration, in this case by increasing the likelihood of contingent action. It’s not necessary to have a specific SCoSS app to provide this support for collaboration, just to grasp the principles involved when designing shared activities. What is important here is to grasp how the constraints imposed by the design (e.g. needing to agree before moving on, having one person’s actions directly affect the other’s possibility for acting) support collaboration through separate control combined with a push to make each actor’s actions contingent on those of the other. A similar example of contingency is apparent in the Augmented Knights’ Castle described in Chapter 1: there, the successful contingency of getting another player’s attention played a role in increasing the amount of cooperative play and, through co-creation of narratives, shared understanding, which is the subject of the next chapter.
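The constraint at the heart of these designs can be sketched in a few lines of code. This is a minimal illustrative model, not the actual SCoSS software: the class name, the picture-sorting details and the API are my own assumptions. It captures the two principles named above: each partner edits only their own representation of the task (separate control), and the pair can move on only when the two representations match and both have pressed their own ‘we agree’ button (contingency).

```python
# Minimal sketch of the SCoSS constraint (hypothetical names and API,
# not the original software).

class ScossTask:
    def __init__(self, pictures):
        self.pictures = list(pictures)     # pictures still to be sorted
        self.grids = {"A": {}, "B": {}}    # each child's own task representation
        self.agreed = {"A": False, "B": False}

    def place(self, child, picture, category):
        """Separate control: a child can only act on their own grid."""
        self.grids[child][picture] = category
        self.agreed = {"A": False, "B": False}   # any change resets agreement

    def press_agree(self, child):
        self.agreed[child] = True

    def can_advance(self):
        """Contingency: matching solutions AND mutual agreement required."""
        return self.grids["A"] == self.grids["B"] and all(self.agreed.values())

task = ScossTask(["dog", "car"])
task.place("A", "dog", "animals")
task.place("B", "dog", "vehicles")       # the partners disagree
task.press_agree("A"); task.press_agree("B")
print(task.can_advance())                # False: representations differ
task.place("B", "dog", "animals")        # B matches A's placement
task.press_agree("A"); task.press_agree("B")
print(task.can_advance())                # True: both match and both agree
```

The design choice worth noticing is that any new action resets both agreement flags, so neither partner can ‘bank’ agreement and dominate: every change makes the next step contingent on the other confirming again.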

3.5 Control: Voice Assistance

We have focused above on control by touch on surfaces, the most common sort of technology available to people. However, newer forms of technology involve other means of control too. Voice activation, as in smart speakers such as the Echo with Alexa, is a particularly salient example of shared control, where having multiple users can create confusion and struggles over control. Smart speakers have tended to be used and researched as individual-use devices for tasks such as reminders, alarms and ‘smart-home’ style control, as well as curation of personal playlists. However, they are also used in shared ways in family homes, given how easy it is to command them by voice. We studied interactions in five families who were given Alexa on an Echo for three weeks (Beirl et al., 2019). Collective use was challenging: younger children struggled both with being understood by Alexa and with remembering to structure their utterances correctly (with the ‘wake’ word spoken first). Speech recognition, comprehension and speaker identification may well improve, but voice assistants can be addressed by anyone in the family, so all have control. We found that Alexa’s presence generated much shared laughter and teasing, and competition for Alexa’s attention, with a high need for
guidance, because Alexa has no knowledge of the ongoing interaction. Families sometimes had to whisper ‘backstage’ to decide collectively how to manage Alexa, without each mention of the ‘wake’ word triggering Alexa to join in. At other times, adults or older siblings had to provide structured support for younger children, who found it difficult to understand Alexa’s limitations (for example, the inability to distinguish different speakers or to cope well with multiple speakers and competing demands). Interactions could be quite chaotic, with some teasing and rivalry over control and one person trying to stop another, for example by repeatedly playing a song the other found annoying. Because of the difficulty of managing control, families tended to ‘manage’ Alexa in two ways: sometimes a parent would scaffold a child’s interaction by whispering explanations and guidance in the background, and sometimes families would sit together in front of Alexa and use turn-taking to manage a group interaction, often with conflict or missteps. Voice interaction does enable simpler, more accessible control than text input, and it seems that the greater ease of vocal over graphical input might foster better enquiry and discussion. Reicherts and Rogers (2020) compared pairs of adults discussing health data with a ‘conversational user interface’ supporting their discussion: pairs interacted with the digital agent either by typing text into a laptop or by speaking, as if to a voice assistant. The results were striking: those with the voice assistant explored the dataset more thoroughly, gave more commands, asked more questions and had more turn-taking in their conversations than pairs with the graphical interface. As one participant noted (Reicherts, personal communication), “One big advantage is that we are both in control, whereas in a typical laptop scenario it would either be my computer or his computer”.
Another participant suggested that intelligent voice assistants might be used to support classroom discussions. The interactions we saw in families tended to be very playful, with nothing too important at stake, so the many difficulties, failings and misunderstandings were experienced as funny rather than frustrating. The same tolerance may not be extended when groups are trying to complete a task. On the other hand, we tried getting children to use voice assistants as a non-judgemental audience for practising their joke-telling skills, providing a safe space for private rehearsal. It will be interesting to see how voice control might be used in education, through an approach that stimulates a natural flow of conversation between children rather than aiming to put high levels of intelligence into the voice device itself.


References

Aytac, A., & Yuill, N. (2009, July). ‘Your half is bigger than mine’: Motivating children to understand fractions. Frontiers in Artificial Intelligence and Applications, 200(1), 771–772.
Beirl, D., Yuill, N., & Rogers, Y. (2019). Using voice assistant skills in family life. In Proceedings of the 13th International Conference on Computer Supported Collaborative Learning (pp. 96–103). International Society of the Learning Sciences.
Benford, S., Bederson, B. B., Åkesson, K. P., Bayon, V., Druin, A., Hansson, P., Hourcade, J.-P., Ingram, R., Neale, H., O’Malley, C., Simsarian, K., Stanton, D., Sundblad, Y., & Taxén, G. (2000). Designing storytelling technologies to encourage collaboration between young children. Conference on Human Factors in Computing Systems—Proceedings, 28(99), 556–563.
Brudy, F., & Marquardt, N. (n.d.). The tabletop is dead? Long live the table’s top! https://fbrudy.net/content/01-projects/15-tabletop-dead-alive/brudytabletop-iss2017.pdf
Constantino-Gonzalez, M., Suthers, D. D., & de los Santos, J. G. E. (2003). Coaching web-based collaborative learning based on problem solution differences and participation. International Journal of Artificial Intelligence in Education, 13(2–4), 263–299.
Druyan, S. (2001). A comparison of four types of cognitive conflict and their effect on cognitive development. International Journal of Behavioral Development, 25(3), 226–236.
Foley, M. A., & Ratner, H. H. (1996). Biases in children’s memory for collaborative exchanges. In D. Herrmann, M. K. Johnson, C. McEvoy, C. Hertzog, & P. Hertel (Eds.), Basic and applied memory research: Practical applications (Vol. 2, pp. 257–267).
Froese, T., & Di Paolo, E. A. (2010). Modelling social interaction as perceptual crossing: An investigation into the dynamics of the interaction process. Connection Science, 22(1), 43–68.
Harris, A., Rick, J., Bonnett, V., Yuill, N., Fleck, R., Marshall, P., & Rogers, Y. (2009). Around the table: Are multiple-touch surfaces better than single-touch for children’s collaborative interactions? In Proceedings of the 9th International Conference on Computer-Supported Collaborative Learning (Vol. 1, pp. 335–344). International Society of the Learning Sciences.
Holt, S., & Yuill, N. (2014). Facilitating other-awareness in low-functioning children with autism and typically-developing preschoolers using dual-control technology. Journal of Autism and Developmental Disorders, 44(1), 1–13.
Holt, S., & Yuill, N. (2017). Tablets for two: How dual tablets can facilitate other-awareness and communication in learning disabled children with autism. International Journal of Child-Computer Interaction, 11, 72–82.


Kerawalla, L., Pearce, D., Yuill, N., Luckin, R., & Harris, A. (2008). ‘I’m keeping those there, are you?’ The role of a new user interface paradigm – Separate Control of Shared Space (SCoSS) – in the collaborative decision-making process. Computers & Education, 50(1), 193–206.
Knoblich, G., Butterfill, S., & Sebanz, N. (2011). Psychological research on joint action: Theory and data. Psychology of Learning and Motivation, 54, 59–101.
Legoff, D. B., & Sherman, M. (2006). Long-term outcome of social skills intervention based on interactive LEGO® play. Autism, 10(4), 317–329.
Marshall, P., Fleck, R., Harris, A., Rick, J., Hornecker, E., Rogers, Y., Yuill, N., & Dalton, N. S. (2009). Fighting for control: Children’s embodied interactions when using physical and digital representations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2149–2152). ACM.
Oatley, K., & Yuill, N. (1985). Perception of personal and interpersonal action in a cartoon film. British Journal of Social Psychology, 24(2), 115–124.
Pearce, D., Kerawalla, L., Luckin, R., Yuill, N., & Harris, A. (2005). The task sharing framework for collaboration and meta-collaboration. In Proceedings of the 2005 Conference on Artificial Intelligence in Education: Supporting Learning Through Intelligent and Socially Informed Technology (pp. 914–916). IOS Press.
Piper, A. M., O’Brien, E., Morris, M. R., & Winograd, T. (2006). SIDES: A cooperative tabletop computer game for social skills development. In Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work (pp. 1–10).
Reicherts, L., & Rogers, Y. (2020). Do make me think! How CUIs can support cognitive processes. In Proceedings of the 2nd Conference on Conversational User Interfaces (pp. 1–4). ACM.
Rick, J., Rogers, Y., Haig, C., & Yuill, N. (2009). Learning by doing with shareable interfaces. Children, Youth and Environments, 19(1), 320–341.
Rogers, Y., Lim, Y., Hazlewood, W. R., & Marshall, P. (2009). Equal opportunities: Do shareable interfaces promote more group participation than single user displays? Human–Computer Interaction, 24(1–2), 79–116.
Rogers, Y., Scaife, M., Harris, E., Phelps, T., Price, S., Smith, H., Muller, H., Randell, C., Moss, A., Taylor, I., Stanton, D., O’Malley, C., Corke, G., & Gabrielli, S. (2002). Things aren’t what they seem to be: Innovation through technology inspiration. In Proceedings of the 4th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques (pp. 373–378).
Schilbach, L., Timmermans, B., Reddy, V., Costall, A., Bente, G., Schlicht, T., & Vogeley, K. (2013). Toward a second-person neuroscience. Behavioral and Brain Sciences, 36(4), 393–414.


Scott, S. D., Shoemaker, G. B. D., & Inkpen, K. (2000). Towards seamless support of natural collaborative interactions. In Graphics Interface (pp. 103–110).
Sheingold, K., Hawkins, J., & Char, C. (1984). ‘I’m the thinkist, you’re the typist’: The interaction of technology and the social life of classrooms. Journal of Social Issues, 40(3), 49–61.
Sommerville, J. A., & Hammond, A. J. (2007). Treating another’s actions as one’s own: Children’s memory of and learning from joint activity. Developmental Psychology, 43(4), 1003.
Tenenbaum, H. R., Winstone, N. E., Leman, P. J., & Avery, R. E. (2020). How effective is peer interaction in facilitating learning? A meta-analysis. Journal of Educational Psychology, 112(7), 1303–1319.
Wallace, J. R., Scott, S. D., & MacGregor, C. G. (2013). Collaborative sensemaking on a digital tabletop and personal tablets: Prioritization, comparisons, and tableaux. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 3345–3354). ACM.
Yuill, N., Pearce, D., Kerawalla, L., Harris, A., & Luckin, R. (2009). How technology for comprehension training can support conversation towards the joint construction of meaning. Journal of Research in Reading, 32(1), 109–125.

CHAPTER 4

Shared Understanding

Abstract Attention, contingency and control work together to support shared understanding and the co-construction of ideas. Shared meaning is apparent from interactions with caregivers in infancy, and cooperation in peer play becomes especially frequent around 5–6 years, depending on the structure of the environment. An augmented toy doubled the incidence of cooperative play, supporting a transition between attention bids and collaboration. More tightly-constrained designs such as SCoSS provide collaboration support through separate control in a shared task space, helping awareness and contingency, providing a shared resource for discussion, and constraining children to reach agreement before carrying on. Collaboration can be achieved in diverse ways, considering individual and cultural differences and loosely- or tightly-controlled joint activity.

Keywords: Individual differences · Shared understanding · Triadic awareness · Augmented objects · Shared surfaces

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. N. Yuill, Technology to Support Children’s Collaborative Interactions, https://doi.org/10.1007/978-3-030-75047-3_4

So far, we’ve considered factors in technology that support attention and engagement, and control and contingency of actions in a group. These processes, according to the Co-EnACT framework, build towards creating shared understanding, the topic of this chapter: to go back to Roschelle and Teasley’s (1995) definition of collaboration:


Coordinated, synchronous activity that is the result of a continued attempt to construct and maintain a shared conception of a problem.

The previous chapters addressed technology both for relatively closed-ended tasks such as classroom organisation and shared reading, and for more open-ended creative activity such as free play and story generation. I argue that broadly similar fundamental processes of co-creating meaning apply to play as to problem-solving (while acknowledging the differences between these activities). Göncü (1993, p. 101) defines cooperative play in terms similar to our collaboration definition, as resting on shared meaning that is “constantly changing as a result of continuous knowledge exchange between children”. In this chapter, I look at how the factors we’ve already studied stitch together to support this negotiation of shared understanding. Underlying this chapter, and much of the work cited in it, are the Vygotskian ideas that language provides a tool for thinking, both within and between individuals, and that we develop understanding in the social (inter-psychological) realm, which then transforms our own individual (intra-psychological) understanding, as in the example of understanding fractions in DigiTile (Chapter 3). This perspective also provides a good opportunity to look briefly at how children come to learn about collaborative interactions. First, I outline what I take shared meaning to involve, and briefly review psychological research into the experiences shaping the development of collaboration in early childhood. Then I return to the Augmented Knights’ Castle (AKC) to show how technology can shape the path from attention and cooperative play towards shared understanding. Next is a look at how imposing constraints on control can support the creation of shared understanding, with an emphasis on the role of dialogue and exploratory talk. The chapter finishes with an acknowledgement of the diversity of activity during collaboration and the range of ways that children approach it.

4.1 Constructing Shared Meaning: Development Through Early Childhood

Co-creating meaning is so much a part of our everyday activity that it is sometimes hard to see what an amazing process it is. Looking at the earliest emergence of shared meaning in infancy is a good way to consider what is involved, and why the physical properties of artefacts
matter. Vygotsky describes the infant’s early attempts at grasping objects, often involving failed reaches. The caregiver sees those failed reaches and understands the infant’s desire for the object, so may respond by actions such as bringing the object into reach, animating it and talking about the infant’s wishes. As Vygotsky (1930–1934/1978, p. 56) wrote, “when the mother comes to the child’s aid and realises this movement indicates something, the situation changes fundamentally. Pointing becomes a gesture for others”. Through joint action, a failed reach becomes a meaningful gesture that creates shared understanding between caregiver and infant, and sets up the participants as each having their own intentions and recognising the intentions of the other. There is not a single, unified literature on the development of collaboration: different research areas address particular aspects, such as shared attention or the understanding of ownership. The research described below applies to a culturally narrow sample of particular groups of children; cultural diversity is addressed later, in Sect. 4.4. Research on joint attention and early communication shows us the early roots of collaborative interaction. Caregiver–child interaction in the first two years of life has been seen as crucial to the development from dyadic awareness (mutual gaze between caregiver and child, for example) to triadic awareness (caregiver and child sharing attention to an object, for example). Not only does each partner attend to an object or experience, and not only can one partner track the other’s direction of gaze (gaze-following), but each is aware that the other is attending: awareness is joint. Caregivers can provide powerful scaffolding to support this, for example by manipulating objects of shared attention to create captivating sound and movement (Bakeman & Adamson, 1984).
In the first 14 months of life, infants may progress from (i) dyadic engagement, primarily sharing emotions and to-and-fro behaviour, to (ii) triadic engagement, involving a shared goal, where each partner monitors the behaviour of the other in pursuit of this goal, as in giving and taking games, and then to (iii) collaborative engagement, where actions are coordinated towards a shared goal, roles can be reversed, and one partner may help the other to achieve the shared goal (Tomasello et al., 2005). Brownell (2011) charts how caregivers in this period of early life support the development of cooperation (the term mostly used in this literature). All that is required of the infant before this is engagement, and the adult tends to expect that the infant will disengage whenever they
lose interest or regulatory control. The growing child becomes increasingly able to initiate joint action and to re-engage the adult (Bakeman & Adamson, 1984, 1986), using this by around 18 months of age to cooperate in shared goals, such as anticipating a heavily-laden adult’s need for a door to be opened (Warneken & Tomasello, 2006). Brownell argues that the shift to truly cooperative behaviour is developed through supportive joint interaction with an adult (caregiver), while the ability to act cooperatively with peers emerges towards the end of the second year (Brownell et al., 2006), with the increasing availability of language to understand and describe self and others’ feelings and actions. She gives a specific role to “conscious self-awareness, the ability to reflect consciously on oneself as an object of others’ perceptions, thoughts, and actions and simultaneously as an agent of one’s own similar states and behavior” (Brownell, 2011, p. 196). Measures of such abilities predicted children’s behavioural success in joint action with peers. There is thus clear evidence about when these capacities are first evident, but less about how collaborative competence is developed through childhood in everyday interaction with similar-aged peers, who will most likely not be offering the highly-structured scaffolding that familiar caregivers provide. Not surprisingly, there seems to be a time lag between structured interactions of shared meaning between caregiver and child, and peer collaboration. Children’s earliest peer interactions tend to be in play. Parallel-aware play, showing awareness of others but with little or no contingency, was the most common form of play observed in a study of 166 4–5-year-olds in three different US play centres (Robinson et al., 2003) and in at least 20% of observations of 75 2–4-year-olds in France (Barbu et al., 2011). 
The latter researchers found that associative play (playing with others, albeit without obvious organisation or structure) became most common between 3 and 5 years, and cooperative play, which involves coordination, and hence some mutual understanding, only topped other modes (almost 50% of the time) by 5–6 years of age. It is worth noting that all forms of play were observed in all age groups, but the frequencies changed. This suggests that different environments, different equipment and different support provided by playmates are an integral part of the development of cooperation in play. This suggestion is well-supported in Garte’s (2020) fascinating observations of collaborative competence in pre-schoolers attending five different New York Head Start programmes. In particular, she showed that higher levels of environmental flexibility, provided for example by multi-functional materials and
potential for freely moving through space, were associated with higher levels of cooperative collaboration in play. This high collaboration often happened in larger groups pursuing collective goals, in contrast with the closer interpersonal dynamics of smaller groups, which could even be disrupted by high levels of flexibility. Children are born into technology-rich worlds, and that technology comes with its own set of complex affordances and cultural meanings, as described in the shared e-book reading example in Chapter 1. The question of how joint engagement with digital media affects caregiver–child interaction, and hence the development of interactional competences, is a complex one beyond my current scope. There is evidence of some caregivers withdrawing from the scaffolding actions that they might have provided with non-digital media (Ewin et al., 2020: the ‘electronic babysitter’ idea), and of parents showing more social control and less reciprocity in tablet-based vs print-book interactions with their toddlers (Munzer et al., 2019). New technologies come with new practices and new norms: Is this device meant just for me to use, or to share? If we share it, how can we manage that sharing? Digital technology is often developed from existing artefacts, so we have small screens referred to as tablets, pads or notebooks. A multi-touch tabletop conveys echoes of how we use tables generally, as well as the assumptions we bring to the task or activity displayed on it. For the Playmobil® figures in the AKC, children come with assumptions about who is allowed to play with them, how long one person can keep hold of a figure and how exchanges of figures are negotiated. These cultural meanings appear very early (Huh & Friedman, 2017). Knowledge of what’s expected might be as reliant on experience of a particular setting as on chronological age.

4.2 From Attention Through Contingency and Control to Shared Understanding: The Augmented Knights’ Castle

Shared activity is negotiated not just through background assumptions and norms, but also through the nuances of small actions. As Stahl (2003, p. 9) expresses it:

collaborating people give frequent feedback to each other through subtle word choices, inflections, gaze, bodily orientations and gestures. When possible breakdowns occur indicating a divergence of interpretation,
explicit discussion will often ensue to the extent needed to restore a sense of shared understanding… A computer environment to support collaborative learning is not a character-less channel of communication, but is itself a complex designed artefact that embodies its own cluster of meanings.

Fine analysis of video helps in identifying how these small steps in interactions unfold over time. Interventions to improve communication, such as Video Interaction Guidance (VIG: Kennedy, 2011), provide a good example of how paying attention to the small detail of interactions helps in understanding the construction of close cooperative relationships. In VIG, attuned interactions are seen as a pyramid structure, with small moments of attention, posture, eye contact and reception, for example, underpinning more complex configurations of shared understanding, such as turn-taking, cooperation and guidance. In line with a VIG approach, I aim to show below how tweaking small movements can be powerful in shifting interactions towards closer collaboration. In Chapter 2, we saw that the AKC supported around twice as much cooperative play in five triads of children, compared to the six groups working with the unaugmented KC toy (Yuill et al., 2014). The focus was on what helped children make a transition from one state of play to another, and how the set design engaged children. One very important factor in the AKC’s success was how joint attention between children translated into cooperative play. The video coding software we used throughout the ShareIT project, Mangold INTERACT®, provided us with timelines enabling us to visualise how the play states of each child in the group changed over time and in relation to other events and behaviours, as in Fig. 4.1.

Attention bids: Fig. 4.1 shows what we know already, that cooperative play is more frequent with the AKC than the KC. It also shows that bouts of cooperative play are longer with the AKC than the KC. That gives us a clue as to why the audio-augmented set fosters more cooperative play. The videos showed how children often tried to get their peers’ attention, sharing discoveries of a new sound effect or figure, or suggesting a play theme. These bids for attention might be successful or could fail.
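The kind of bout summary that such timelines support can be sketched as below. The interval format and numbers are hypothetical, invented for illustration; they are not the INTERACT export format or the study’s data.

```python
# Sketch: summarising coded play-state intervals into bout counts and
# mean durations, the quantities visible in timelines like Fig. 4.1.
# (Hypothetical data format, not the Mangold INTERACT export.)

def bout_stats(intervals, state):
    """Return (number of bouts, mean bout length in seconds) for one state."""
    bouts = [end - start for (start, end, s) in intervals if s == state]
    if not bouts:
        return 0, 0.0
    return len(bouts), sum(bouts) / len(bouts)

# One child's coded timeline: (start_s, end_s, play_state)
child_1 = [
    (0, 40, "parallel"),
    (40, 160, "cooperative"),
    (160, 180, "solitary"),
    (180, 330, "cooperative"),
]

print(bout_stats(child_1, "cooperative"))   # (2, 135.0): two bouts, mean 135 s
```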
Fig. 4.1 Example INTERACT timelines (horizontal axis) for each child (1, 2, 3) and play type (onlooker, parallel, solitary, cooperative, top to bottom on the vertical axis) for a KC (upper) and AKC (lower) group. (Note: the lowest three rows show patterns of cooperative play over time for each child in the triad, with more frequent and longer bouts in the AKC than the KC. Parallel play dominates the KC timeline.)

We did some additional fine coding of a 7-minute section of play, 5 minutes into each play session, once children had settled into the activity. For each session, we coded every occasion when a child made a bid for attention. This might involve holding up a toy to say, ‘Look at this’, seeking eye contact or smiling at a partner, making a sound or speaking for a character, or suggesting a storyline. For each bid, we coded whether it succeeded in getting other children’s attention. Attention bids, whether verbal or non-verbal, were more likely to be successful in the AKC condition than in the KC: 68% vs 52%. Further, a higher success rate in getting attention predicted longer bouts of cooperative play in the AKC, but not in the KC condition. In contrast, in the KC, a failed attention bid tended to lead to relatively long bouts of solo play. This suggests a sort of snowball effect: a small action to get attention is boosted in the AKC condition, and acts to join up cooperative play into longer sequences of action, whereas a failure in the KC leads to disengagement.

Narrative co-construction: Could this snowball effect in itself contribute to the construction of shared understanding? We can investigate this from the final task we gave our groups: after the end of the free play session, we asked each group to plan and enact a short performance with the figures. This was a difficult task: we gave little support for narrative construction and little time or materials for planning. It is closer to problem-solving than the free-play part of the session, despite the end goal being only broadly defined. This means that children had to synchronise their ideas, taking into account each person’s contributions and weaving them into a coherent whole. Here we come back to
the ideas of Vygotsky, that language is a tool for constructing knowledge: it serves here both for children to articulate and expand their individual ideas and as a means of sharing ideas to co-create a narrative. We rated the resulting plays for ‘creativity’, adapting from a standard measure involving fairly intuitively-defined scales such as imagination, coherence and novelty. (Note that this is a relatively weak analysis, because we are now comparing scores for just six groups of children versus five.) Again, the AKC condition came out well, with a slightly but significantly higher creativity score than the KC. Across the study as a whole, the groups with the more creative narratives showed higher levels of cooperative play in the previous play session, and lower levels of parallel play, than the lower-creativity groups (fuller detail in Yuill et al., 2014).

We coded the language of the play narratives in each condition to get a better sense of why the AKC narratives tended to be more highly rated. First, we analysed what sort of statements children made, using a scheme adapted from Cassell and Ryokai (2001). Children made three kinds of utterance. They can act as narrator, moving the story along, such as ‘The queen wants to find her crown’; they can talk in character, ‘Oh, where is my crown?’; or they can address their peers with suggestions or debate about the activity as meta-narrator: ‘Let’s make the queen lose her crown’. The biggest difference in these types of utterance was that in the KC condition, children spent most of the time making meta-narrator comments, essentially debating without resolution what the play would be about. For example, KC Group 4 spent a long time discussing who the characters were:

Greg: Is he a baddie or goodie?
Ian: That’s not a baddie, that’s a goodie
Harry: That’s a baddie
Ian: Goodie, goodie, goodie...

And in another group:

Ailsa: Oh what about the fairy?
Bella: Get the fairy then
Charlie: There was no fairy
Ailsa: Just pretend there was the fairy in it though


In contrast, in the AKC condition, the three different types of speech were evenly balanced, with about one-third of each making up the whole conversation. The AKC groups were more likely to be co-creating a relatively coherent narrative, making suggestions for other children and providing openings for new story ideas and action. In this extract from Group 5, a missing crown suggests a search. Although Ali provides much of the narrative drive and Chris is relatively passive (though compliant), Ali and Ben provide guidance for Chris that means Chris takes a role in the story:

Ben: When the village heard that the crown was missing
Ali: The crown’s gone missing everybody, me and my wolf will go over there and have a look, come on wolf
Ben: And [Chris], you need to be the king now, say, you say, what happened?
Ben: What happened to the castle, you say they built it new again
Ali: What happened to the castle?
Chris: They built it new again
Ali: You say, tell the story that happened 10 years ago, [to C]
Chris: 10 years ago we had a big war and then they stole my crown
Ben: No, no, you don’t say they stole my crown, you say I lost my crown under a tree somewhere
Chris: I lost my crown under a tree somewhere
Ali: Can we help you find it?
Chris: Yes
Ali: I will send my wolf to have a look around
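The proportion comparison behind this contrast amounts to a simple tally over hand-coded utterances. The sketch below is illustrative only: the function name and example data are hypothetical, and the three categories follow the scheme adapted from Cassell and Ryokai (2001).

```python
# Sketch: proportions of the three utterance types in one group's narrative.
# (Hypothetical hand-coded data, not the study's transcripts.)

from collections import Counter

CODES = ("narrator", "in_character", "meta_narrator")

def type_proportions(coded_utterances):
    """coded_utterances: list of (utterance, code) pairs coded by hand."""
    counts = Counter(code for _, code in coded_utterances)
    total = sum(counts.values()) or 1
    return {code: counts[code] / total for code in CODES}

akc_group = [
    ("The queen wants to find her crown", "narrator"),
    ("Oh, where is my crown?", "in_character"),
    ("Let's make the queen lose her crown", "meta_narrator"),
]

props = type_proportions(akc_group)
print({k: round(v, 2) for k, v in props.items()})
# {'narrator': 0.33, 'in_character': 0.33, 'meta_narrator': 0.33}
```

An AKC-style group shows the roughly even thirds described above, whereas a KC-style tally would be dominated by the meta_narrator code.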

The narratives were short and produced with little support, but they also seem to illustrate the three different types of talk identified in the Spoken Language and New Technology (SLANT) project (e.g. Mercer & Littleton, 2010). KC Group 4’s ‘baddie-goodie’ conversation, shown above, involves assertion of meaning but no resolution: it is very much the sort of ‘Yes it is, no it’s not’ dialogue defined as disputational talk. AKC Group 5, immediately above, seems to reach the status of cumulative talk, described in SLANT as supporting cooperation (as distinct from collaboration) and involving sharing ideas and confirming others’ views. Rarer in this very short group exercise, but present in AKC Group 1 (below), is exploratory talk, which Mercer and colleagues see as a crucial part of collaborative activity. It involves children evaluating ideas, considering options and making constructive challenges. A possible reason why AKC stories were rated as more creative is this sparking of ideas, with the trust and cooperation needed to reach some sort of agreement.

N. YUILL

Admittedly, the example below is not the most complex, but suggests a willingness to negotiate meaning:

Del: We need to go somewhere safe where no one will find us
Fi: We should lock the door so we can get like safe
[…]
Del: Now close the door before anyone knows
Fi: But we need to find the locker for this door, haven’t we?
Del: It’s ok we’re safe
Fi: Well what happens if they come through, my queen?
Del: They couldn’t they can’t, it’s high
Fi: Yeah but what happens if they climb up the step?
Ellis: No but you don’t know that there’s gonna be a…
Del: We don’t—we don’t have stairs, we don’t know what’s gonna happen, they might even not fight us and give up. So just lock the doors and that’s it
Fi: And lock the windows so they won’t burn a fire of the dragon

My suggestion about how the technology supports more cooperative play runs as follows: when a child makes a bid to gain another child’s attention, verbally or non-verbally, it is slightly more likely to be successful in engaging the peer’s notice in the AKC than the KC condition because of the potential for a boost to interest from a sound effect. This increase in response from peers can then lead to further interaction, and success in keeping mutual attention, leading to longer bouts of social play where story ideas can be tried out on the basis of shared attentional focus. This seems likely to provide a stronger foundation for building shared meaning than the less frequent and shorter bursts of cooperation in the KC groups.

Audio augmentation → higher success of attention bids → longer cooperative bouts → more balanced narratives → more creative stories

The SLANT group’s work on collaborative talk provides evidence that this kind of dialogue can be increased through training, and that it supports better group performance as well as improved individual performance, supporting their Vygotskian-inspired view about the role of language in building understanding, shared and individual. Clearly, the technology will never do all the work required to stimulate exploratory talk and shared understanding: I address the role of training more fully in Chapter 5.

4

SHARED UNDERSTANDING


What are the implications for how technology can support construction of shared meaning, particularly given that the AKC technology was a hand-built one-off, and very likely benefited from the effect of being novel? There is nothing wrong with novelty for engaging groups, but of course it can’t last. I use the AKC as an example, within the Co-EnACT framework developed earlier, to show that even a modest boost to shared attention is a building block for sustained bouts of cooperative play and for creating shared understanding together. That attention might instead come from a skilled instructor, through violating expectations (Alcorn et al., 2014), using humour, or through different sensory means such as sound or touch. Providing an attention-grabber that is controllable by children and readily shared (as in audio-augmented figures) provides a step into collaborative play, as evidenced in the transition sequences shown in Chapter 2. Thus, careful consideration of how children can manage small moments of shared attention in collaborative groups should pay dividends for collaborative working. Chapter 5 provides further examples of technology to support attentional control for teachers to use in larger classroom groups.

Support for shared storytelling has been a fruitful area for collaborative technology design. An example that shows feasibility of use in ordinary classrooms is the TellTable app, designed for the then commercially available multi-touch table, Microsoft Surface™ (Cao et al., 2010), which was deployed in a primary school library for several weeks. Particularly valuable is the way the authors demonstrate how collaboration was supported both in the moment, with possibilities such as creating characters from drawings and photographs of children and artefacts present, and also over time and space, with characters from all stories held in a shared database for others to use and adapt, and the completed stories viewable on the library laptops.
For example, children might need to help each other create a large background area, perhaps using a photo of the library carpet for ‘grassland’. Over time, the children developed a community of stories, with some gaining a reputation through the school and stories being adapted and re-developed by other groups. The authors noted how the table attracted groups, rather than children working alone, partly because of the number of children wanting to use it in the limited deployment period, but also because children recognised the extra potential and fun in co-creating. Children who got the opportunity to use it alone reportedly recruited or waited for others to join them. There seemed to be a honeypot effect, in that children could watch others creating stories and


hence learn about the possibilities. Children might start planning days ahead, bring objects in from home to create new characters or objects, and replay stories on other devices, becoming conscious that the stories they created themselves would have an audience, and even a fanbase. Using their own objects, self-images and voices was a further motivating factor supporting strong engagement and construction of shared meaning. The TellTable technology set-up hence enabled mechanisms of encouraging engagement, shared attention and simultaneous control. The technology available in schools since TellTable was developed allows for many of the functions described, and children are very often confident and experienced in recording and curating images and sounds. Of course, the study showed examples of disagreement, undoing other children’s work and spirited disruption, but any frustrations caused in groups where children had less-developed collaborative skills seemed compensated for by the high degree of engagement reported across the school community and over time. There are good reasons why technologies might usefully enable shared control in ways that sometimes cause conflict. I now turn to an example where control is more tightly constrained, and actions pushed to be more closely contingent, to illustrate how children might be helped to experience contingency in action, supporting the construction of shared meaning.

4.3 Negotiating Shared Understanding: SCoSS Again

We looked earlier at children co-constructing narratives: this is a complex task and there is no right or wrong answer. However, much of the collaboration in problem-solving, in school settings especially, requires children to work out correct solutions to complex tasks through jointly developing greater understanding of the concepts involved, as in the example of DigiTile for fractions, in Chapter 3. I noted there the dual roles of dialogue in supporting shared and individual understanding. Dialogue itself is a collaborative process, as are the other more physical activities of moving objects on screens. Connected conversation involves one utterance relating in some way to a previous one, thus having contingency. Chapter 3 investigated how control can be used to support close collaboration, using the Separate Control of Shared Space (SCoSS) structure. We showed there that pre-schoolers demonstrated more contingent


behaviour in a SCoSS set-up than in a standard shared-control one. The main aim in designing SCoSS was to support better collaboration through software design, with support for the continued negotiation of shared meaning described by Roschelle and Teasley (1995) in their definition of collaboration. To reprise the rationale behind this in terms of the Co-EnACT framework, it involved participants in a joint activity having duplicate linked representations of a whole task, to support awareness and create contingency, and separate control of their own actions, with indicators to show where they agree or disagree with their partner, supporting construction of meaning together. In shared control, children are able to undo others’ work and it is possible for one person to dominate the interaction. In separate control, though, groups may end up working cooperatively on separate subtasks rather than co-constructing shared understanding. SCoSS provides visible but not dual-controllable spaces, which makes each person’s actions perceptually salient to the other while making it difficult for one child to undo another’s work.

It can be implemented as dual mice on a shared computer screen, user identification on a multi-touch table, or separate but WiFi-connected tablets. Child A’s mouse simply does not work in Child B’s space on the computer screen, nor A’s finger in B’s space on a multi-touch table, and with separate but physically adjacent tablets, Child A would have to be fairly pushy to lean over and use Child B’s tablet. Having seen hundreds of children using SCoSS in a wide range of schools, we’ve found this pushiness happened only rarely, and it is clear to both children that it is a transgression, even though we don’t explicitly say they shouldn’t do this. The shared representation of task states in SCoSS shows where children agree or disagree, acting as a clear visual resource for discussion of viewpoints.
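The control rules just described can be expressed as a minimal sketch. This is a hypothetical illustration, not the actual SCoSS software: all names here (`ScossTask`, `place`, `next_item`) are my own, and the real implementations were graphical applications. The sketch captures three ingredients: duplicate representations (one placement record per player), separate control (a player can act only in their own space), and visible agreement between the linked copies.

```python
# Hypothetical sketch of the SCoSS control rules; names are illustrative,
# not taken from the actual SCoSS software.

class ScossTask:
    """Two players with duplicate linked views and separate control."""

    def __init__(self, players, items):
        # Each player holds their own copy of the whole task state.
        self.placements = {p: {} for p in players}
        # Resources are released one at a time from a shared pool.
        self.pool = list(items)

    def place(self, actor, space_owner, item, slot):
        # Separate control: a player may only act in their own space.
        if actor != space_owner:
            raise PermissionError("cannot act in another player's space")
        self.placements[space_owner][item] = slot

    def agreement(self):
        # The linked representations make (dis)agreement visible: an item
        # 'agrees' when both players have placed it in the same slot.
        a, b = self.placements.values()
        return {i: a.get(i) == b.get(i) for i in set(a) | set(b)}

    def next_item(self):
        # Agreement gate: a new resource appears only once all current
        # placements agree, so partners must negotiate before moving on.
        agr = self.agreement()
        if agr and not all(agr.values()):
            return None  # disagreement: negotiate first
        return self.pool.pop(0) if self.pool else None
```

For instance, if child A sorts a picture but child B has not yet matched it, `next_item()` returns `None` until the two linked representations agree, modelling the constraint that neither partner can race ahead alone.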
The final constraint we provide is to require agreement at particular points before a new resource is provided, enabling the players to continue the activity. This means that players cannot simply continue independently throughout, each doing different things, and it can be especially helpful if one person is working faster than the other, because the faster partner has to attend to the slower one and negotiate to reach agreement. If this constraint were imposed by one partner refusing to move on, collaboration would most likely be lost. How do we know that the SCoSS design supports good shared understanding? First, we have some quantitative data showing pre-school children behaving more contingently in a simple picture-sorting task, as


discussed in Chapter 3 (Holt & Yuill, 2014). We also found that 7- to 9-year-old children doing a more complex sorting task in pairs made more sophisticated joint explanations in a SCoSS version of the task than in a two-mouse, single-representation version (Yuill et al., 2009). An example illustrates this point. Fred and Callum are trying to sort 16 words into a 2 × 2 table, with four words to go in each box (see Fig. 4.2; a fuller account of how SCoSS supports collaboration is available in Kerawalla et al., 2008). They get the words one at a time in the grey box and don’t know in advance how the words are to be classified. In fact, half the words begin with T, half with CH, and half are words for food, half are body parts. This combining of the surface form of words (initial letter) and word meaning (semantic category) is a ‘reading multiple classification’ (RMC) task, a particularly challenging problem for children of 7–9, and an excellent predictor of reading comprehension skill (Cartwright et al., 2017), since it requires the skill of combining lexical and semantic features of words. Here is a part of their conversation on two further problem sets, noting the times they mention the two dimensions of form and meaning:

Fig. 4.2 The WordCat task showing one player’s task state


Callum: Finger should go in there because it’s another part of the body [meaning]
Fred: And it begins with f [form]

...
Callum: Lettuce would go under lolly because it’s l [form]
Fred: Food! [meaning]
Callum: And it’s food [meaning]

...[next task]
Callum: Lung would go in there because it’s part of the body. [meaning]
Fred: And begins with l [form]

[Towards the end of the session, Fred is suddenly able to describe both form and meaning, which he does emphatically]
Fred: Chick would go in there because it begins with ch AND it’s a type of bird [form; meaning]
Callum: Yeah...
Fred (excited): Yes, AND! [highlighting the need to mention both dimensions]

The task is difficult, so the two boys have ended up each focusing on a single criterion, form or meaning, which divides the cognitive load. It is very difficult for a single child to focus on both at once, but the partner provides a ‘second brain’: inter-psychological knowledge, in Vygotsky’s terms. Finally, Fred manages to articulate both features in a single utterance, and his sense of the complexity of this is shown by his excited emphasis on the single word ‘AND’: he has managed to combine his own and his partner’s roles in the task intra-psychologically, to grasp, individually, the complexity of both dimensions. Although it is a very different task from creating a play narrative, the process of creating something together that is more than one can produce alone, and thereby increasing individual understanding, is a defining feature of collaboration, whether through tangible technology supporting shared attention or software set-ups that constrain control to enable discussion.
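The two-dimensional structure of an RMC grid can be made concrete in a few lines. The word lists and labels below are illustrative examples I have invented, not the study’s actual materials; the point is only that naming a cell requires conjoining a surface-form test with a meaning test, which is exactly what Fred’s emphatic ‘AND’ achieves.

```python
# Illustrative sketch of a 2 x 2 'reading multiple classification' grid:
# each cell is defined by an initial-letter criterion AND a semantic one.
# These example words are invented, not the actual study materials.

FOOD = {"toast", "tomato", "cheese", "chips"}
BODY = {"toe", "tongue", "chin", "chest"}

def classify(word):
    """Name the cell a word belongs to: a correct answer must combine
    the surface form (initial letters) with the meaning (category)."""
    letter = "ch" if word.startswith("ch") else "t"   # lexical dimension
    meaning = "food" if word in FOOD else "body"      # semantic dimension
    return letter, meaning  # neither dimension alone identifies the cell
```

In this toy version, `classify("cheese")` lands in the ch/food cell and `classify("tongue")` in the t/body cell; a child attending to only one dimension, as each boy did at first, can never distinguish all four cells.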


4.4 Diversity and Individual Differences: Other Forms and Means of Collaboration

Throughout this book I have adopted the Roschelle and Teasley approach to collaboration, involving tightly-coupled actions with the group negotiating shared meanings. This approach is underpinned by the idea that groups working together show convergent conceptual change. It derives from, and is concordant with, both Vygotskian and embodied approaches, in its focus on participants coming to share understanding and its emphasis on the way that the tools being used shape this process. It is important to recognise, though, that there are many individual and situational differences in how collaboration works. Tissenbaum et al. (2017) rightly pointed out that people move between individual, parallel and more loosely-coupled interactions, and that collaboration can also create new understanding in more informal, hands-on and self-chosen environments than school, such as science fairs and museums. Each person might learn something different, and learning is often exploratory, involving ‘tinkering’, rather than being curriculum-driven and solution-focused. Groups in these settings are often family-based, so include children of different ages and adults.

Tabletops, and designs such as SCoSS, have tended to be used in tightly-coupled ways in formal situations, often enforcing collaboration through constraints. However, they can also support more loosely-coupled exploration, with users working sometimes individually or in parallel and sometimes together. Tissenbaum et al. describe a museum tabletop exhibit, Oztoc, which involved working out how to build electrical circuits. Despite having different goals, the visitors to the exhibit related their own explorations to the tinkering of others, with each person learning about their own goals through watching, and through sharing experiences, suggestions and advice from the periphery or from different groups.
Divergent collaboration is also worth considering for groups who might otherwise find typical close collaboration difficult. Lechelt et al. (2018) described how a teacher supported loose collaboration in a special needs classroom by getting students, many of them autistic, to build and learn to program physical cubes containing sensors. The observations revealed ‘waves of collaborative interaction’ within and between groups, with students sharing successes, watching and imitating others, and being encouraged to explain to other groups how they had achieved particular


effects. Teacher scaffolding was a crucial part of this process. Chapter 6 focuses on collaboration in autism.

Even within convergent collaboration, there are different ways that children can work together, depending on their individual styles and aptitudes. For example, we noticed a range of ways that pairs of children worked together solving fraction problems in the DigiTile multi-touch table application (Rick et al., 2011). We contrasted three different pairs of children finding ways to make tile patterns comprising different fractions of specific colours. One pair, Chris and Dave, chose to work together, one colour at a time. Chris was more talkative and emotionally volatile, while Dave tended to be quieter but took the lead on physically moving pieces. The pair tended to take turns, with the one observing remaining closely engaged, as shown by their comments on actions. DigiTile works well for taking different roles, as one person can move tiles while the other watches the effect on the display, which constantly updates the fractions of each colour. Their session looked like a classic case of collaboration: sharing focus on the task, alternating between acting and reflecting, taking complementary roles and aiming for a shared goal.

In contrast, Amy and Ben split the task by each working on a separate colour. While this might seem like a ‘cooperation’ pattern, they communicated closely, describing aloud their own actions, keeping an eye on the other’s actions and sometimes mirroring each other’s movements and copying successful actions. A third pair, Emily and Ford, showed a pattern that seemed less collaborative. They worked independently and mostly in silence, and rarely paid attention to what the other was doing. For example, Emily finished her part and just stopped. After a pause, Ford noticed and asked if she had finished, to which she nodded, and Ford continued working.
Emily, who showed better understanding of the task than Ford, did then explain how she solved the problem, and Ford followed her guidance. Later on, Ford solved a problem, possibly through luck. Emily, noticing, silently used the same idea to finish her part. This diversity provides some useful pointers. First, all three pairs showed some of the features of collaboration such as awareness and imitation, though they varied in the extent to which they did so. Second, the different characteristics of each child affected their interaction. For example, Chris and Dave were complementary in their willingness to lead or to follow, as well as in their different emotional styles, which created a harmonious and enjoyable interaction. Emily and Ford were quite far apart in their level of initial understanding, meaning that where they did


interact, Emily (apparently somewhat reluctantly) took a guiding role. In addressing this diversity, we could decide either to add further constraints to manage interaction (as, for example, in the SCoSS architecture) or to have a more open-ended structure (such as the AKC) and support the children to manage their own joint activity. DigiTile also gave us a useful lesson about design: we had in fact designed two versions of the task, one where each child had different colours to work on (so dividing the task, in the hope of encouraging shared working) and the other where each child had access to all colours. We found no difference in patterns of joint action between the two conditions, suggesting that variability in what the children brought to the task outweighed any differences this particular design feature could produce.

Finally, I must acknowledge that the work considered here involves a limited subset of children in a narrow range of settings. There is not enough work on the cultural context of children’s collaborative working through technology, and there are some intriguing differences in dimensions such as fluidity of movement and physical intervention, as shown in a fascinating study comparing tabletop collaboration between young people in the UK, India and Finland (Jamil et al., 2017). More broadly, Rogoff and her colleagues in particular have criticised Western urban-centric assumptions about collaboration in the literature, and posited the superiority of collaborative skills in Mexican-heritage US children, with evidence of a link between lower collaboration and greater experience of Western schooling (Alcalá et al., 2018). Mejía-Arauz et al. (2018) distinguish negotiation from collaboration.
Negotiation is presented as a primarily Western individualist cultural practice in which separate subskills such as joint attention, imitation, gaze-following and pointing are acquired to create a mind-reading capacity, enabling children to take others’ perspectives and hence work together, in line with the dominant perspective provided. Such an approach, the authors argue, leads to a style of negotiation in which turn-taking, verbal explanation and ‘quiz’-style questioning (where the questioner knows the answer and is seeking to discover if the recipient also knows) are common. Collaboration, in the Rogoff group’s work, seems more akin to the Roschelle and Teasley characterisation, though these two literatures do not interact: it is “not a negotiation between different perspectives” or “the sum of individual contributions” but “a coordination of different roles and resources, mutually organised by the activity” (all Mejía-Arauz et al., 2018, p. 118), with the sort of “dynamic and reciprocal adaptation” (ibid., p. 121) described by Roschelle and


Teasley. Rhythm, reciprocity and synchrony are all seen as relevant in this model, and I return to these aspects when looking at different modes of collaboration in Chapter 6 on autism.

References

Alcalá, L., Rogoff, B., & Fraire, A. L. (2018). Sophisticated collaboration is common among Mexican-heritage US children. Proceedings of the National Academy of Sciences, 115(45), 11377–11384.
Alcorn, A. M., Pain, H., & Good, J. (2014). Motivating children’s initiations with novelty and surprise: Initial design recommendations for autism. In Proceedings of the 2014 Conference on Interaction Design and Children (pp. 225–228). Aarhus.
Bakeman, R., & Adamson, L. B. (1984). Coordinating attention to people and objects in mother–infant and peer–infant interaction. Child Development, 55, 1278–1289.
Bakeman, R., & Adamson, L. B. (1986). Infants’ conventionalized acts: Gestures and words with mothers and peers. Infant Behavior and Development, 9(2), 215–230.
Barbu, S., Cabanes, G., & Le Maner-Idrissi, G. (2011). Boys and girls on the playground: Sex differences in social development are not stable across early childhood. PLoS One, 6(1), e16407.
Brownell, C. A. (2011). Early developments in joint action. Review of Philosophy and Psychology, 2(2), 193–211.
Brownell, C. A., Ramani, G. B., & Zerwas, S. (2006). Becoming a social partner with peers: Cooperation and social understanding in one- and two-year-olds. Child Development, 77(4), 803–821.
Cao, X., Lindley, S. E., Helmes, J., & Sellen, A. (2010). Telling the whole story: Anticipation, inspiration and reputation in a field deployment of TellTable. Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work, ACM, 251–260.
Cartwright, K. B., Bock, A. M., Coppage, E. A., Hodgkiss, M. D., & Nelson, M. I. (2017). A comparison of cognitive flexibility and metalinguistic skills in adult good and poor comprehenders. Journal of Research in Reading, 40(2), 139–152.
Cassell, J., & Ryokai, K. (2001). Making space for voice: Technologies to support children’s fantasy and storytelling. Personal and Ubiquitous Computing, 5(3), 169–190.
Ewin, C. A., Reupert, A. E., McLean, L. A., & Ewin, C. J. (2020). The impact of joint media engagement on parent–child interactions: A systematic review. Human Behavior and Emerging Technologies, 3(2), 230–254.
Garte, R. (2020). Collaborative competence during preschoolers’ peer interactions: Considering multiple levels of context within classrooms. Integrative Psychological and Behavioral Science, 54(1), 30–51.
Göncü, A. (1993). Development of intersubjectivity in social pretend play. Human Development, 36(4), 185–198.
Holt, S., & Yuill, N. (2014). Facilitating other-awareness in low-functioning children with autism and typically-developing preschoolers using dual-control technology. Journal of Autism and Developmental Disorders, 44(1), 1–13.
Huh, M., & Friedman, O. (2017). Young children’s understanding of the limits and benefits of group ownership. Developmental Psychology, 53(4), 686.
Jamil, I., Montero, C. S., Perry, M., O’Hara, K., Karnik, A., Pihlainen, K., Marshall, J., Jha, M., Gupta, S., & Subramanian, S. (2017). Collaborating around digital tabletops: Children’s physical strategies from India, the UK and Finland. ACM Transactions on Computer-Human Interaction (TOCHI), 24(3), 1–30.
Kennedy, H. (2011). What is video interaction guidance (VIG)? In M. Landor, L. Todd & H. Kennedy (Eds.), Video interaction guidance: A relationship-based intervention to promote attunement, empathy and wellbeing. Jessica Kingsley, 2042.
Kerawalla, L., Pearce, D., Yuill, N., Luckin, R., & Harris, A. (2008). “I’m keeping those there, are you?” The role of a new user interface paradigm—Separate Control of Shared Space (SCoSS)—in the collaborative decision-making process. Computers & Education, 50(1), 193–206.
Lechelt, Z., Rogers, Y., Yuill, N., Nagl, L., Ragone, G., & Marquardt, N. (2018). Inclusive computing in special needs classrooms: Designing for all. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 11–22.
Mejía-Arauz, R., Rogoff, B., Dayton, A., & Henne-Ochoa, R. (2018). Collaboration or negotiation: Two ways of interacting suggest how shared thinking develops. Current Opinion in Psychology, 23, 117–123.
Mercer, N., & Littleton, K. (2010). The significance of educational dialogues between primary school children. Educational Dialogues: Understanding and Promoting Productive Interaction, 271–288.
Munzer, T. G., Miller, A. L., Weeks, H. M., Kaciroti, N., & Radesky, J. (2019). Parent–toddler social reciprocity during reading from electronic tablets vs print books. JAMA Pediatrics, 173(11), 1076–1083.
Rick, J., Marshall, P., & Yuill, N. (2011). Beyond one-size-fits-all: How interactive tabletops support collaborative learning. Proceedings of the 10th International Conference on Interaction Design and Children, ACM, 109–117.
Robinson, C. C., Anderson, G. T., Porter, C. L., Hart, C. H., & Wouden-Miller, M. (2003). Sequential transition patterns of preschoolers’ social interactions during child-initiated play: Is parallel-aware play a bidirectional bridge to other play states? Early Childhood Research Quarterly, 18(1), 3–21.
Roschelle, J., & Teasley, S. D. (1995). The construction of shared knowledge in collaborative problem solving. In Computer supported collaborative learning (pp. 69–97). Springer.
Stahl, G. (2003). Meaning and interpretation in collaboration. In B. Wasson, S. Ludvigsen & U. Hoppe (Eds.), Designing for change in networked learning environments. Springer, 523–532.
Tissenbaum, M., Berland, M., & Lyons, L. (2017). DCLM framework: Understanding collaboration in open-ended tabletop learning environments. International Journal of Computer-Supported Collaborative Learning, 12(1), 35–64.
Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675–691.
Vygotsky, L. S. (1930–1934/1978). Mind in society: The development of higher psychological processes (M. Cole, V. John-Steiner, S. Scribner, & E. Souberman, Eds.). Cambridge, MA: Harvard University Press.
Warneken, F., & Tomasello, M. (2006). Altruistic helping in human infants and young chimpanzees. Science, 311(5765), 1301–1303.
Yuill, N., Hinske, S., Williams, S. E., & Leith, G. (2014). How getting noticed helps getting on: Successful attention capture doubles children’s cooperative play. Frontiers in Psychology, 5, 1–10.
Yuill, N., Pearce, D., Kerawalla, L., Harris, A., & Luckin, R. (2009). How technology for comprehension training can support conversation towards the joint construction of meaning. Journal of Research in Reading, 32(1), 109–125.

CHAPTER 5

Collaborative Technology in the Classroom

Abstract Children enact and develop classroom collaboration through dialogue, and technology alone cannot make this happen. Despite high spending on technology in schools, the focus is more on content-sharing and subject knowledge than on tools for fostering peer collaboration. Interactive whiteboards bring benefits of awareness, control and shared space, but not always peer collaborative working. Tablets can lead to isolated working if used 1:1, but attention to awareness, control and contingency can mitigate this. Training and a culture of collaborative working, careful composition of student groups and separate shared displays can also help. Group size, screen size, experience, training and linking across devices can all support collaboration. Keywords Dialogue · Whiteboards · Tablets · Culture · Group composition · Orchestration

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. N. Yuill, Technology to Support Children’s Collaborative Interactions, https://doi.org/10.1007/978-3-030-75047-3_5

In this chapter, I turn to how technology is used in the classroom to support collaboration. After a brief look at attitudes to technology use in schools, I then organise the chapter by type of technology, in relation to the Co-EnACT collaboration framework already described. Schools have a diverse range of technologies, but the large majority have two forms of technology available in most classrooms: interactive whiteboards (IWBs)


and small screens, whether those are tablets, laptops, desktops or a combination of these. These screen technologies play a large role in many children’s school lives, as well as being familiar to most in their home and social lives too. I then move to the conversational side of collaboration, by looking at interventions that seek to train and support children in collaborative peer discussion. Dialogic talk is a primary means whereby collaboration is enacted and developed in the classroom, and technology alone cannot make it happen. It also involves factors such as school and family culture, social relations and group size. I then look at how we might use technologies with each other to augment their value—cross-device collaboration—with some examples of such technology used in classrooms, suggesting new possibilities for supporting collaborative work through technology.

5.1 Technology and Collaboration in Schools: A Pessimistic Picture

Peer collaborative working has long been considered an important feature of classroom activity, as we saw in the section on dialogic talk in Chapter 4. This is internationally recognised in a recently-added category of the international education comparison Programme for International Student Assessment (PISA, 2019, p. 1), called “Collaborative Problem Solving: the capacity of an individual to effectively engage in a process whereby two or more agents attempt to solve a problem by sharing the understanding and effort required to come to a solution and pooling their knowledge, skills and efforts to reach that solution”. According to a 2019 UK survey by Promethean, a commercial supplier of large displays and lesson-planning tools, almost 40% of senior education leaders surveyed said they wanted to use technology to increase collaboration in school (Promethean, 2020).

Despite this, there seems little reason to be optimistic about the place of technology in supporting children’s face-to-face collaboration. There are two factors here: the particular focus of commercially-available technology, and lack of support for teachers’ roles. Examples given in the report focus on technology for online staff meetings and cloud-based learning (an undisputed benefit of technology in schools), rather than on children’s face-to-face collaboration. The most frequent uses of technology reported in the survey were for remote working and making content centrally available: clearly important, but not directly concerned with children collaborating in the

5

COLLABORATIVE TECHNOLOGY IN THE CLASSROOM

85

classroom. The second reason for pessimism about technology-mediated classroom collaboration is in staff support. Only 16% of teachers in the survey said they had received adequate training in education technology. This is alongside concern about teachers’ already high workloads. Almost all respondents reported use of IWBs, laptops, desktops and tablets, but their role was often framed in terms of the need for children to learn about technology for their future careers, rather than in supporting the development of processes such as collaborative learning. This is against a background of school surveys citing fundamental handicaps such as lack of hardware and software, poor connectivity and limited teacher training (National Literacy Trust, 2019), despite a figure of almost £900 m being quoted as annual spend on technology in UK schools (Blundell, n.d.). Research reviews of technology use in schools have a similarly pessimistic tone, especially in relation to collaboration. For example, Hornsby (2010, p. 6) said: “teacher-centric approaches to design have resulted in technology being unable to accommodate… multiple access to shared resources, distributed participation, and structured collaborative reflection”. Higgins and Siddle (2016, p. 270) offered an equally sobering assessment of the overall role of technology in education: “essentially, there is no convincing evidence for the general impact of digital technology on learning outcomes” and this from a group that has conducted some of the most inventive work on using shared surfaces collaboratively in classrooms (e.g. Higgins et al., 2011). As these authors argued, the poor outlook reflects, in part at least, the ways that technology is used. In a review of classroom technology, Major et al. (2018, p. 2014) noted: “the fate for much technology that is ‘parachuted’ into schools is that it will be used to support existing pedagogies, or that surface features will be used to ‘keep students happy’”. 
Despite this bleak picture, the ways that existing technologies in classrooms are used do affect how well children can collaborate, as laid out in the review of research below. It’s worth noting too that ingenious teachers find ways to work round structural and equipment difficulties: I have seen inspiring individual examples of tech enthusiasts developing creative ways of using technology collaboratively in classrooms, and experimental technologies used inventively in research (Lechelt et al., 2018; Rogers et al., 2017). I first turn to how the technologies common to most UK classrooms might encourage the features identified earlier as constituting collaboration: IWBs and tablets.


5.2 Interactive Whiteboards

Major et al. (2018), in their scoping review of studies into digital technologies and classroom dialogue, summarised ways that interactive whiteboards can support collaboration in principle. IWBs present different perspectives in a large shared space, allow co-creation of artefacts that can then be more widely shared with others, support collaborative reflection through actions such as shared annotation, and provide the facility to co-reference specific items by enlarging or focusing and moving items around freely to share content easily—all activities that could be done with pen and paper, but with more efficient archiving and tracing of activity, easy editing and a ready window onto the wider world of the internet, including social networking through blogs, video channels etc. Of course, there are associated limitations, such as difficulties in inputting text accurately, differences in users’ access to and familiarity with software, technical failure and administrative problems, such as recalling passwords or managing access to devices. For collaborative processes in terms of the Co-EnACT model, IWBs therefore readily support shared awareness, provide space for contingent actions on the same objects and afford relatively unconstrained shared control, with the associated challenges of how to regulate input and awareness.

IWBs provide a similarly large shared area to tabletops (Chapter 3), but the difference in orientation, vertical or horizontal, alters how collaboration works. Rogers and Lindley (2004) compared groups of adults working on a garden design task using large tabletop touch surfaces in these two different orientations. They noted that groups found it harder to collaborate using the vertical (IWB) surface.
A horizontal surface enabled better awareness of what others were doing, as the three sat close to each other, whereas the vertical IWB condition usually involved one person standing writing at the board, with their back to the other two (so lower mutual awareness), who were seated in chairs by a small table. Group members transferred control between themselves more often in the horizontal condition than in the vertical. There was a stylus to use on the surface, and one person would use it then place it back on the table, perhaps to do some writing on paper or to point to something: this marked the stylus clearly as available for the others to use. For the vertical surface, one specific person tended to spend most of the session at the board with the stylus, either making most of the decisions or waiting to be told what to do by the other two people.


Of course, IWBs in real classrooms differ in their physical and cultural properties from an experimental set-up. There tend to be several pens, which could support user identification, and locations to put them while not in use. However, there can be a sense that the IWB is the ‘teacher’s space’: perhaps an example of ‘legacy bias’ (Plank et al., 2017), as the board tends to replace a chalkboard, placed at the front of the class where the teacher resides. Further, the mounting of an IWB usually renders large parts of the surface inaccessible to smaller children.

Mercer et al. (2010) described groups of three children using an IWB for discussion in science lessons. Children had previously been given training in discussion, with rules of talk. The authors noted how useful the shared space was, while acknowledging that similar features would be available with a low-tech resource such as paper. Even so, the groups clearly experienced the disadvantages of vertical space, in the imposition of turn-taking to manage shared control, the unequal access to space for children of different heights and in the apparently inevitable technical challenges, when they had to seek the help of the teacher. Davidsen and Vanderlinde (2016) provided an instructive illustration of the fact that just putting some large-screen technology into classrooms will not make collaboration happen. They described instances of fairly conflictual interactions in pairs of 7–8-year-olds in two Danish classrooms equipped with a large IWB plus some 23-inch touchscreens for pupils to work round in small groups. The researcher and teacher worked together to address these problems.

Higgins (2010) observed whiteboard use in primary classrooms over several weeks, across two successive school years, and found interesting differences from lessons without use of the IWB.
IWB lessons had more whole-class than small-group work, but the whole-class work with IWBs tended to have more evaluative talk from the teacher and longer turns by pupils. IWB lessons were faster-paced, hence producing a generally stronger ‘flow’ in lessons. Both teachers and pupils spoke very positively about IWBs in class, though Higgins suggests there is no clear evidence for any improved learning outcomes. A similar conclusion results from a systematic review of research into IWBs and learning by Kyriakou and Higgins (2016, pp. 270–271). Their conclusion:

reflects the IWB’s complex potential and how a single technological device can be exploited in such diverse ways. The indications are that it is not merely about the technology and its uses, but about aligning its use with more effective and dialogic approaches to teaching. Dialogic teaching,
of course, does not require technology… [T]here is greater potential in ICT to support dialogic teaching than witnessed presently, underpinning the need to shift towards a more active role for learners in orchestrating resources to support their own learning.

5.3 Tablets

As with IWBs, tablets are sometimes cited as supporting collaborative learning, but again, the meaning of this claim varies widely, and often does not refer to face-to-face peer collaboration but to other forms of co-working, such as the ability of teachers to collaborate on assessment or performance recording, or the ability to work online (e.g. see Heinrich, 2012). School resources and practices in relation to tablets vary quite widely: Mangafa et al. (2016) reported schools using tablets as rewards for work, motivators to direct students’ attention, for practising turn-taking and waiting, and for delivering specific curriculum materials. Without denying the avowed benefits of tablets as individual tools for accessing online and virtual worlds, none of these in itself speaks to collaborative working. Here I look first at personal vs communal use of tablets, and then at examples of research in classrooms using different configurations of tablets.

There are clearly factors that discourage the use of tablets for face-to-face peer group collaborative working. Some schools design work round 1-tablet-per-child, where children carry with them their own personal device that only they can access. The design and features of tablets mark them as personal, owned devices. In one study, when the first iPads appeared in the UK, we tried to encourage communal use at a science fair demonstration by giving families a ‘pass the iPad’ drawing game, where groups had to exchange tablets after each round, compared to a paper version of the same game (Yuill et al., 2013). We had trouble getting some children to let go of ‘their’ device and ended up removing the distinctively-coloured covers that marked tablets as individuated devices rather than communal resources. Most children will also have the home experience of ‘personal’ technology such as smartphones or tablets managed with content and setting preferences to be individual to the owner.
This, and the warnings children will have heard about privacy and monitoring, do not conduce to the idea of sharing devices. As we saw in shared reading (Chapter 1), the physical and cultural
aspects of tablets can mean a push towards more isolated working (as in the vulture posture adopted by some children reading with a tablet). ‘Personal computing’ in classrooms is not inevitable though. In a parallel to the situation when desktop computers first appeared in classrooms, many schools do not have enough tablets to provide one per child and may have a tablet trolley that moves from one classroom to another. Sharing a desktop computer tended to mean distribution of labour, with one person having control of the mouse and one the keyboard (Crook, 1996). For tablets, though, anyone within hand’s reach can have control, and web connectivity means that material can fairly readily be shared across devices too. Sharing a tablet should certainly support good shared awareness. Even so, some actions, such as selecting items from a menu, are single-user operations. That means if collaboration is to happen, there will need to be negotiation of control. Jakonen and Niemi (2020) observed small groups of 9–10-year-old Finnish students working on a collaborative cartoon-drawing app on a shared tablet and noted that control seemed to be commonly achieved through embodied means rather than by verbal negotiation, as we saw earlier in the case of tabletops. Children used their arms and posture for blocking or resisting movements and to gain control. If children were seen as too forceful in blocking, other participants would object verbally. Thus, whether tablets are used 1:1 or many:1, attention to awareness, control and contingency for supporting shared understanding is important.

A positive view of using 1:1 tablets to support collaborative working in Hong Kong primary school classrooms is provided by Li et al.
(2010), who highlighted the broad aspects of school culture and values that supported collaborative use, as well as observing examples of how children shared resources and co-produced work through Microsoft OneNote software on their networked tablets, compared with a class without tablets. They note that “the group structure in the Tablet PC class was relatively flat […] it enabled students to have equal opportunity to participate in collective decision making and negotiation of meaning” (ibid., p. 179). The differences between 1:1 and many:1 patterns of tablet use were studied by Lin et al. (2012). They reported on the use of ‘Group Scribbles’ software on tablets linked to an IWB, used in two different ways by two classes of 12-year-olds in Taiwan making concept maps. 1:1 groups of four had a tablet each, and shared information on a communal ‘group board’ area shown on each tablet, while many:1 groups of four had just
one shared tablet between them, and managed this by choosing a single member to create and edit a shared concept map on the tablet. For the overall outcome, both conditions showed similar levels of performance, but the ways they worked together differed according to the technology available. The 1:1 groups made more notes than the many:1 groups, but these were of no higher quality. It seemed that the many:1 groups discussed and agreed whether to post each specific note, thus providing some quality control, whereas the 1:1 groups acted more individually to upload notes without consulting their peers. The 1:1 groups also showed greater variability in performance: the authors argued that the constraint imposed on many:1 groups of having just one device pushed them to work together more. Even so, the patterns of interaction appeared to be more equitable among the 1:1 groups, because in most of the many:1 groups, one child (usually the least able) was left out by the other three and, unlike their more capable peers, the least able did not tend to show much improvement. As the authors point out, both set-ups produce learning gains, and the many:1 set-up would work better if students were trained in collaborative working, were well-matched in ability and also had a separate shared display to show their work in progress clearly. As this study showed, small groups round a typical-size tablet need to be very close together to see what is happening: this might cause greater shared focus, or disengagement if control isn’t negotiated well.

This brings us to the question of screen size, given that screens are increasingly available in different formats and sizes. Zagermann et al. (2016) carefully examined the influence of screen size on pairs of university students working collaboratively for up to 90 minutes on a complex detection task to uncover a hidden plot.
Each participant had a typical-size tablet on which they could view documents that were marked by their own identifying background colour. The documents could also be shared with the other user’s tablet, as well as being placed between the pair on a separate larger horizontal display. The large display accommodated many documents that pairs could move round and rearrange to help them reflect on the problem. The large display varied in size, with a diagonal of 27 cm (roughly tablet size), 68.5 cm or 140 cm. For comparison, most of our ShareIT studies on tabletops used an 80 cm diagonal surface. There was no difference in quality of solutions with different tablet sizes, but behaviour differed, meaning collaboration worked differently. The largest surface tended to draw users’ attention to itself, meaning they showed less attention to the partner in comparison with the smaller shared
display. The largest display also afforded plentiful opportunity for moving documents around, but when participants used smaller displays, they compensated by using search functions and keywords to find material. They showed more playing around and moving objects with the largest surface, perhaps supporting better exploration. For the small surface, pairs tended to be either talking or moving items around, whereas with the large surface, pairs more often talked and moved objects simultaneously. Pairs using large surfaces tended to rate them more highly, and ‘collective sensemaking’ seemed well-supported by the large space. Nevertheless, these adults, probably used to working collaboratively, found equally effective ways to work with a smaller surface: having a personal space and a shared space, whatever its size, was generally appreciated.

Size of group will clearly also make a difference. In many of our studies, we deliberately challenged children by using groups of three: in our experience this made coordination harder than in a pair, and made it possible for one child to be left out. Shared awareness and contingency can also be harder in large groups of five or more (Garte, 2015). In the SPRinG project (Social Pedagogic Research into Group Work: Blatchford et al., 2005), a series of investigations of classroom peer grouping, teachers were encouraged to use pairs for younger children up to about 7 years of age, pairs to fours up to age 11, and groups of four to six at older ages, with the number depending on the particular task requirements. Shared tabletops produced high levels of verbal and action equity even in groups of up to eight undergraduate students (Westendorf et al., 2017), with these levels of equity arising when the octets organised themselves into four pairs working together, rather than trying to work as a whole group.
It is difficult to review how specific tablet apps influence collaboration given the huge range of apps and the very small number that explicitly consider or provide facilities for shared use, other than software specifically designed for collaboration (see Chapter 6 for the Chatlab Connect app we developed). Flewitt et al. (2015, p. 297) noted “whilst commercially produced apps may use state-of-the-art imagery, they are mostly based on outmoded behaviourist and/or transmission theories of learning, where the user practises particular skills and is rewarded with tokens of accomplishment and progress”. Collaboration-focused apps are hard to find, and the clearest examples are in collaborative or cooperative gaming. These seem mostly to be role-play and shooting games, but where they involve in-person interaction, there is little doubt that collaboration is a fundamentally embodied process (see for example https://youtu.be/WrttGW2KlGI). Flewitt et al. (ibid.) cite staff observations in a UK nursery and reception setting (3–5-year-olds) of children’s patient sharing, turn-taking and shared pleasure in using tablet apps, but there were also instances of friction and problems caused by too many fingers on the screen in unsupervised sharing. Just providing tablets for shared working does not provide structured support for control or contingency of behaviour other than what children bring to the interaction themselves. This brings us to the need for instruction and support that teachers provide for children collaborating through technology.

5.4 Dialogue: Collaborative Discussion Needs Scaffolding

There is a large and well-researched body of work in education on the advantages of group discussion in supporting shared understanding and learning, and multiple projects that provide resources on supporting dialogic talk (see also Chapter 4), such as Thinking Together (Dawes et al., 2000) and the SPRinG project on peer collaboration in classrooms (Baines et al., 2007). Mercer and Hodgkinson (2008: Abstract) argue that “… classroom talk… is the most important educational tool for guiding the development of understanding and for jointly constructing knowledge”. Mercer et al. (1999), in their practical programme Talk, Reasoning and Computers (TRAC), provide ground rules for children’s talk (ibid., pp. 98–99):

1. All relevant information is shared;
2. The group seeks to reach agreement;
3. The group takes responsibility for decisions;
4. Reasons are expected;
5. Challenges are accepted;
6. Alternatives are discussed before a decision is taken; and
7. All in the group are encouraged to speak by other group members.

The first three rules encourage what the authors term ‘cumulative talk’, where one speaker builds on what another has said, while rules 4 and 5 promote ‘exploratory talk’, focusing on joint understanding (and not, for example, disagreeing then trying to win an argument). The final two rules were developed through observing how groups worked together
best. The software used in TRAC was primarily a means of delivering material efficiently via a standard shared computer/single mouse set-up, rather than being explicitly supportive of collaboration. Even so, the researchers note some simple factors that should support collaboration. Evidence needed for the task was shown clearly on the screen so that children could point to it and use it as a shared visual resource, there was no encouragement to take turns—as mentioned earlier, this does not always conduce to shared engagement—and there were simple on-screen choices rather than a need for lots of typing. The materials for discussion followed the style used in the Thinking Together research programme, in being complex and requiring reflection. Mercer et al. (2019) argued that, however good the technology support, children needed preparation and training to work collaboratively. Further, supportive technology is of limited use without teachers understanding what is needed to support collaboration—something I hope to have cast light on here.

There is good evidence for the benefits of discussion training as a necessary accompaniment to technology support: Wegerif et al. (1998, Study 1) found significantly better understanding in classes given software plus discussion training, compared to a class just given the software. Mercer et al. (1999) found higher scores for both individuals and groups for children given their TRAC dialogic activities with discussion training, compared with children given activities but no training. Presumably, training children in discussion also helps the teacher’s understanding of the importance of dialogic talk.

One particularly clever technology to help children (and teachers) reflect on dialogic talk is Talk Factory (Kerawalla et al., 2013), illustrated in Fig. 5.1. A child (or teacher) is instructed to listen to classroom discussion and to press a computer key every time speakers utter particular sorts of talk.
A large shared display (IWB) reflects the different types of talk so that everyone in the class sees a simple colour-coded running display of how they are doing. This helps everyone (children and staff) understand and recognise the different types of talk and gives clear shared awareness of how the class is doing.

As well as supporting individual children in discussion, ways of structuring groups affect the quality of collaboration. Older work from a social psychology perspective created models of cooperative (sic) learning for classrooms showing that relatively simple measures could achieve better group working. One set of studies in particular assessed individual learning outcomes along with behaviour during groupwork sessions. Lew et al. (1986) compared different class structures over the course of a


Fig. 5.1 Features of Talk Factory showing talk types (right-hand side), total for each type (upper left) and running timeline (lower left) (Source Kerawalla [2015], https://www.sciencedirect.com/science/article/abs/pii/S0883035514000998)

school year for a class of 19 sixth-graders (11–12 years old), together with a specific focus on four targeted students who were socially isolated and performing below expectations. The students worked best in groups where there was interdependence—rewards were equal for each person in the group and all had a goal to achieve—but notably, the less-skilled students in particular needed some simple tuition and encouragement to behave in ways that supported collaboration: “sharing ideas and information, directing by keeping the group on task and asking task-related questions, praising and encouraging the task-related contributions of other members, and checking to make sure everyone in the group understood what was being taught” (ibid., p. 480). The class achieved more on average with the collaborative tuition than a class without, and also showed more of the taught collaborative behaviours (as listed above) during weekly observations of their interactions. The targeted students freely chose working with others more often after the intervention and were also less likely to be rejected by peers in a sociometric measure.

The Social Pedagogic Research into Group Work project (SPRinG: Blatchford et al., 2005) is an extensively-researched programme of teaching peer-group collaboration skills in the classroom. An evaluation involving nearly 2000 UK 8–10-year-olds, with a subset of the four-child groups being filmed for close analysis, showed higher engagement, active discussion and joint reasoning and fewer obstructive behaviours than
found in normal practice. Control groups showed some group reasoning, but the trained groups did better: the training aims to develop trust, sensitivity and respect, and valuing group cooperation rather than blocking others’ contributions.

This raises the question of how skilled school students currently are in collaborative group work. The PISA data mentioned earlier tested collaborative problem-solving across 57 nations in 15-year-old students (PISA 2015 Results (Volume V), 2017). This was assessed individually through online choices of text responses within a collaborative problem-solving (CPS) conversation with three virtual peers (validated by comparison with a smaller study with in-person peers). Text messages in a chat box represented the three ‘peers’. One sample task was to plan a school visit from a group of international students. The student had to select one of four possible chat responses to the partners at many different points in the task, showing for example that they could identify and propose useful next steps, consider relationships within the group, build on others’ suggestions and help create shared representations of the problem. There was wide variability between countries, but in every country, girls out-performed boys on this task, whereas boys out-performed girls on individual problem-solving tasks. The report links this with girls’ higher valuing of relationships and boys’ higher valuing of teamwork. Many of the factors identified as linked to higher scores can be addressed. For example, students who said they often talked with their parents did better on the CPS task. The authors also argued that more diverse schools (i.e. with more immigrant students) do better on CPS than less diverse schools, with the emphasis on comparing different perspectives.

The PISA study reported on use of technology at school and at home, to identify possible reasons for differences in CPS performance.
Intriguingly, schools that used technology the most were lowest in CPS performance, while those with moderate use did best. One of the many possible interpretations of this result is that just putting in more technology on its own is not effective: possibly the ‘moderate tech’ schools were using technology more effectively and sharing more, and the high-tech schools might be relying more on the technology than on how interaction through technology was supported. Given that CPS scores overall had much room for improvement, there is a real role for using technology in better ways to support collaboration. One means of analysing this could be to use data analytics such as the ‘Nonverbal Indexes of Students’ Physical Interactivity’ (NISPI) system (Cukurova
et al., 2018). An intriguing finding in PISA is that students who reported using online social networks more at home scored better on CPS and those reporting playing video games more at home scored lower (and also tended to value relationships less). Girls tended to report more use of online networks than boys did, and the reverse was true for online gaming. Notably, the question about gaming appeared to involve so-called ‘collaborative’ online gaming. These associations on their own don’t prove causal links in either direction, and the large-scale testing couldn’t look at the embodied behaviour I have discussed in this book. However, the findings highlight the need to take very seriously the role of social relationships in technology: the essence of these relationships is created in the to-and-fro of live interaction.

5.5 Cross-Device Collaboration and Classroom Orchestration

As we saw, many classrooms have two types of technology readily available: a large shared display, and individual devices, whether tablets, laptops or desktop computers. Although we looked at these separately above, they are of course used in combination. The very broadest approach to this issue, taking in the entire design and management of learning activities, is termed ‘classroom orchestration’ and has involved some very inventive experimental technologies (Dillenbourg & Jermann, 2010). I do not cover this wide remit here, but focus on how combining different devices commonly found in classrooms might support (or detract from) collaboration. Classrooms usually involve a relatively large cohort, and multiple small groups, so we have to consider not just individual awareness, control and constraints, but how these factors work within groups, between one small group and another and across the whole public space. Children certainly recognise the potential for integrated use, as in the drawings in Fig. 5.2 of imagined tablet use in class, including a projection to an IWB, from a study by Mangafa et al. (2016).

A review of research into cross-device use (Brudy et al., 2019) suggests that interest has moved from the earlier work on large multiple displays (notably tabletops used with other devices, such as phones) to much more ad hoc, portable set-ups, enabled because of the ubiquity of mobile or semi-fixed internet-enabled devices such as phones, tablets and laptops. In the Brudy et al. categorisation, studying technology for collaboration


Fig. 5.2 Children’s depictions of tablet use in class with large screen (left) and snack time (right) (Source Mangafa [2021])

in classrooms involves co-located people using technologies that are fixed (e.g. an IWB), semi-fixed (e.g. laptop or desktop, usually plugged in) and mobile (e.g. tablet or phone). These devices will be used in personal space (i.e. one person—one device), in social space (e.g. a small group each with a device, or sharing a device, on a shared desk space) and in public space (the IWB, available to all in the room). Much of this research takes place with adult education in mind, notably in university teaching, and is often focused on engaging students in lecture settings (so with fixed seating), in an attempt to be more interactive, as in apps for polling that rely on web connections. School classrooms typically involve much more fluid movement, and our interest here is in collaborative group work rather than lecturing.

Unfortunately, work in HCI suggests considerable barriers when using multiple devices. In particular, there seems to be a ‘legacy bias’: when people have multiple ‘personal’ devices such as tablets to work with in small groups, they sometimes continue to use them individually, rather than adapting their work pattern to make use of the new capabilities (Plank et al., 2017). Further, many teachers will recognise only too well the frustrating organisational factors such as lack of sufficient training, password management, keeping items charged, firewall restrictions, outdated or incompatible software versions, problems with bringing in your own device, and the shared ownership of large devices that may create a lack of individual responsibility—some of which negatively affected even a project as well-resourced as the use of shared interactive displays in the NASA Mars Exploration Rover mission (Huang et al., 2007). Brudy et al. (2019, p. 12) point out that “commercial attempts at cross-device computing are limited to a single user managing their personal device ecology within a particular manufacturer’s ecosystem,
with little support for real collaborative activities”. In classrooms, as we have seen, large shared surfaces can push towards whole-class teaching, rather than supporting collaboration in small groups. One vision for collaborative multi-device working is a digital version of a group working together with multiple pieces of paper documentation that can be easily shared, passed round and annotated. Marquardt et al. (2012) describe an experimental system whereby users of personal devices could transfer material between these and a larger shared space such as an IWB with a ‘collaborative handoff’ method: person A starts a gesture of moving an item from their personal device and person B continues this to move the item to B’s own device. Both users are aware of the transfer, both consent when control is handed over, and the action can be reversed. These options appear only for people physically close to each other, to avoid the confusion of a whole classroom of devices being offered for potential transfer, though the authors suggest other systems that might operate with duplicated content across multiple devices.
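The handoff sequence described by Marquardt et al. can be thought of as a small consent-based protocol: the sender offers, the transfer completes only when the receiver continues the gesture, and it remains reversible. The sketch below is purely illustrative; the class and method names are my own, not from their system.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Device:
    """A personal or shared device holding some items."""
    name: str
    items: List[str] = field(default_factory=list)


class HandoffSession:
    """One pending transfer between two co-located devices.

    The item moves only when the receiver continues the sender's gesture,
    so both parties take a visible, deliberate action; the transfer can
    then be reversed, mirroring the 'aware, consenting, reversible'
    properties described in the text.
    """

    def __init__(self, sender: Device, receiver: Device, item: str,
                 in_proximity: bool):
        if not in_proximity:
            # Offers are only shown to physically nearby devices.
            raise ValueError("handoff offers require proximity")
        if item not in sender.items:
            raise ValueError("sender does not hold this item")
        self.sender, self.receiver, self.item = sender, receiver, item
        self.completed = False

    def receiver_continues(self) -> None:
        # Person B continues the gesture: only now does the item move.
        self.sender.items.remove(self.item)
        self.receiver.items.append(self.item)
        self.completed = True

    def undo(self) -> None:
        # The transfer is reversible: return the item to the sender.
        if self.completed:
            self.receiver.items.remove(self.item)
            self.sender.items.append(self.item)
            self.completed = False
```

For example, moving a photo from a tablet to the IWB would require creating a session and having the receiver complete it; until `receiver_continues` runs, nothing has changed on either device.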

5.6 Research Designs for Cross-Device Collaboration

There have been several experimental systems with cross-device working designed for classrooms. Higgins and colleagues, in the SynergyNet project, built a full experimental classroom environment at Durham University combining a large IWB with small-group multi-touch tables, to which schoolchildren were invited for lessons. In multiple studies, they reported superior collaborative working compared to paper-based work, citing mechanisms such as better joint attention, more inter-student than student–teacher interaction, and more equitable control with tabletop devices (e.g. Mercier et al., 2017): all mechanisms that, as we have seen in previous chapters, shareable technology can support. Despite the success of this research set-up, this type of equipment is not generally available in schools. Liu and Kao (2007) compared group work by university students working in a classroom with individual tablets only, with networked tablets, or with networked tablets plus a shared wall-mounted screen. The first scenario meant that students could only lean over to see others’ work, the second allowed private-to-private transfer of information (one person could send their responses to the screen of another person in the
group) and the third allowed private-to-public, such that each person’s screen could be displayed together on the shared screen. Shared displays produced more equal participation of each group member: this was made possible because students could easily point to the shared display to explain differences, were more aware of partners’ actions and generally had a more mutual understanding of the workspace. Some multi-device systems have been tested in real classroom settings, although none seems to have led to permanent implementation. Nussbaum et al. (2009) developed a structured, teacher-managed approach, Collpad, in which each student had a separate device and the teacher a controlling device, using the Mercer et al. (1999) dialogic talk model described earlier. The authors implemented this in classrooms in the UK and Chile and reported positive responses to the trial. Kharrufa et al. (2013) took a different approach, with looser, more distributed control, using multiple non-networked digital tabletops over six weeks with two Year 8 (ages 12–13) classes. They used two cross-curricular applications: a group-work thinking tool, ‘Digital Mysteries’, and a collaborative writing app. They highlighted the importance, afforded by the tabletop, of making work processes visible to both students and teachers. The difficulties lay in two main aspects. First was the inability to identify individual users: both teachers and students wanted ways to know how much each child had contributed. Second, the teacher needed more control, for example the ability to freeze the tables to enable whole-class work without distraction, and more flexibility, given that children worked at different speeds on the task. Some of these difficulties were addressed by Stefan Kreitmayer (2015) in a series of three studies, all using large shared wall displays and several tablets in classroom-size groups.
In each case, the teacher could use a central control device to pause activity and to send new information between the small-group and whole-class devices, moving between whole-class and small-group phases. In the 4 Decades design, two teams of adults undertook a climate management challenge. There were three large wall displays and eight tablets. The second implementation, UniPad, was for classrooms of pre-university students to learn about personal financial management. This used the classroom IWB and one tablet per group of around seven students. The final study, Comfy Birds, involved my own lab, with a group of eight autistic primary-age children in a language comprehension activity, using one tablet per pair and the class IWB. The teacher had a tablet that represented each stage of the


Fig. 5.3 Screens for the Comfy Birds app (Note [left to right] small-group tablet screen where a bird [in the small group’s own colour] leaves the sofa to be placed on the chosen word ‘salad’; birds of different colours from each of 4 small-group tablets fly up into the large shared display; and the shared display shows all four different groups’ answers for comparison)

task, and enabled each group’s tablets to be made inactive after the small-group discussion, to shift attention to whole-class work, moving material from each of the tablets to be represented together on the shared display (Fig. 5.3). In each case, Kreitmayer considered how participants communicated information, managed shared control and developed shared understanding. The teacher’s tablet controlled movement of material between the tablets and the main displays, giving clear structure to the activities, which shifted between whole-class attention to the large display (when small-group tablet displays were immobilised) and small-group work around a single tablet, which the small group could control. A particularly effective feature for managing attention in Comfy Birds emerged from each group having a distinctively coloured bird as its avatar. When control moved from the small-group devices to the whole-class setting, each group’s bird fluttered up from the top border of the tablet to appear flying upwards from the lower edge of the shared IWB, with audible cheeping that beautifully choreographed the children’s gaze up from the small tablet to the shared display, securing the shared attention of the class. The shared display represented each group’s answers, so that groups could compare their answer (potentially anonymously) with those of other groups, and the teacher could support a whole-class discussion of the merits of different responses. The shifts between small-group and whole-class working, through tasks completed in a series of ‘rounds’, gave groups some freedom to work at their own pace, but also maintained regular whole-class check-ins and comparisons, so that groups could learn from each other. The teacher was free to walk
round the small groups to provide support and reflection tailored to each group, and could then encourage reflection on, and comparison of, different solutions across groups. Kreitmayer’s aim was to use easily-available classroom technology running in a web browser, without the need to install software or to learn complex new software.
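The orchestration pattern running through these studies (a teacher control device that freezes the group tablets, gathers their material onto the shared display for whole-class comparison, then hands control back for the next round) can be sketched in a few lines. This is a hypothetical illustration of the pattern, not code from UniPad or Comfy Birds; all names are invented.

```python
class GroupTablet:
    """A small-group device identified by its group's avatar colour."""

    def __init__(self, colour: str):
        self.colour = colour
        self.answer = None
        self.locked = False

    def submit(self, answer: str) -> None:
        # Groups can only act during the small-group phase.
        if self.locked:
            raise RuntimeError("tablet is frozen during the whole-class phase")
        self.answer = answer


class TeacherController:
    """The teacher's tablet, orchestrating phase shifts between
    small-group work and whole-class attention to the shared display."""

    def __init__(self, tablets):
        self.tablets = tablets
        self.shared_display = {}  # colour -> answer, shown on the IWB

    def start_whole_class_round(self) -> None:
        # Freeze the small-group tablets and move each group's answer
        # up to the shared display, shifting attention to the IWB.
        for t in self.tablets:
            t.locked = True
            self.shared_display[t.colour] = t.answer

    def start_small_group_round(self) -> None:
        # Clear the shared display and hand control back to the groups.
        self.shared_display.clear()
        for t in self.tablets:
            t.locked = False
            t.answer = None
```

The design choice worth noting is that control is exclusive by phase: either the groups can act or the shared display holds attention, never both, which is what structured the children’s gaze shifts in Comfy Birds.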

References

Baines, E., Blatchford, P., & Chown, A. (2007). Improving the effectiveness of collaborative group work in primary schools: Effects on science attainment. British Educational Research Journal, 33(5), 663–680.
Blatchford, P., Galton, M., Kutnick, P., & Baines, E. (2005). Improving the effectiveness of pupil groups in classrooms (Final Report to ESRC, L139 25, 1046).
Blundell, R. (n.d.). How schools spend their money on IT. Retrieved December 1, 2020, from https://commercial.co.uk/schoolspendingedtech/.
Brudy, F., Holz, C., Rädle, R., Wu, C.-J., Houben, S., Klokmose, C. N., & Marquardt, N. (2019). Cross-device taxonomy: Survey, opportunities and challenges of interactions spanning across multiple devices. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–28). ACM.
Crook, C. (1996). Computers and the collaborative experience of learning. Psychology Press.
Cukurova, M., Luckin, R., Millán, E., & Mavrikis, M. (2018). The NISPI framework: Analysing collaborative problem-solving from students’ physical interactions. Computers & Education, 116, 93–109.
Davidsen, J., & Vanderlinde, R. (2016). ‘You should collaborate, children’: A study of teachers’ design and facilitation of children’s collaboration around touchscreens. Technology, Pedagogy and Education, 25(5), 573–593.
Dawes, L., Mercer, N., & Wegerif, R. (2000). Thinking together: A programme of activities for developing thinking skills at KS2. Questions Publishing Company.
Dillenbourg, P., & Jermann, P. (2010). Technology for classroom orchestration. In M. Khine & I. Saleh (Eds.), New science of learning (pp. 525–552). Springer.
Flewitt, R., Messer, D., & Kucirkova, N. (2015). New directions for early literacy in a digital age: The iPad. Journal of Early Childhood Literacy, 15(3), 289–310.
Garte, R. R. (2015). Intersubjectivity as a measure of social competence among children attending Head Start: Assessing the measure’s validity and relation to context. International Journal of Early Childhood, 47(1), 189–207.
Heinrich, P. (2012). The iPad as a tool for education: A study of the introduction of iPads at Longfield Academy, Kent. NAACE.
Higgins, S. E., Mercier, E., Burd, E., & Hatch, A. (2011). Multi-touch tables and the relationship with collaborative classroom pedagogies: A synthetic review. International Journal of Computer-Supported Collaborative Learning, 6(4), 515–538.
Higgins, S. (2010). The impact of interactive whiteboards on classroom interaction and learning in primary schools in the UK. In M. Thomas & E. Cutrim Schmid (Eds.), Interactive whiteboards for education: Theory, research and practice (pp. 86–101). IGI Global.
Higgins, S., & Siddle, J. (2016). New technology. In D. Wyse & S. Rogers (Eds.), A guide to early years and primary teaching. Sage.
Hornsby, G. G. (2010). Computer support for children’s collaborative storymaking in the classroom. In K. Mäkitalo-Siegl, J. Zottmann, F. Kaplan, & F. Fischer (Eds.), Classroom of the future (pp. 115–139). Brill Sense.
Huang, E. M., Mynatt, E. D., & Trimble, J. P. (2007). When design just isn’t enough: The unanticipated challenges of the real world for large collaborative displays. Personal and Ubiquitous Computing, 11(7), 537–547.
Jakonen, T., & Niemi, K. (2020). Managing participation and turn-taking in children’s digital activities: Touch in blocking a peer’s hand. Social Interaction. Video-Based Studies of Human Sociality, 3(1).
Kerawalla, L. (2015). Talk Factory generic: Empowering secondary school pupils to construct and explore dialogic space during pupil-led whole-class discussions. International Journal of Educational Research, 70, 57–67.
Kerawalla, L., Petrou, M., & Scanlon, E. (2013). Talk Factory: Supporting ‘exploratory talk’ around an interactive whiteboard in primary school science plenaries. Technology, Pedagogy and Education, 22(1), 89–102.
Kharrufa, A., Balaam, M., Heslop, P., Leat, D., Dolan, P., & Olivier, P. (2013). Tables in the wild: Lessons learned from a large-scale multi-tabletop deployment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1021–1030). ACM.
Kreitmayer, S. (2015). Designing activities for collaboration at classroom scale using shared technology. Doctoral dissertation, The Open University.
Kyriakou, A., & Higgins, S. (2016). Systematic review of the studies examining the impact of the interactive whiteboard on teaching and learning: What we do learn and what we do not? Preschool and Primary Education, 4(2), 254–275.
Lechelt, Z., Rogers, Y., Yuill, N., Nagl, L., Ragone, G., & Marquardt, N. (2018). Inclusive computing in special needs classrooms: Designing for all. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 11–22). ACM.
Lew, M., Mesch, D., Johnson, D. W., & Johnson, R. (1986). Positive interdependence, academic and collaborative-skills group contingencies, and isolated students. American Educational Research Journal, 23(3), 476–488.
Li, S. C., Pow, J. W. C., Wong, E. M. L., & Fung, A. C. W. (2010). Empowering student learning through Tablet PCs: A case study. Education and Information Technologies, 15(3), 171–180.
Lin, C., Wong, L., & Shao, Y. (2012). Comparison of 1:1 and 1:m CSCL environment for collaborative concept mapping. Journal of Computer Assisted Learning, 28(2), 99–113.
Liu, C. C., & Kao, L. C. (2007). Do handheld devices facilitate face-to-face collaboration? Handheld devices with large shared display groupware to facilitate group interactions. Journal of Computer Assisted Learning, 23(4), 285–299.
Major, L., Warwick, P., Rasmussen, I., Ludvigsen, S., & Cook, V. (2018). Classroom dialogue and digital technologies: A scoping review. Education and Information Technologies, 23(5), 1995–2028.
Mangafa, C., Moody, L., Woodcock, A., & Woolner, A. (2016). The design of guidelines for teachers and parents in the use of iPads to support children with autism in the development of joint attention skills. In A. Marcus (Ed.), Design, user experience, and usability: Novel user experiences (pp. 178–186). Springer.
Marquardt, N., Ballendat, T., Boring, S., Greenberg, S., & Hinckley, K. (2012). Gradual engagement: Facilitating information exchange between digital devices as a function of proximity. In Proceedings of the 2012 ACM International Conference on Interactive Tabletops and Surfaces (pp. 31–40). ACM.
Mercer, N., Hennessy, S., & Warwick, P. (2010). Using interactive whiteboards to orchestrate classroom dialogue. Technology, Pedagogy and Education, 19(2), 195–209.
Mercer, N., Hennessy, S., & Warwick, P. (2019). Dialogue, thinking together and digital technology in the classroom: Some educational implications of a continuing line of inquiry. International Journal of Educational Research, 97, 187–199.
Mercer, N., & Hodgkinson, S. (2008). Exploring talk in school: Inspired by the work of Douglas Barnes. Sage.
Mercer, N., Wegerif, R., & Dawes, L. (1999). Children’s talk and the development of reasoning in the classroom. British Educational Research Journal, 25(1), 95–111.
Mercier, E., Vourloumi, G., & Higgins, S. (2017). Student interactions and the development of ideas in multi-touch and paper-based collaborative mathematical problem solving. British Journal of Educational Technology, 48(1), 162–175.
National Literacy Trust. (2019). Lack of access to technology in schools is holding pupils back. Retrieved May 10, 2021, from https://literacytrust.org.uk/news/lack-access-technology-schools-holding-pupils-back/.
Nussbaum, M., Alvarez, C., McFarlane, A., Gomez, F., Claro, S., & Radovic, D. (2009). Technology as small group face-to-face collaborative scaffolding. Computers & Education, 52(1), 147–153.
PISA. (2017). 2015 results—Volume V: Collaborative problem solving. Retrieved May 10, 2021, from https://doi.org/10.1787/9789264285521-en.
PISA. (2019). PISA 2018 results: Combined executive summaries. Retrieved May 10, 2021, from https://www.oecd.org/pisa/Combined_Executive_Summaries_PISA_2018.pdf.
Plank, T., Jetter, H.-C., Rädle, R., Klokmose, C. N., Luger, T., & Reiterer, H. (2017). Is two enough?! Studying benefits, barriers, and biases of multi-tablet use for collaborative visualization. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 4548–4560). ACM.
Promethean. (2020). The state of technology in education report.
Rogers, Y., & Lindley, S. (2004). Collaborating around vertical and horizontal large interactive displays: Which way is best? Interacting with Computers, 16(6), 1133–1152.
Rogers, Y., Shum, V., Marquardt, N., Lechelt, S., Johnson, R., Baker, H., & Davies, M. (2017). From the BBC micro to micro:bit and beyond. Interactions, 24(2), 74–77.
Wegerif, R., Mercer, N., & Dawes, L. (1998). Software design to support discussion in the primary curriculum. Journal of Computer Assisted Learning, 14(3), 199–211.
Westendorf, L., Shaer, O., Varsanyi, P., van der Meulen, H., & Kun, A. L. (2017). Understanding collaborative decision making around a large-scale interactive tabletop. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW), 1–21.
Yuill, N., Rogers, Y., & Rick, J. (2013). Pass the iPad: Collaborative creating and sharing in family groups. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 941–950). ACM.
Zagermann, J., Pfeil, U., Rädle, R., Jetter, H.-C., Klokmose, C., & Reiterer, H. (2016). When tablets meet tabletops: The effect of tabletop size on around-the-table collaboration with personal tablets. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 5470–5481).

CHAPTER 6

Autism and Technology for Collaboration

Abstract Collaborative technology design has differed for autistic children who are verbally expressive and those who are minimally verbal with a learning disability. A focus on constraining actions and tutor guidance is contrasted with more open-ended exploration of real or virtual space to support close interaction. Providing material of intrinsic interest to individual participants seems particularly crucial in autism. The case of autism challenges us to consider what collaboration means, foregrounds the role of bodily movement, and highlights a theoretical division between representational accounts and embodied perspectives for ‘participatory sensemaking’. Recent research on movement synchrony suggests this may play an important role in collaboration.

Keywords Constraints · Exploration · Language · Embodied perspective · Participatory sensemaking

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. N. Yuill, Technology to Support Children’s Collaborative Interactions, https://doi.org/10.1007/978-3-030-75047-3_6

Autism is a crucial lens through which to view technology for collaboration, for multiple reasons. First, there is a widely-held assumption that the ability to collaborate is impaired or absent in autism. Second is the common idea that autistic people have an affinity with technology: this makes it seem that technology should be an important tool in supporting collaborative activity, taking us back to my central point that digital devices can be designed to support interaction, not just ‘personal computing’. Third, autism is a field in which people have cogently challenged assumptions about what constitutes successful social interaction. Fourth, language, notably dialogue, has occupied a central role in accounts of collaboration (see especially Chapter 5): given that many children with autism have minimal verbal communication, this raises fundamental questions about the role of language in collaborative interactions. The model of collaboration through conversation is underpinned by the Vygotskian approach described in Chapter 1, where both inner speech and dialogue with others are seen as engines of cognitive and social development. If children communicate in ways other than spoken language, what could collaboration look like and how might we support it, if not through language? This chapter is in two distinct parts. First, I look at technology designed to support collaboration in more verbal autistic children, informed by the Co-EnACT collaboration framework already used, and then I review the smaller field of work with children who use little or no spoken language, including the role of tangible technologies. I have divided each section into technologies that work through constraining and guiding, and those that operate through more open-ended exploration. The final section sketches out some of the conceptual issues about how to define collaboration in light of the challenges posed by differences in interaction in autism, and discusses the role of talk and of embodied interaction in defining collaboration. Before turning to the technology review, there are some issues of definition to address.

6.1 Definitional Issues: Autism and Collaboration

Children with a primary need of autism are estimated to make up nearly one-third of the population of children who have a statement of special needs (education, health and care plan) in English schools (Department for Education, 2019), and 1.4% of children in US schools are reported as autistic (National Center for Education Statistics, 2019). The global percentage of autistic people who also have a learning disability is usually estimated at about 50% (Russell et al., 2019). Definitions of autism, and diagnostic criteria within the medical model of autism, are subject to change, and there is wide acceptance of it as a heterogeneous spectrum of conditions (Happé & Frith, 2020; Onaolapo & Onaolapo, 2017).


Nonetheless, characterisations of autism all feature the presence of atypical (and often an apparent lack of) social interest and social behaviour, and difficulties in communicating with others. This suggests collaboration will be a challenge. My own work has been informed by working and designing with and for autistic people, and motivated by the belief that collaboration is a crucial process driving social and cognitive development, hence the need to find ways of supporting it in groups where it seems to be different, or not apparent. Terminology is a live issue in the field of autism. I use the phrase ‘autism spectrum condition’ (ASC) rather than disorder, and occasionally switch between person-first and condition-first language, in recognition of the differing arguments put forward within the autism community about the language used to describe autism and the conception of it as a difference or a deficit (Kapp et al., 2013). The research I mention uses a wide variety of terms, some of which have caused controversy. I sometimes draw comparisons with typically-developing (TD) children. I also use the term ‘learning disability’ in relation to some autistic children (LDASC) in discussing work with children and young people who have reduced ability to understand complex information and, often, limited or no verbal communication. This term covers children who are both low-verbal and learning-disabled, and I have used a variety of terms such as low- and high-verbal. It does not address children who may be intellectually very able but who do not communicate verbally. The argument in this chapter is also not intended to deny children the option of sometimes disappearing into a solitary screen world for comfort: Wood (2019, p. 99) quotes an autistic boy: “I’m rather interested in my LEGO® and my computer…I feel somewhat detached from the world around me when I do these things”. Why might collaborative activity be important in autism?
I argued in Chapter 1 that collaboration is an important mechanism in development generally, as well as being useful in its own right. Moll and Tomasello (2007) encapsulated the idea of collaboration as a mechanism of change in their Vygotskian intelligence hypothesis. This is the idea that regular participation in cooperative interactions through development is a mechanism through which children develop social cognitive abilities. For example, playing a chasing game enables role-switching between chaser and chased, while shared reading can involve turn-taking. Each partner in collaboration can take on complementary roles, which some view as leading to the development of ‘dialogic representations’—being able to
see something from your own perspective as well as from the viewpoint of another person. The philosophical literature into joint action emphasises that people acting together first engage by making an implicit commitment to a joint goal, helping them to achieve those goals. Joint action is seen as the foundation for behaving according to wider collective norms and social institutions, such as moral systems. Hence, collaboration is seen as a crucial foundation for social participation as well as for learning. Finding ways to support collaboration in autistic children involves creating opportunities for learning and development that could otherwise be inaccessible. Collaboration, as defined in the Co-EnACT framework (Chapter 1), involves developing and dynamically maintaining the defining feature of shared understanding through shared engagement and attention leading to shared control, which supports sequences of contingent action. This framework implicitly underlies the following review of work in more verbal children, but is applied only more loosely to the work with learning-disabled children. There are different approaches taken in how to use technology to support children in developing collaborative behaviour. One approach involves explicitly teaching behaviour and strategies, largely through verbal guidance and explanation. Another is to design environments where children can explore possibilities of acting with others, in ways that are more or less tightly shaped, or constrained, towards encouraging collaboration (see Yuill & Rogers, 2012). For example, we could tell children to stand in line as playtime ends, and to come into the school one at a time, or we could put in a turnstile that only enables one person to enter at a time. 
The first encourages children to have an explicit, verbalisable understanding of why it is sensible to behave according to the one-person rule, and to be able to generalise that behaviour to other situations, to judge others’ behaviour according to the same rule and to explain why, through explicit verbal reasoning. In the second scenario, by designing the environment to support one form of behaviour and constrain others, we give the child practical experience of turn-taking to support an implicit pattern of behaviour that could generalise to other situations, even if the child cannot explicitly verbalise this. Yet another approach is to design technologies to support free-ranging exploration, for children to discover for themselves different ways of acting with others, perhaps using rewarding effects to encourage some pathways more than others. The review below addresses technologies that guide and
constrain separately from those that encourage exploration, in turn for more- or less-verbal children. Because of the resources needed to run such studies, sample sizes tend to be small, and durations of studies short.

6.2 Guiding Collaboration with Verbally-Expressive Autistic Children

This category has probably generated the most projects involving collaborative technology in autism, presumably based on the assumption that more-verbal children on the autism spectrum can be supported to collaborate both by explicit teaching and guidance and by amplifying the same factors used to support collaboration in typically-developing groups. Just as in the general literature, tabletops have been a popular device because of the ways they can support co-working, as explained in earlier chapters, for example with the space and control mechanisms to support awareness and shared control. Approaches in this vein often use constraints to support more contingent behaviour. The multi-site COSPATIAL project (Communication and Social Participation: Collaborative Technologies for Interaction and Learning) used such a model, both with multi-touch tables and in virtual environments, to support collaboration and communication among relatively high-verbal autistic children. Thus, Gal et al. (2009) developed StoryTable, asking three pairs of autistic 8–10-year-olds to co-create story narratives. Choices such as background and topic had to be selected by both partners acting simultaneously, using the multi-touch feature. The authors reported some increases in children’s social initiations and shared play after experience with the game, and some of the children showed lower frequencies of repetitive behaviour than when playing a less structured marble maze construction game. Bauminger-Zviely et al. (2013) used a more didactic approach, together with cognitive behavioural therapy, to teach collaboration skills using multi-touch surfaces with 22 8–10-year-olds. Their six-session JoinIn game required skills such as joint planning and resource-sharing, and No-Problem taught conversation skills.
The authors reported greater reflective understanding of collaboration in students, although changes in general social behaviour were less clearly evidenced. Another project partner, Battocchi et al. (2009), constructed a particularly challenging Collaborative Puzzle tabletop game, which required pairs of children to drag jigsaw puzzle pieces together into the correct locations, presumably requiring high levels of movement synchrony (see Sect. 6.5). They
compared 35 pairs of 9-year-old TD children and eight pairs of autistic children aged 8 to 18, playing an ‘enforced collaboration’ version requiring joint movement and a free-play condition where players could move pieces independently. TD children spent much longer in the ‘enforced’ condition and showed higher rates of interaction (number of interactions divided by time), although the rate of coordinated moves (such as tapping a piece to signal to the partner to attend) was no greater than in the free-play version. The autistic children also spent longer and showed more simultaneous activity in the enforced version (not surprising, given this was enforced) but, unlike the TD group, also showed a higher rate of coordinated moves in this version. The authors suggested this reflected these children’s difficulties in the coordination and negotiation required. Parallel with these COSPATIAL tabletop projects were activities designed in collaborative virtual environments (CVEs). Such ‘disembodied’ environments enable users to meet virtually, as avatars, and have been popular in design for autism, because they are seen as offering non-threatening simulations of real environments and interactions at a distance, which could feel more comfortable than closer encounters for some people. Because of this book’s slant towards embodied aspects of interaction, I do not address the many debates around virtual reality, but Parsons (2015, 2016) provides cogent arguments about its use with autistic individuals. Given my focus on interpersonal action, I have also not covered the use of robots as interaction agents (e.g. see Alcorn et al., 2019). We’ve already seen in Chapter 3 how smaller multi-touch surfaces (tablets) could in principle support shared activity, but their size, their ‘personal’ design and the dominance of single-user apps can work against this, especially in light of evidence that autistic children find tablets particularly attractive (Lorah et al., 2015). Boyd et al.
(2015) reported that a group of 8–11-year-old autistic children expressed their clear dislike of the close physical proximity necessitated by sharing a small tablet. Commercial examples of tablet activities explicitly designed for more than one user are rare, compared with the possibility of two people sharing a game designed for one, but Boyd et al. reported some evidence of the potential for shared working in multi-user apps. They investigated a suite of collaborative games, Zody, with four pairs of 8–11-year-old autistic participants in a special education class who used some verbal communication. The Zody activities included serial, synchronised and symmetrical actions, with some games encouraging division of labour,

6

AUTISM AND TECHNOLOGY FOR COLLABORATION

111

such as throwing fruit at, or running after, a gopher; others tapped action synchrony, such as tilting the tablet to get a character through a maze; and others called on coordinated interaction, such as a treasure-digging game where one child controls the vertical position on a grid and the other the horizontal, with simultaneous button-pressing needed. The authors noted that as children became more familiar with each other over the four weeks of the study, there were changes in how they coordinated, from children counting aloud ‘1–2–3-go’, either one leader or both together, to nonverbal means such as a glance or a touch on the arm. Some games went beyond a single turn-taking choice to include actions where players were mutually dependent on each other over time. The authors noted that one child could dominate and there seemed to be no design features to avoid this happening. In summary, this literature suggests that approaches using features similar to those we’ve seen with TD children can be used with verbal autistic children, although the evidence for generalisation to other interactions seems not to be especially compelling, even in the rare event that the study resources allow the technology to be made available over a period of weeks. I did not find work that addressed the nature of dialogue surrounding tabletop activities, with the sort of analysis covered in Chapter 5. There is little description of the amount and type of help provided by teaching staff, but the above studies all involved such support. Questions also remain about whether and when mixed groups of autistic and TD children might be effective. It would be useful to have more research contrasting one technology design with another, in order to understand which design features are effective in supporting collaboration.
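The mutual dependence in the treasure-digging game, where each child controls one axis and digging requires simultaneous presses, can be sketched in a few lines. This is a purely illustrative reconstruction: the class, method names and time window are my assumptions, not details of Zody’s actual implementation.

```python
from dataclasses import dataclass, field

# Assumed window (seconds) within which both children's presses must land.
DIG_WINDOW = 0.5

@dataclass
class DigGame:
    x: int = 0
    y: int = 0
    press_times: dict = field(default_factory=dict)

    def move(self, player: str, delta: int) -> None:
        # Control is split: each player can change only one axis.
        if player == "horizontal":
            self.x += delta
        elif player == "vertical":
            self.y += delta

    def press_dig(self, player: str, t: float) -> bool:
        """Record a dig press; succeed only if both pressed within the window."""
        self.press_times[player] = t
        other = "vertical" if player == "horizontal" else "horizontal"
        if other in self.press_times and abs(t - self.press_times[other]) <= DIG_WINDOW:
            self.press_times.clear()
            return True   # joint action succeeds at (x, y)
        return False      # a solo press does nothing

game = DigGame()
game.move("horizontal", 2)
game.move("vertical", 3)
game.press_dig("horizontal", t=10.0)       # alone: nothing happens
print(game.press_dig("vertical", t=10.3))  # within window: prints True
```

The point of the design is visible in the code: neither child can reach the goal alone, so attending to the partner’s timing is built into the mechanic rather than merely encouraged.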

6.3

Encouraging Collaboration Through Exploration, with Verbally-Expressive Autistic Children

A different approach has been adopted by a group of researchers in Barcelona, with their Kinect-based collaborative interaction game Pico’s Adventure, and an Augmented Reality game, Lands of Fog (Mora-Guiard et al., 2017). Both games included elements of open-ended exploration, so as to foster children’s autonomy in approaching others, while simultaneously encouraging collaboration through providing rewarding

112

N. YUILL

effects if players acted together (rather like the flying jackets described in Chapter 3). In Pico’s Adventure, the authors argued for the value of letting children explore in an open-ended way “to understand the extent of their control over the virtual environment… thus avoiding a chaotic introduction to a virtual environment filled with multiple users” (Crowell et al., 2019, p. 100). The game moves, across a sequence of sessions on different days, from the child socialising with just the virtual character Pico, as a low-anxiety context, to being offered the possibility of interactions that require help from a parent. This is followed by opportunities for the child to play collaboratively with the adult, and then with an autistic child as partner. Crowell et al. reported notable success in increasing interaction between the children and their game partner, for a group of 10 verbal autistic children. Lands of Fog takes place in a 6-m-diameter, floor-projected virtual world covered in a dense virtual fog, with objects revealed through peepholes. Children have a physical butterfly net with which they can catch virtual fireflies. The game then moves on to offer effects that make joint action attractive, through the possibility of introducing new creatures when interacting at close enough proximity with another player’s creature, and animations that are only activated when two players work together. Further features of the game, such as surprising and unexpected responses from the system, were designed to support shared engagement. Ten verbal autistic children aged 10 to 14 used the system over three separate sessions. Social actions by the children, such as initiations and shared actions, were recorded; specific joint actions, such as creating new creatures, were only possible through the coordinated action of two players. These actions increased in frequency over the sessions. 
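The reward-for-coordination principle, where an attractive effect fires only when two players bring their creatures close enough together, can be sketched like so. The function name and the proximity threshold are hypothetical, not the actual Lands of Fog implementation.

```python
import math

# Assumed proximity threshold (metres) within the floor projection.
PROXIMITY = 1.0

def maybe_spawn_creature(p1, p2):
    """Return a new creature's position if the players' creatures are close enough."""
    if math.dist(p1, p2) <= PROXIMITY:
        # Joint effect: spawn a new creature midway between the two.
        return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
    return None  # no reward for solitary play

print(maybe_spawn_creature((0.0, 0.0), (3.0, 0.0)))  # too far apart: None
print(maybe_spawn_creature((0.0, 0.0), (0.6, 0.0)))  # close: (0.3, 0.0)
```

Unlike the enforced-collaboration designs above, nothing here compels the children to approach each other; the system simply makes the most interesting outcomes contingent on doing so.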
Further development of this work has extended the investigation of mechanisms of change by comparing with non-digital interventions and using computer log data and physiological measures (Crowell et al., 2020). The authors contrast the strong constraints of Pico’s Adventure with the more exploratory approach of Lands of Fog: the former required more intervention from a therapist to prevent frustration, while the latter enabled more creative exploration, and hence, they argue, fostered greater autonomy. It would be valuable to know whether supporting autonomy would lead to more spontaneous collaborative interaction outside the game context. Open-ended approaches have also been used in the more readilyavailable technology of tablet games. Hourcade et al. (2013) created an

Open Autism Software suite, developing a set of tablet games with autistic children across mainstream and special school settings. The games were designed to be open-ended (i.e. with multiple ways to interact) and errorfree (i.e. with no right or wrong answers). The authors explain their approach is to “entice children to engage in positive face-to-face interactions… to help children practice social skills in activities they enjoy, where face-to-face interactions are desirable” (ibid., p. 3199). The games included tools to support collaborative storytelling through drawing, collaborative music-making, joint puzzle-solving and emotion-matching. The designers compared these to non-digital versions of similar activities in a group of eight autistic 10–14-year-olds with varying levels of verbal ability and social skills. The digital activities produced more verbal interactions than the non-digital, as well as more frequent physical interactions: these latter included turn-taking and joining in with another child’s turn. The authors reflect that the attraction of technology helped children to feel more confident and less anxious, supporting high levels of engagement, and meaning that shared enjoyment could arise naturally. Several of the apps seem to involve turn-taking, which might push more for cooperative than collaborative interaction (see Chapter 1), but the Untangle app, in which players arrange interconnected dots so that no lines cross, clearly supports simultaneous coordinated action. Constraining control is difficult with tablets, which operate as user-unaware, simultaneous multitouch surfaces, as discussed in Chapter 3: in Sect. 6.5 below I discuss a strategy of using two Wi-Fi-connected tablets to identify each user’s touch.

6.4

Encouraging Collaboration Through Exploration, with Minimally-Verbal Autistic Children

Clearly, research involving minimally-verbal children will rely more on coordinated action and gesture than on discussion: I address some theoretical implications of this distinction in Sect. 6.6. Autistic children can sometimes appear oblivious to others, not acting contingently on others’ behaviour or sharing attention. Shared understanding is difficult to gauge in this group because the concept relies so heavily on verbal mediation. Some researchers working with minimally-verbal autistic children have understandably argued strongly for an open-ended exploratory approach, in

contrast to the more constrained approaches discussed in Sect. 6.2. Open-ended approaches often begin with the aim of drawing children into an activity alongside peers, relying on the kinds of honeypot effects and entry points discussed in Chapter 2. Use of tangibles has been particularly common in this group, unsurprising given that interaction with digitally-augmented physical objects can operate as much through physical movement as through language. Keay-Bright and Howarth (2012) worked with LDASC children and young people using Reactickles software, which involves creating sensory experiences, such as lighting effects, through manipulating different controlling objects (e.g. keyboards, microphone, touchscreen and mobile devices). They argued for “a vision of interaction as a playful, emergent experience rather than a predefined routine for maintaining order… to arouse curiosity and to make interaction irresistible” (ibid., pp. 130–131). Some of this work focused on interactions of an individual child with objects, but also involved children playing with the tangibles in pairs with an autistic peer. The authors describe episodes where children acted together to create visual effects, or checked to see whether others had noticed effects they had made, where beforehand some had shown little attention to the reactions of their peers. Other studies with tangibles and children with LDASC also suggest that an open-ended approach can support shifts towards closer engagement. In Chapters 1 and 4, we looked at whether free play with an Augmented Knights’ Castle (AKC) toy could support cooperative play in TD children. A study using the same playset with autistic children in a special school (Farr et al., 2012) suggests that exploration with physical objects can support routes to greater social engagement. 
Four triads of autistic children aged 9 to 13 at a special school showed more social forms of play when playing with the augmented than the non-augmented version of the toy. Bearing in mind the caution required by the small sample size, the pathways to engagement and the transitions between play states differed between ASC children using the augmented and non-augmented versions, and also differed from the patterns we had found in TD children. In the study with TD children (Yuill et al., 2014), parallel play acted as a pathway either to greater joint engagement (with the AKC) or less engagement (with the KC). Parallel play was not such an important transition point for the autistic children. For these children, when playing with the unaugmented playset, both cooperative and onlooker play were very likely to be followed by solitary activity. In contrast, with the AKC,

cooperative play led as often to onlooker play as to solitary activity, and cooperative play itself could be readily reached through a variety of other states—onlooker, parallel and associative play (a more engaged form of parallel play). Disengagement often switched to looking on, sometimes as a result of a sound being played, perhaps similar to the engaging role of sound for TD children, for whom it led more directly to cooperative play. The AKC provides multiple entry points (many figures that can be chosen) and a clear link of cause and effect (making a sound when moving to a specific location), both qualities shared by another tangible toy we investigated. Topobo consists of parts of a creature (thorax, tail etc.) that users can clip together and then, using a simple programmable connecting piece, record and save particular patterns of movement for the resulting creature. Farr et al. (2010) compared two groups of three autistic boys aged 8 to 11 playing with either Topobo or a set of plain LEGO® pieces. Autistic children playing with Topobo showed more parallel and less solitary play than in their interactions with the construction bricks. Topobo creature parts were plentiful, enabling each child to connect pieces on their own, but there was only a single connecting piece between three children, meaning that negotiations or collective actions were needed if a child wanted to create and record movement. These studies suggest that tangibles providing attractive auditory or visual effects in predictable ways do support closer engagement of autistic children with their peers and can engender more social forms of play and exploration. The contrast of Topobo with LEGO® might seem surprising to those who know of the success of LEGO® therapy (Legoff & Sherman, 2006), but it’s important to note that the bricks used in the Topobo study were provided just for unstructured free play. 
LEGO® therapy involves a careful structure with ground rules, in which children take specific, interlocking roles, such as supplier, engineer and builder. Evidence from therapy over several months suggests increases in social interaction and social contact, as well as a broader range of reported social skills over years. This emphasis on structure brings us to the final section of research with minimally-verbal autistic children, which involves strong guidance and constraints.

6.5

Guiding Collaboration Through Constraint in Minimally-Verbal Autistic Children

In a systematic review of technology for collaboration in autism, Silva-Calpa et al. (2020) highlighted the lack of guided technology designs

suitable for autistic children with little or no verbal communication. This same research group from Rio de Janeiro described two systems they built with such groups in mind, both using high levels of structured guidance. ComFiM (Ribeiro & Raposo, 2014) uses images based on Picture Exchange Communication System (PECS), commonly used by less-verbal children for communication. In a small-scale study, two pairs of autistic children, of 5 years and 11 years respectively, each held a tablet in front of a large screen. In weekly sessions over nine weeks, the children were guided through increasingly complex steps. First, children were shown how to exchange a message with the tutor avatar on the large screen. The tutor then mediated a communication between the two child players, asking player 1 to request an item from player 2, and then reversing their roles. Finally, the two players requested and exchanged objects directly, in the service of a common goal. The authors gave detailed descriptions of each child’s behaviour with the system over the extended time period, and argued that it guided children to achieve some level of collaborative communication. CoASD is a tabletop game for pairs, involving guiding a car round a route into a garage, through various problems such as lack of fuel and holes in the road (Silva-Calpa et al., 2018). Players have to communicate with each other to find help and to coordinate actions to solve problems (e.g. one needs to hold a bridge over a gap while the other drives the car over it). Analysis of play sessions involving seven autistic boys ranging from 5 to 14 years of age found that they exhibited a range of social behaviours such as looking at the other, requesting, responding, encouraging and jointly celebrating. As no control condition is reported, it’s not clear how much these behaviours were novel for the children, but the authors report positive comments by the children’s therapists about the system’s support for collaboration. 
Opportunities for autistic children to initiate joint attention can also be created through failure of technology, as described by Alcorn et al. (2014) in the ECHOES project to design intelligent virtual environments supporting social communication in autism. Occasional unpredictable software errors produced violations of expectation, and this could prompt children to comment, point or share laughter with the supervising researcher, bringing shared engagement and enjoyment. The final example in this section is an adaptation of the strongly-constrained SCoSS paradigm first described in Chapter 3 to meet the needs of learning-disabled autistic children. We realised this structured

set-up could be useful in autism when we took our original desktop software to a primary school for children with special needs. The first pair of autistic children to take part completed the joint sorting task very quickly, sitting side-by-side and with little visible evidence that they were checking in with each other, sharing glances or other signs of joint attention that we were used to seeing in typically-developing children completing the same task. They did seem to be observing their partner’s actions on the other screen peripherally, which presumably supported their high level of coordination. It is important to note that we did not aim to train the children to behave in neurotypical ways, but observed ways that the children managed to engage with each other, presumably in styles they found comfortable. This experience led us to the development of the Chatlab Connect app for dual screens, designed to be especially accessible to autistic children with learning disabilities and little or no speech. Of course, touchscreen tablets also have an accessibility advantage over desktop-and-mouse for such children, who may also have motor impairments that make using such equipment difficult. Our studies with the app involve comparing the dual-screen set-up with the same task in a single-screen version, enabling us to assess what difference is made by specific design features of the technology (see Fig. 6.1). In the dual-screen version, children have their own space, and control over that, but the two children’s spaces are made contingent: each person’s space responds to the partner’s choices, explicitly represents points where children agree, and constrains actions at certain points: children have to agree before moving on.
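The agreement rule just described, where each child’s space is contingent on the partner’s and progress is blocked until their choices match, can be sketched as a minimal state machine. The class and names here are hypothetical, a sketch of the constraint rather than the Chatlab Connect code itself.

```python
# Minimal sketch of an agreement-gated sorting step (assumed names throughout).
class SharedSort:
    def __init__(self, item):
        self.item = item
        # Each child places the item in a category within their own space.
        self.choice = {"child_a": None, "child_b": None}

    def place(self, child, category):
        self.choice[child] = category

    def agreed(self):
        a, b = self.choice["child_a"], self.choice["child_b"]
        return a is not None and a == b

    def can_advance(self):
        # Constraint: the pair can only move on once placements agree.
        return self.agreed()

task = SharedSort("dog picture")
task.place("child_a", "animals")
print(task.can_advance())   # False: the partner has not matched yet
task.place("child_b", "animals")
print(task.can_advance())   # True: agreement reached, move on
```

The gate is what makes attending to the partner worthwhile: a child who ignores the other’s choices simply cannot progress, which is the ‘strong encouragement’ the text describes.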

Fig. 6.1 Chatlab Connect dual-tablet app, based on Shared Control of Separate Space (SCoSS)


We filmed pairs of autistic children (and teacher–autistic child pairs) seated side-by-side sorting pictures together with the app, and analysed their interaction according to the awareness each showed of the other person. We coded two types of awareness, comparing the dual display setup (Fig. 6.1) with a single display version (similar to the desktop comparison we made, see Fig. 3.3). Attentional awareness involves watching the partner’s activity on the game, for example pausing to track their finger moving an object on the screen. Active other-awareness is a clear indicator of shared attention, as well as contingency, in that the child responds in relation to the partner’s perceived or actual action. For example, a child might place their picture and then watch and wait until the partner places a matching picture, or they might imitate a partner’s action in order to create a successful match. Children cannot move on in the game until they achieve a successful match with their partner, providing a strong encouragement to attend to the partner’s actions and choices. Children using the dual-tablet set-up showed more awareness of both kinds, when working with both child and adult partners, than the same children did in a single-tablet set-up (Holt & Yuill, 2014, 2017). When 22 low-verbal autistic children aged 5 to 19, attending special schools, trialled the app daily over several weeks with their support worker, the children showed increased awareness and increased language use in the game itself, as well as significant increases in social initiations and responses on a separate assessment of their social communication with an experimenter, compared to a waiting list control group (Holt et al., in preparation). Further, the gains in social approach increased with greater time spent using the app. Attention to, and awareness of another person can therefore be achieved using technology, if appropriate constraints are in place to encourage collaborative action. 
Particularly important is evidence that the behaviour might be generalised to other interactions. It is important to mention another feature of this project without which we would not have seen the children’s high levels of engagement. It was crucial that the picture-sorting game involved images of interest to a particular child on a particular day: the app makes it easy to use bespoke materials. We learned from experience that autistic children engaged most effectively when the materials were their own choice and that this preference might differ from one day to another. Indeed, on one unfortunate school visit where staff had been unable to upload children’s preferred materials, meaning that we used generic material, we had a success rate of 0% in getting the children to agree to take part, compared to fairly

universal agreement in our larger study. Selection of images is also a useful prompt for adults working with the children to exercise their curiosity and ingenuity about what interests the children. In summary so far, both structured, constrained strategies and more exploratory designs have been used with autistic children across a range of verbal communication abilities. Designs for more verbal children have tended to be structured, and similar to those for TD children but with greater emphasis on and support for working together, on the assumption that children can be guided to show similar sorts of collaborative behaviour to TD children. Collaborative technologies designed with less-verbal children in mind are more frequently exploratory, often focusing on physical effects and causality, and with more open definitions of what might count as social connection. However, there is evidence that using constraints can also be effective for supporting awareness of the other in this group of children. This leads us on to a more critical perspective on what collaboration might mean in autism.

6.6

Autism and Collaboration: Altering Perspectives

The review of work in minimally-verbal autistic children above has mentioned ways that technology has supported engagement, communication, attention and contingency. However, this does not directly address full collaboration as defined in Chapter 1, notably the continued negotiation of shared understanding. With autism as a lens through which to view collaboration, what might shared understanding mean? This question has been thrown into sharp relief by Milton’s (2012) articulation of the double empathy problem: just as neurotypical people might find it difficult to understand autistic people, those people themselves may equally find so-called neurotypicals hard to fathom. Discussion of shared understanding needs to be underpinned by the appreciation that such understanding, being shared, is intrinsically a relationship measure: not ‘two understandings’ but an understanding co-created by two or more people, or at least a mutual appreciation of the other’s understanding. In this section, I address three ways in which work in autism challenges standard approaches to collaboration: the nature of shared understanding, the closeness of coupling of actions and the role of language and embodied action.

Some authors have argued that the nature of shared understanding might be different within autistic partnerships compared to TD ones. Heasman and Gillespie (2018, p. 25) observed autistic adults playing co-located collaborative video games and argued for “a different type of sociality, one that permits periods of incoherent and fragmented dialogue in favor of pockets of intense rapport, reciprocation, and humor”. The authors argue that participants in these interactions made “generous assumptions about common ground”—that is, they might switch topics without warning, which could fragment the conversation but also create shared humour, and they made low demands for coordinated conversation, resulting in misunderstandings and unanswered questions, with both seeming entirely unproblematic to the participants. This loose pattern of interaction ‘worked’ in the sense that 19 of the 20 sessions observed were completed satisfactorily. It is worth noting that YouTube and video games are by far the most popular use of technology reported by parents for their autistic children, across a wide age range (Laurie et al., 2019), with very little reference to use of autism-specific apps. Video games that are played collectively (online, so not co-located) might offer different ways of connecting with others. For example, Ringland et al. (2016) described the sense of community experienced in Autcraft, an autistic Minecraft community of children and young people around the ages of 9 to 16, with examples of online collaborative action such as slaying dragons together. The above examples address verbally-mediated interaction, co-located or remote. This brings me to the critical issue of the role of bodies in space in co-located collaborative interaction. There are two major factors here: first, there is the role of specific physical movements such as position and gesture in collaborative interaction (e.g. 
as discussed in the tabletop studies in Chapter 3) and second, the overall place of bodily interaction compared with that of spoken language and symbolic representation, in communication and collaboration. The shared understanding discussed in Chapters 4 and 5 focused heavily on language as an intrinsic part of shared meaning, reflecting the overwhelming focus of the literature. However, this ignores ways that people might reach shared understanding through non-verbal means, which could be very different and more reliant on embodied action. The more prominent accounts accord with views of understanding as mental representation, with language as the means through which individual mental representations of the world are shared. Theory of Mind (ToM)

is a paradigmatic example of such an approach, and in the past has been very influential in autism research because of widely-disseminated ‘ToM deficit’ accounts and research on mental state talk (Happé & Frith, 2020). However, De Jaegher (2013) in particular has argued for approaches to social interaction that do not rely on representational understanding. She proposes that social interaction involves ‘participatory sense-making’, which relies on embodied action rather than on spoken language. Her work is derived in large part from investigation of interactions in autism, based on an embodied approach and a dynamical systems perspective, and focuses on patterns of coordination of action (and conversation) in interaction. Specifically, according to this view (see Fantasia et al., 2014), cooperative interactions (collaborative in the sense used here) are possible without the ‘higher cognitive functions’ seen by many as prerequisite, because we make sense of the world by moving around in it. When we interact with others, we coordinate our movements, meaning that we are physically participating in each other’s making sense of the world, enabling us to share our sense-making. This can reflect the sort of coordination described in early mother–infant interactions by Trevarthen (1998) and Reddy (2010), for example. This sense-making depends on each partner in the interaction. Two implications, if this view is accurate, are that collaboration is possible with minimally-verbal autistic children (Yuill, 2014) and that teaching of, or through, verbal communication is not the only or even the most viable approach for such groups. A recent body of research in movement synchrony and autism provides recognition of the importance of embodiment in social interactions. 
There is ample evidence that aligned interactions in typical adults involve close synchrony of movement, gesture and positioning (Dale et al., 2020), and of language, with similar speech patterns and grammatical constructions being associated with greater liking and more cooperation (Hove & Risen, 2009). There is also evidence of lower movement synchrony if one interactant is autistic. Even so, a systematic review of the literature by Glass and Yuill (in preparation) shows evidence for some level of synchrony, albeit lower, in such cases. That review also addresses the question of whether there might be more alignment, synchrony or understanding between matched partners (two TD children, or two ASC children) than between mismatched ones (an autistic and a TD child).
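For readers who work with such data, the simplest way to quantify movement synchrony between two interactants is to correlate their movement time series. The sketch below uses a zero-lag Pearson correlation of movement magnitudes, which is my simplifying assumption; the studies cited in this literature use more elaborate lagged and windowed methods.

```python
# Illustrative only: zero-lag correlation of two movement magnitude series.
def synchrony(sig_a, sig_b):
    """Pearson correlation of two equal-length movement magnitude series."""
    n = len(sig_a)
    mean_a = sum(sig_a) / n
    mean_b = sum(sig_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(sig_a, sig_b))
    var_a = sum((a - mean_a) ** 2 for a in sig_a)
    var_b = sum((b - mean_b) ** 2 for b in sig_b)
    return cov / (var_a ** 0.5 * var_b ** 0.5)

# Two people moving in phase show high correlation of movement energy:
child = [0.1, 0.9, 0.2, 0.8, 0.1, 0.9]
actor = [0.2, 1.0, 0.3, 0.9, 0.2, 1.0]
print(round(synchrony(child, actor), 2))  # close to 1.0 for in-phase movement
```

Note that the overall level of movement differs between the two series; it is the shared temporal pattern, not the amplitude, that the correlation picks up, which is why synchrony can be detected even in ‘subtle movements of social coordination’.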

Synchrony is implicitly recognised as a powerful vehicle for supporting interaction in therapeutic practice involving non-verbal autistic people. For example, Intensive Interaction (Nind, 1999) involves a therapist closely imitating the movements and actions of a person with autism so that the two thereby fall into synchrony, or into contingent movement. The idea is that the autistic person’s recognition of this contingency creates shared meaning that both can appreciate: when I move like this, you move too, and we notice the coordination of our movements. Technology to support synchrony, for example using Microsoft Kinect to convert body movements of a child and therapist into sounds, may be a promising way to support experiences of shared meaning (Ragone et al., 2020). More generally, music technology has inspired ways of making connections with autistic and language-impaired children, with systems for shared music-making such as the Reactable (Villafuerte et al., 2012). Wearable devices enable sensitive detection of movement synchrony that we might otherwise not notice: for example, Ward et al. (2018) fitted a group of 10 autistic children with wrist-worn accelerometers while they participated in performances of A Midsummer Night’s Dream with a theatre group. Analysis of patterns showed ‘subtle movements of social coordination’ (ibid., p. 148), such as a child making stimming movements of his hand in time with the rhythm of the actors’ movements, and another child making movements in synchrony with those of his twin on the far side of the stage. Finally, shared understanding is of course not exclusively about getting autistic children to understand something we may want to teach them (such as collaborative skills). Another front in which autism has pushed the research agenda is in the question of participatory design (PD) of technology and participatory involvement in research more generally. 
The PD approach is to involve autistic people fully in the design process, from what questions we should be asking to what should be designed and how we might approach the design process. These approaches have been extended across a wide spectrum of people, including autistic children (e.g. Frauenberger et al., 2019). The upshot of this section is that shared understanding, particularly in less-verbal children, can be sought through action synchrony, and technology provides a potentially valuable means of doing this, though currently there is limited evidence about the influence on immediate behaviour synchrony and on effects lasting beyond an intervention session. The same aspects of the Co-EnACT framework, of engagement,

attention, contingency and control, are relevant in autism, but help us reflect differently on what shared understanding might mean.

References

Alcorn, A. M., Ainger, E., Charisi, V., Mantinioti, S., Petrović, S., Schadenberg, B. R., Tavassoli, T., & Pellicano, E. (2019). Educators' views on using humanoid robots with autistic learners in special education settings in England. Frontiers in Robotics and AI, 6, 107.
Alcorn, A. M., Pain, H., & Good, J. (2014). Motivating children's initiations with novelty and surprise: Initial design recommendations for autism. In Proceedings of the 2014 Conference on Interaction Design and Children (pp. 225–228). Aarhus.
Battocchi, A., Pianesi, F., Tomasini, D., Zancanaro, M., Esposito, G., Venuti, P., Ben Sasson, A., Gal, E., & Weiss, P. L. (2009). Collaborative puzzle game: A tabletop interactive game for fostering collaboration in children with autism spectrum disorders (ASD). In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (pp. 197–204). ACM.
Bauminger-Zviely, N., Eden, S., Zancanaro, M., Weiss, P. L., & Gal, E. (2013). Increasing social engagement in children with high-functioning autism spectrum disorder using collaborative technologies in the school environment. Autism, 17(3), 317–339.
Boyd, L. E., Ringland, K. E., Haimson, O. L., Fernandez, H., Bistarkey, M., & Hayes, G. R. (2015). Evaluating a collaborative iPad game's impact on social relationships for children with autism spectrum disorder. ACM Transactions on Accessible Computing (TACCESS), 7(1), 1–18.
Crowell, C., Mora-Guiard, J., & Pares, N. (2019). Structuring collaboration: Multi-user full-body interaction environments for children with autism spectrum disorder. Research in Autism Spectrum Disorders, 58, 96–110.
Crowell, C., Sayis, B., Benitez, J. P., & Pares, N. (2020). Mixed reality, full-body interactive experience to encourage social initiation for autism: Comparison with a control nondigital intervention. Cyberpsychology, Behavior, and Social Networking, 23(1), 5–9.
Dale, R., Bryant, G. A., Manson, J. H., & Gervais, M. M. (2020). Body synchrony in triadic interaction. Royal Society Open Science, 7(9), 200095.
De Jaegher, H. (2013). Embodiment and sense-making in autism. Frontiers in Integrative Neuroscience, 7, 15.
Department for Education. (2019). Special educational needs in England: January 2019. Retrieved May 10, 2021, from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/814244/SEN_2019_Text.docx.pdf.


Fantasia, V., De Jaegher, H., & Fasulo, A. (2014). We can work it out: An enactive look at cooperation. Frontiers in Psychology, 5, 874.
Farr, W., Yuill, N., & Hinske, S. (2012). An augmented toy and social interaction in children with autism. International Journal of Arts and Technology, 5(2), 104–125.
Farr, W., Yuill, N., & Raffle, H. (2010). Social benefits of a tangible user interface for children with autistic spectrum conditions. Autism: The International Journal of Research and Practice, 14(3), 237–252.
Frauenberger, C., Spiel, K., & Makhaeva, J. (2019). Thinking outside the box—Designing smart things with autistic children. International Journal of Human-Computer Interaction, 35(8), 666–678.
Gal, E., Bauminger, N., Goren-Bar, D., Pianesi, F., Stock, O., Zancanaro, M., & Weiss, P. L. T. (2009). Enhancing social communication of children with high-functioning autism through a co-located interface. AI & Society, 24(1), 75–84.
Glass, D., & Yuill, N. (in preparation). Social motor synchrony in autism spectrum conditions: A systematic review. University of Sussex.
Happé, F., & Frith, U. (2020). Annual research review: Looking back to look forward—Changes in the concept of autism and implications for future research. Journal of Child Psychology and Psychiatry and Allied Disciplines, 3, 218–232.
Heasman, B., & Gillespie, A. (2018). Neurodivergent intersubjectivity: Distinctive features of how autistic people create shared understanding. Autism, 23(4), 910–921.
Holt, S., Viner, H., & Yuill, N. (in preparation). Controlled trial of an intervention to support collaboration in learning-disabled autistic children. University of Sussex.
Holt, S., & Yuill, N. (2014). Facilitating other-awareness in low-functioning children with autism and typically-developing preschoolers using dual-control technology. Journal of Autism and Developmental Disorders, 44(1), 1–13.
Holt, S., & Yuill, N. (2017). Tablets for two: How dual tablets can facilitate other-awareness and communication in learning disabled children with autism. International Journal of Child-Computer Interaction, 11, 72–82.
Hourcade, J. P., Williams, S. R., Miller, E. A., Huebner, K. E., & Liang, L. J. (2013). Evaluation of tablet apps to encourage social interaction in children with autism spectrum disorders. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 3197–3206). ACM.
Hove, M. J., & Risen, J. L. (2009). It's all in the timing: Interpersonal synchrony increases affiliation. Social Cognition, 27(6), 949–960.
Kapp, S. K., Gillespie-Lynch, K., Sherman, L. E., & Hutman, T. (2013). Deficit, difference, or both? Autism and neurodiversity. Developmental Psychology, 49(1), 59.


Keay-Bright, W., & Howarth, I. (2012). Is simplicity the key to engagement for children on the autism spectrum? Personal and Ubiquitous Computing, 16(2), 129–141.
Laurie, M. H., Warreyn, P., Uriarte, B. V., Boonen, C., & Fletcher-Watson, S. (2019). An international survey of parental attitudes to technology use by their autistic children at home. Journal of Autism and Developmental Disorders, 49(4), 1517–1530.
Legoff, D. B., & Sherman, M. (2006). Long-term outcome of social skills intervention based on interactive LEGO® play. Autism, 10(4), 317–329.
Lorah, E. R., Parnell, A., Whitby, P. S., & Hantula, D. (2015). A systematic review of tablet computers and portable media players as speech generating devices for individuals with autism spectrum disorder. Journal of Autism and Developmental Disorders, 45(12), 3792–3804.
Milton, D. E. M. (2012). On the ontological status of autism: The 'double empathy problem.' Disability & Society, 27(6), 883–887.
Moll, H., & Tomasello, M. (2007). Cooperation and human cognition: The Vygotskian intelligence hypothesis. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 362(1480), 639–648.
Mora-Guiard, J., Crowell, C., Pares, N., & Heaton, P. (2017). Sparking social initiation behaviors in children with autism through full-body interaction. International Journal of Child-Computer Interaction, 11, 62–71.
National Center for Education Statistics. (2019). Students with disabilities. Retrieved May 10, 2021, from https://nces.ed.gov/fastfacts/display.asp?id=64.
Nind, M. (1999). Intensive interaction and autism: A useful approach? British Journal of Special Education, 26(2), 96–102.
Onaolapo, A. Y., & Onaolapo, O. J. (2017). Global data on autism spectrum disorders prevalence: A review of facts, fallacies and limitations. Universal Journal of Clinical Medicine, 5(2), 14–23.
Parsons, S. (2015). Learning to work together: Designing a multi-user virtual reality game for social collaboration and perspective-taking for children with autism. International Journal of Child-Computer Interaction, 6, 28–38.
Parsons, S. (2016). Authenticity in virtual reality for assessment and intervention in autism: A conceptual review. Educational Research Review, 19, 138–157.
Ragone, G., Good, J., & Howland, K. (2020). OSMoSIS: Interactive sound generation system for children with autism. In Proceedings of the 2020 ACM Interaction Design and Children Conference: Extended Abstracts (pp. 151–156). ACM.
Reddy, V. (2010). How infants know minds. Harvard University Press.


Ribeiro, P. C., & Raposo, A. B. (2014). ComFiM: A game for multitouch devices to encourage communication between people with autism. In IEEE 3rd International Conference on Serious Games and Applications for Health (SeGAH) (pp. 1–8).
Ringland, K. E., Wolf, C. T., Faucett, H., Dombrowski, L., & Hayes, G. R. (2016). "Will I always be not social?" Re-conceptualizing sociality in the context of a Minecraft community for autism. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 1256–1269).
Russell, G., Mandy, W., Elliott, D., White, R., Pittwood, T., & Ford, T. (2019). Selection bias on intellectual ability in autism research: A cross-sectional review and meta-analysis. Molecular Autism, 10(1), 9.
Silva-Calpa, G. F. M., Raposo, A. B., & Ortega, F. R. (2020). Collaboration support in co-located collaborative systems for users with autism spectrum disorders: A systematic literature review. International Journal of Human–Computer Interaction, 1–21.
Silva-Calpa, G. F. M., Raposo, A. B., & Suplino, M. (2018). CoASD: A tabletop game to support the collaborative work of users with autism spectrum disorder. In IEEE 6th International Conference on Serious Games and Applications for Health (SeGAH) (pp. 1–8).
Trevarthen, C. (1998). The concept and foundations of infant intersubjectivity. In S. Bråten (Ed.), Intersubjective communication and emotion in early ontogeny (pp. 15–46). Cambridge University Press.
Villafuerte, L., Markova, M., & Jorda, S. (2012). Acquisition of social abilities through musical tangible user interface: Children with autism spectrum condition and the Reactable. In CHI'12 Extended Abstracts on Human Factors in Computing Systems (pp. 745–760). ACM.
Ward, J. A., Richardson, D., Orgs, G., Hunter, K., & Hamilton, A. (2018). Sensing interpersonal synchrony between actors and autistic children in theatre using wrist-worn accelerometers. In Proceedings of the 2018 ACM International Symposium on Wearable Computers (pp. 148–155). ACM.
Wood, R. (2019). Inclusive education for autistic children: Helping children and young people to learn and flourish in the classroom. Jessica Kingsley.
Yuill, N. (2014). Going along with or taking along with: A cooperation continuum in autism? Frontiers in Psychology, 5, 1266.
Yuill, N., Hinske, S., Williams, S. E., & Leith, G. (2014). How getting noticed helps getting on: Successful attention capture doubles children's cooperative play. Frontiers in Psychology, 5, 418.
Yuill, N., & Rogers, Y. (2012). Mechanisms for collaboration: A design and evaluation framework for multi-user interfaces. ACM Transactions on Computer-Human Interaction (TOCHI), 19(1), 1–25.

CHAPTER 7

Conclusion

Abstract The Co-EnACT collaboration framework presented in Chapter 1 has served to examine a range of research addressing how children work and play together using different digital technologies. In this conclusion, I summarise the framework and its underpinnings, and then briefly mention aspects I have had less to say about: the role of adults and of cultures, the availability of technology and the organisational structures surrounding its use, implications for teaching and supporting collaboration, and some lessons about co-located collaboration drawn from the experience of a global pandemic.

Keywords Co-EnACT framework · Participatory research · Context · Personalisation · Relationships · In-person vs online

7.1 The Co-EnACT Framework

The Co-EnACT framework starts with engagement: how can a technology first encourage children to become interested in a task? Audio and video are most commonly used as hooks, and can work well together. For example, in the Augmented Knights' Castle (AKC), audio produces a shared, time-limited event, creating collective visual attention that conduces to closer social participation. There is further potential for
using tangible and olfactory materials to engage children in collaboration, including children with sensory impairments (e.g. Cullen & Metatla, 2019). These hooks also allow children to hover on the edge, observing others using an unfamiliar apparatus, before making a commitment to jump in. However, focusing solely on engagement can also present a pitfall: providing too many dazzling effects can draw children away from attending to each other, as in an e-book designed with so many extras that very little reading takes place. The aim for collaboration needs to be to engage children with each other through the technology, rather than engaging them with the technology itself. From initial engagement, collaboration requires joint attention, usually to objects or events. How does the technology provide for its users to be involved in the same experience? The perspective of embodied cognition shows that we readily pick up what others are attending to, or plan to do, from physical cues such as body orientation, eye gaze, anticipatory arm movements and hovering fingers over an object. Tangible technologies and large surfaces designed with these aspects in mind can really support such shared awareness in more obvious ways than a cursor on a small screen can. As we saw in Chapter 2, for transitions into closer engagement, sharing attention is a stepping stone towards collaborative interaction. Digital technology is particularly good for supporting different means of control and contingency of actions, as in the example of multi-touch tables with user identification (Chapter 3). Technology provides many different ways of connecting people’s actions, so can be tricky to get right. Crucial questions in design include: who can control actions in the technology, when and how can they do so, and how does one person’s action affect another’s? 
Control is designed into technology in various ways: a mouse or touch on a screen, gestures, or tangible objects for which movement creates digital effects. Crucially, in collaboration, control can be single or shared. Single control alone does not conduce to collaboration as defined here: the way that one person's action is contingent on another's in shared control is a key factor. The Chatlab Connect dual-tablet app (Chapters 3 and 6) combines shared control and contingency so that players need to manage actions together to achieve their goal. My view is that control should not lie completely in the technology: there is a crucial role for the human element and we are generally very well adapted to collaborating. Putting all the control in the technology works against the opportunity to experience contingency and would not help children
to develop their capacity to manage shared control. Just discovering the experience of contingency with another person can be a very powerful means of establishing communication with autistic children who use no spoken language, for example (see also Jaswal et al., 2020). Engagement, attention, contingency and control support children in sharing understanding together, the final element of the Co-EnACT acronym. The object of collaboration, according to Roschelle and Teasley's definition, is to support the continued negotiation of meaning between agents. It is by being involved in collaboration that children develop the competence to do so: collaboration itself then becomes an engine for developing other capacities, such as sharing others' perspectives, interests and ideas, and co-creating new understandings. In Vygotskian terms, understanding constructed between people becomes internalised to become one's own comprehension. Collaboration also brings space for expressing your own views and hearing from others, learning about trust and sharing emotional responses to success and failure in joint endeavours. These human elements can be addressed at micro and macro levels. My approach tends towards the fine analysis of video interactions, because I have found that understanding small moments of attuned interaction can shed light on longer-term processes through which collaboration is negotiated. This accords with the principles of the therapeutic approach of Video Interaction Guidance (VIG: see Chapter 4). The tangible technology described in Chapter 2 is a good example of how, by prompting small moments of shared attention, technology can support more extended sequences of collaboration.
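To make this design space concrete, a minimal sketch of 'enforced' shared control with contingency might look like the following. This is purely illustrative: the class, names and the two-second window are my own assumptions, not the logic of Chatlab Connect, SCoSS or any other system described in this book.

```python
from dataclasses import dataclass, field

@dataclass
class SharedControlGate:
    """Toy model of shared control: an effect fires only when both
    partners act on the same target within a short time window.
    Illustrative only; not the logic of any system in this book."""
    window: float = 2.0                          # seconds allowed between the two actions
    pending: dict = field(default_factory=dict)  # target -> (user, timestamp)
    log: list = field(default_factory=list)      # completed joint actions

    def touch(self, user: str, target: str, t: float) -> bool:
        """Register a touch; return True if it completes a joint action."""
        prev = self.pending.get(target)
        if prev and prev[0] != user and t - prev[1] <= self.window:
            # Contingency satisfied: the second action depends on the first.
            self.pending.pop(target)
            self.log.append((target, prev[0], user))
            return True
        self.pending[target] = (user, t)         # otherwise, wait for the partner
        return False

gate = SharedControlGate()
gate.touch("ana", "drawbridge", t=0.0)           # first action: nothing happens yet
fired = gate.touch("ben", "drawbridge", t=1.2)   # partner acts in time, so the effect fires
```

The design choice the sketch makes visible is the one discussed above: one child acting alone produces no effect, so each child's success is contingent on the other's action, rather than the technology simply taking turns for them.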

7.2 The Wider Context

The Co-EnACT model relies heavily on examining features of the material environment. The flexibility of tangible technologies in particular shows how valuable these material features can be: they loom especially large for autistic children, differentiating experiences that can be either comfortable or utterly intolerable. But there is a whole cast of adult humans—designers, parents and teaching staff—who have powerful roles in creating the physical and social shape and management of children’s material environments. A vast body of implicit knowledge about the design of settings is drawn on every day by those who work with
and care for children, in schools and at home, yet it is seriously underrepresented in research. Classrooms may not often be designed by or with their users, and the materials available are constrained by budget-holders, yet design makes all the difference for such users. This fact speaks to the value of more participatory approaches in research. A productive example that I experienced was the co-design of a new school playground for autistic children, which brought increases in children's social initiations compared to their activity in the old playground (Yuill et al., 2007). The Digital Stories approach (autismtransitions.org; Parsons et al., 2020) provides a remarkable example of collaboration among all research partners, including children, in capturing the sensory experiences of the nursery environment to support school transition in 4-year-old autistic pre-schoolers. I have only briefly mentioned the crucial role of adult scaffolding of children's collaborative interactions (Chapter 5). In our own studies, we generally left the children to manage the interaction, only intervening for technical problems. Collaboration between child and adult naturally differs from peer collaboration. For example, we found that 2–4-year-olds using a SCoSS interface (Chapter 4) showed more awareness of their partner with a peer than with an adult partner: adults were treated more as facilitators than as equal partners (Holt & Yuill, 2014). Nevertheless, skilful adult partners supported highly contingent interactions with learning-disabled autistic children using our Chatlab Connect app (Chapter 6). These interactions depend on the history of these relationships as well as on the resources present. I have had little to say about the wider context, particularly the social values and cultural assumptions within which research studies are carried out.
Here, insightful cross-cultural work by authors such as Rogoff (see Chapter 4) is needed to understand how collaboration might be enacted differently. One illustration is the work of Li et al. (2010) in Hong Kong, who articulated the cultural values that supported collaborative working with technology in schools. Cultural differences are apparent, for example, in conventions about personal space and willingness to make physical contact, but the general drive to collaborate, and its value for solving problems and making discoveries, seems to be a powerful universal force in human cultural evolution (Moll & Tomasello, 2007). Chapter 6, on autism, examines some of the assumptions made about collaboration that are challenged by work with this group of participants.


How children collaborate is powerfully determined by the technology available and the organisational structures in which it is used. Currently, there is a strong agenda for making the most easily available screen technologies 'personal' in ways that can militate against their use to support collaboration. Some of the technologies described here as usefully supporting collaboration, such as tabletops and tangibles, are not widely available, and the designs of others, such as our Chatlab Connect dual-tablet app, have to subvert some features of 'personal' design so as to support co-working. At the organisational level, finding ways to put collaboration at the forefront of educational practices and assessment is a further major challenge.

7.3 Collaboration as a Relationship

In Chapter 6, I made an argument for seeing collaboration as involving interpersonal processes underpinned by synchrony and contingency: this implies we can best foster collaboration through looking at groups rather than individuals. These interpersonal processes are shaped by all participants involved. Supporting collaboration cannot be just about helping a single individual to collaborate. Obvious as this might seem, it's surprising how often interaction is written about as if this were the case. As with other aspects of interaction discussed in this book, collaboration is not a property of a single person, but a process between people that arises through interaction. To say that one person is 'not communicating well' overlooks communication as a two-way process: we need to view it from the perspectives of both partners to understand how to support successful interaction. Of course, one person may need initially to take more of a leading role than the other, but the aim is to move towards interactions that are more equitable in the contributions made by each person. Garte (2020) expresses this clearly: the group (or pair) is the unit of analysis, not the individual. As she notes, features of the physical environment have a strong influence, and understanding social competence requires a framework of "shared activity and collective goals during peer interactions" (ibid., p. 30) rather than an exclusive focus on individual skill development. This perspective could help explain why some single-user apps to train social skills have not been conspicuously successful, particularly when researchers seek generalisation to everyday behaviour beyond verbalised knowledge. For example, a carefully controlled study by Fletcher-Watson et al. (2016) evaluated
the FindMe app, which gave autistic children under 6 two months' access to games that practised attending to people and detecting line of gaze, using screen images. A randomised controlled trial with 54 pre-schoolers found no differences between children given the intervention and no-treatment controls in post-test assessments of real-world social behaviour, administered both immediately after the intervention and 6 months later, despite children showing high engagement with the app and parents rating it positively.
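Taking the pair rather than the individual as the unit of analysis can be made concrete with a simple, hedged sketch: a toy pair-level index of how evenly two partners contributed to a session. The function name and the turn representation are my own illustrative assumptions, not a measure used by Garte or in the studies above.

```python
def contribution_balance(turns):
    """Equity of a dyad's contributions: 1.0 means perfectly balanced,
    0.0 means one partner did everything. `turns` is a list of
    (partner, duration_seconds) tuples for one session.
    A toy pair-level measure, not a published instrument."""
    totals = {}
    for who, dur in turns:
        totals[who] = totals.get(who, 0.0) + dur
    if len(totals) < 2:
        return 0.0                       # only one partner contributed
    a, b = sorted(totals.values(), reverse=True)
    return b / a if a > 0 else 0.0       # ratio of smaller to larger share

session = [("ana", 12.0), ("ben", 3.0), ("ana", 5.0), ("ben", 8.5)]
balance = contribution_balance(session)  # ben's 11.5 s against ana's 17 s
```

The point of a measure like this is that it is undefined for a person and only meaningful for the pair: tracking it over sessions would show whether interactions are moving towards the more equitable contributions described above.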

7.4 Collaboration In-Person and Online

This book was written during Covid-19 restrictions, when so many interactions moved overnight from in-person to online, providing a stark opportunity to reflect on the differences that physical bodies and objects make to the ways we collaborate. How can an online group call provide possibilities for collaboration equivalent to being seated together round a table working with materials and shared pieces of paper? How do the Co-EnACT mechanisms described here translate to small-group collaboration online? The brief suggestions below are of course speculative, but are informed by data from the Zoom or Room project (Yuill, 2020), where we observed, interviewed and surveyed clinical practitioners on how they experienced interacting with clients online, compared to in-person. Initial engagement is not usually a barrier, given that video calls are generally timetabled, but practitioners working with young children did report difficulty in maintaining children's engagement just through a video channel. Clearly, relationship quality and acquaintance level will make a difference here. One means of engaging online is the 'dropping-in' feature enabled by smart speakers, whereby video and audio channels are kept open over a period of time, to simulate aspects of being in the same room together and being able to engage intermittently when there is something of interest, rather than feeling the pressure to fill dead air by talking when seated face-to-face. Immersive, mixed-reality technologies for video conferencing, such as the Microsoft HoloLens project, should enable increasingly embodied means of interacting online. I do not attempt to review or predict how online collaborative technologies might develop in future, but it will be interesting to observe how children will extend their in-person competencies in adapting to new modes of interaction.
Sharing attention often feels challenging online because conversational partners usually look at the screen rather than directly into the webcam
(something that can be improved by adjusting the webcam position). The practitioners we interviewed also felt they missed picking up on subtle non-verbal cues, partly because of the lack of shared physical space (e.g. detecting someone stiffening muscles in anxiety) and partly because online calls usually involve just ‘talking heads’. Practitioners relied more on verbal expression than on non-verbal cues—clearly not ideal for child participants who may be less verbally expressive. However, the online play sessions practitioners described generally involved children not being seated close to the webcam, but standing up further away, giving more freedom of movement and richer information about bodily movement within their environment. Our interviewees also noted the value of sharing attention to an external object for establishing closer connections, such as using a shared screen demonstration or having children show and tell using objects within their own space. There is a large HCI literature on co-presence, workspace awareness and collaboration online, and establishing shared awareness is an important part of this work. Contingency and control are similarly more constrained in online spaces (at least those generally available now), but different forms of video-call software have features to enable screen-sharing and co-creation of documents. Visual channels have been easier to support online compared to tangible means, such as shared surfaces, or to audio channels, as so many musicians found when trying to create synchronised performances. The ways that online technologies might need to be adapted to support concurrent collaboration are usefully informed by understanding how features of our material environment support co-located collaboration. 
Pandemic restrictions in schools highlighted the distinctive role of co-located collaboration, but also prompted reflection on how to support collaboration remotely by simulating features of in-person interaction, and on how in-person environments might better serve the diverse needs of children. Many children, and those with special needs particularly, reportedly thrived in the smaller, more structured 'bubble' environments adopted by schools in England, compared with both the previous large-group school structure and their experiences of online collaboration. Unsurprisingly, parents and teachers expressed strong concerns during periods of online learning about the use of screen technology as an electronic babysitter. However, this depiction frames available technologies as being exclusively personal. The previous chapters have provided examples of how technology designed to be personal can be used in ways that are collaborative. I hope that this book has demonstrated different ways that
technologies can be designed and used to support collaboration, offering as many opportunities as possible for children to develop the collaborative capabilities that will be so important to them throughout life.

References

Cullen, C., & Metatla, O. (2019). Co-designing inclusive multisensory story mapping with children with mixed visual abilities. In Proceedings of the 18th ACM International Conference on Interaction Design and Children (pp. 361–373). ACM.
Fletcher-Watson, S., Petrou, A., Scott-Barrett, J., Dicks, P., Graham, C., O'Hare, A., Pain, H., & McConachie, H. (2016). A trial of an iPad™ intervention targeting social communication skills in children with autism. Autism, 20(7), 771–782.
Holt, S., & Yuill, N. (2014). Facilitating other-awareness in low-functioning children with autism and typically-developing preschoolers using dual-control technology. Journal of Autism and Developmental Disorders, 44(1), 1–13.
Jaswal, V. K., Dinishak, J., Stephan, C., & Akhtar, N. (2020). Experiencing social connection: A qualitative study of mothers of nonspeaking autistic children. PLoS ONE, 15(11), e0242661.
Moll, H., & Tomasello, M. (2007). Cooperation and human cognition: The Vygotskian intelligence hypothesis. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), 639–648.
Parsons, S., Kovshoff, H., & Ivil, K. (2020). Digital stories for transition: Co-constructing an evidence base in the early years with autistic children, families and practitioners. Educational Review, 1–19.
Yuill, N. (2020). "They don't know we've got legs": Meeting online and in-person. Retrieved May 11, 2021, from https://blogs.sussex.ac.uk/psychology/2020/11/04/they-dont-know-weve-got-legs-meeting-online-and-in-person/.
Yuill, N., Strieth, S., Roake, C., Aspden, R., & Todd, B. (2007). Designing a playground for children with autistic spectrum disorders—Effects on playful peer interactions. Journal of Autism and Developmental Disorders, 37(6), 1192–1196.

Index

A
AKC. See Augmented Knights' Castle
artefacts, 6, 15, 22, 62, 65, 66, 71, 86
Augmented Knights' Castle (AKC), 8–10, 22, 24–30, 32, 48, 55, 62, 65–71, 78, 114, 115, 127

B
Benford, S., 13, 47, 48

C
Co-EnACT collaboration framework, 3, 5, 10, 54, 61, 71, 73, 83, 106, 122, 127
collaboration, 2–5, 8, 10–17, 21–24, 35, 36, 39–42, 44, 45, 47, 48, 50–52, 54, 55, 61–66, 69, 71–79, 83–96, 98, 105–111, 113, 115, 116, 119–121, 127–134
constraints, 8, 35, 47, 54, 55, 62, 73, 76, 78, 90, 96, 109, 112, 115, 118, 119
contingent/contingency, 8, 10, 16, 23, 27, 28, 31, 40, 41, 50–55, 61, 64, 65, 72, 73, 86, 89, 91, 92, 108, 109, 117, 122, 128, 130
cooperation, 11, 12, 40, 51, 52, 63, 64, 66, 69, 70, 77, 95, 121
cooperative play, 4, 8, 9, 22, 24, 27, 29, 30, 55, 62, 64, 66–68, 70, 71, 114, 115
COSPATIAL project, 109
Crook, Charles, 14, 89
culture, 12, 15, 84, 89, 127

D
De Jaegher, H., 16, 121
developmental psychology, 4, 15
dialogue, 5, 14, 15, 50, 62, 69, 70, 72, 86, 92, 106, 111, 120
DigiTile, 48, 49, 62, 72, 77, 78

E
embodied cognition, 16, 128

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 N. Yuill, Technology to Support Children’s Collaborative Interactions, https://doi.org/10.1007/978-3-030-75047-3


entry point, 24, 114, 115
evolution, 12, 130

H
HCI, 2, 4, 5, 11, 13–15, 21, 24, 32, 133
Higgins, S., 85, 87, 98
honeypot effect, 25, 26, 71, 114
Human–Computer Interaction. See HCI

I
interactive whiteboard (IWB), 83, 85–89, 93, 96–100
inter- and intra-psychological, 15, 42, 62
iPad. See tablet
IWB. See interactive whiteboard (IWB)

K
Kerawalla, L., 14, 54, 74, 93
KidPad, 13, 48, 53

L
Lands of Fog, 26, 111, 112
language, 4, 11, 16, 62, 64, 68–70, 99, 106, 107, 114, 118–122, 129
legacy bias, 87, 97

M
mechanisms, 5, 8, 9, 12–14, 16, 17, 22–24, 47, 49, 72, 98, 107, 109, 112, 132
Mercer, N., 69, 87, 92, 93, 99
multi-touch table. See tabletop

N
narrative, 10, 24, 26, 52, 55, 67–70, 72, 75, 109

novelty, 26, 68, 71

O
OurSpace, 43–45

P
Parten, M. B., 22, 23
PISA, 95, 96
play states. See sequence
Programme for International Student Assessment. See PISA

R
Rogers, Yvonne, 3, 13, 15, 42, 43, 49, 54, 83, 84, 106
Roschelle, J., 11, 12, 14, 16, 45, 59, 71, 74, 76, 127. See also Teasley, S. D.

S
scaffolding, 16, 63–65, 77, 92, 130
SCoSS, 41, 51–55, 72–74, 76, 78, 116, 130
second-person approaches, 51
Separate Control of Shared Space. See SCoSS
sequence/play, 24, 28, 40, 67, 71, 108, 112, 129
shared control, 10, 39–42, 45, 47, 50, 51, 55, 72, 73, 86, 87, 100, 108, 109, 128, 129
shared display, 35, 90, 91, 93, 96, 99, 100
shared reading, 5, 7, 8, 25, 31, 42, 62, 88, 107
ShareIT project, 42, 66
size/screen size, 6, 28, 35, 42, 84, 90, 91, 109, 110
SLANT, 69, 70, 110
smart speaker, 55, 132


Social Pedagogic Research into Group Work. See SPRinG
social skills/competence, 65, 113, 115, 129, 131
Spoken Language and New Technology. See SLANT
SPRinG, 91, 92, 94
synchrony, 4, 5, 16, 51, 79, 109, 111, 121, 122, 131

T
tablet, 5–7, 25, 35, 45, 54, 65, 73, 84, 85, 88–92, 96–100, 110–113, 116–118, 128, 131
tabletop, 5, 31–33, 35, 42–45, 47, 49, 51, 54, 65, 76, 78, 86, 89–91, 96, 98, 99, 109–111, 116, 120, 131


tangibles, 5, 6, 35, 36, 48, 75, 106, 114, 115, 128, 129, 131, 133
Teasley, S. D., 11, 12, 14, 16, 45, 59, 71, 74, 76, 77, 127
technoference, 30
Thinking Together, 14, 92, 93
transition between play states. See sequences
turn-taking, 40, 44–47, 50, 51, 56, 66, 78, 87, 88, 92, 107, 108, 111, 113

V
virtual reality, 5, 26, 110
voice assistant, 55, 56
Vygotskian intelligence hypothesis, 12, 107
Vygotsky, L. S., 5, 15, 16, 63, 68, 75