Linguistic Typology (Oxford Textbooks in Linguistics) 9780199677092, 9780199677498, 0199677093

This textbook provides a critical introduction to major research topics and current approaches in linguistic typology …


English Pages 432 [533] Year 2018


Table of contents:
Cover
Linguistic Typology
Copyright
Contents
Foreword
Preface
List of abbreviations
Genealogical affiliations and geographical locations of languages cited in the book
1: Linguistic typology: An introductory overview
1.1 Introduction
1.2 What is linguistic typology?
1.3 Linguistic typology: a short history
1.4 Concluding remarks
Study questions
Further reading
Part I: Foundations: theory and method
2: Unity and diversity in the world’s languages
2.1 Introduction
2.2 The connection between diversity and unity
2.3 The Principle of Uniformitarianism: a methodological frame of reference
2.4 When and where similarities count
2.5 Types of language universals and universal preferences
2.6 Concluding remarks
Study questions
Further reading
3: Typological analysis
3.1 Introduction
3.2 ‘Comparing apples with apples’: cross-linguistic comparability
3.3 Comparative concepts vs descriptive categories
3.4 Concluding remarks
Study questions
Further reading
4: Linguistic typology and other theoretical approaches
4.1 Introduction
4.2 Conceptual differences between LT and GG
4.3 Methodological differences between LT and GG
4.4 Optimality Theory: a derivative of GG with a twist
4.5 Concluding remarks
Study questions
Further reading
5: Language samples and sampling methods
5.1 Introduction
5.2 Some recalcitrant issues in language sampling
5.3 Types of language sample
5.4 Biases in language sampling and how to avoid them
5.5 Independence of cases
5.6 Sampling procedures
5.6.1 Proportional representation in language sampling
5.6.2 Independence of cases in proportional representation
5.6.3 Having the best of both worlds: structural variation across and within phyla
5.7 Testing independence of cases at a non-genealogical level
5.8 Typological distribution over time: was it different then from what it is now?
5.9 Concluding remarks
Study questions
Further reading
6: Data collection: Sources, issues, and problems
6.1 Introduction
6.2 Grammatical descriptions or grammars
6.3 Texts
6.4 Online typological databases
6.5 Native speaker elicitation: direct or indirect
6.6 Levels of measurement and coding
6.7 Concluding remarks
Study questions
Further reading
7: Typological asymmetry: Economy, iconicity, and frequency
7.1 Introduction
7.2 Typological asymmetry
7.2.1 Formal coding
7.2.2 Grammatical behaviour
7.3 Economy and iconicity (in competition)
7.4 Typological asymmetry = frequency asymmetry?: iconicity vs frequency
7.5 Concluding remarks
Study questions
Further reading
8: Categories, prototypes, and semantic maps
8.1 Introduction
8.2 Classical vs prototype approach to categorization
8.3 Prototype category as a network of similarities
8.4 Prototype in grammar
8.5 Semantic maps: ‘geography of grammatical meaning’
8.6 Concluding remarks
Study questions
Further reading
Part II: Empirical dimensions
9: Phonological typology
9.1 Introduction
9.2 Segmental typology
9.2.1 Consonants
9.2.2 Vowels
9.2.3 Consonant–vowel ratios
9.3 Syllabic typology
9.4 Prosodic typology
9.4.1 Tone
9.4.2 Stress
9.5 Concluding remarks
Study questions
Further reading
10: Basic word order
10.1 Introduction
10.2 Some basic word order patterns
10.3 Early word order research
10.3.1 Greenberg’s seminal work and other ‘derivative’ works
10.3.2 Bringing word order inconsistencies to order
10.3.3 Distribution of the six basic clausal word orders
10.4 The OV–VO typology and the Branching Direction Theory
10.4.1 Back to the OV–VO typology
10.4.2 Inadequacy of Head-Dependent Theory
10.4.3 Branching Direction Theory (BDT)
10.4.4 Further thoughts on BDT
10.5 Word order variation and processing efficiency
10.5.1 Basic assumptions of EIC Theory
10.5.2 Left–right asymmetry in word order
10.6 Structural complexity and efficiency
10.6.1 Processing principles and processing domains
10.6.2 Word order and word order correlations in Hawkins’s extended theory
NPo vs PrN in OV and VO languages
10.7 Areal word order typology
10.7.1 Areal distribution of the six clausal word orders
10.7.2 Areal distribution of OV and VO
10.7.3 Areal distribution of OV/VO and NPo/PrN
10.7.4 Areal distribution of OV/VO and RelN/NRel
10.8 Concluding remarks
Study questions
Further reading
11: Case alignment
11.1 Introduction
11.2 S-alignment types
11.2.1 Nominative–accusative type
11.2.2 Ergative–absolutive type
11.2.3 Tripartite type
11.2.4 Double oblique type
11.2.5 Neutral type
11.3 Variations on S-alignment
11.3.1 Split-ergative type
11.3.2 Active–stative type
11.3.3 Hierarchical type
11.4 Distribution of the S-alignment types
11.5 Explaining various case alignment types
11.5.1 The discriminatory view of case marking
11.5.2 The indexing view of case marking
11.5.3 The discriminatory view vs the indexing view
11.6 The Nominal Hierarchy and the split-ergative system
11.7 Case marking as an interaction between attention flow and viewpoint
11.8 P-alignment types
11.9 Distribution of the P-alignment types
11.10 Variations on P-alignment
11.11 S-alignment and P-alignment in combination
11.12 Case alignment and word order
11.13 Concluding remarks
Study questions
Further reading
12: Grammatical relations
12.1 Introduction
12.2 Agreement
12.3 Relativization
12.4 Noun phrase ellipsis under coreference
12.5 Hierarchical nature of grammatical relations
12.6 Concluding remarks
Study questions
Further reading
13: Valency-manipulating operations
13.1 Introduction
13.2 Change of argument structure with change of valency
13.2.1 Passive
13.2.2 Antipassive
13.2.3 Noun incorporation
13.2.4 Applicative
13.2.5 Causative
13.3 Change of argument structure without change of valency
13.4 Concluding remarks
Study questions
Further reading
14: Person marking
14.1 Introduction
14.2 Morphological form: variation and distribution
14.2.1 Person marking and case alignment
14.2.2 Person marking and grammatical relations
14.3 Paradigmatic structure: towards a typology
14.3.1 Grouping A: no inclusive/exclusive with split group marking
14.3.2 Grouping B: no inclusive/exclusive with homophonous group marking
14.3.3 Grouping C: inclusive/exclusive with split group marking
14.3.4 Grouping D: inclusive/exclusive with homophonous group marking
14.3.5 Structural dependencies in paradigmatic structure
14.4 Concluding remarks
Study questions
Further reading
15: Evidentiality marking
15.1 Introduction
15.2 Morphological form of evidentiality marking
15.3 Semantic parameters of evidentiality
15.3.1 Visual
15.3.2 Non-visual sensory
15.3.3 Inference and assumption
15.3.4 Hearsay and quotative
15.3.5 Order of preference in evidentials
15.4 Typology of evidentiality systems
15.4.1 Two-term systems
15.4.2 Three-term systems
15.4.3 Four-term systems
15.4.4 Languages with more than four evidentiality distinctions
15.4.5 Multiple evidentiality subsystems
15.5 Evidentiality and other grammatical categories
15.6 Concluding remarks
Study questions
Further reading
References
Author index
Language index
Subject index


OUP CORRECTED PROOF – FINAL, 27/11/2017, SPi

Linguistic Typology


OXFORD TEXTBOOKS IN LINGUISTICS

PUBLISHED

The Grammar of Words: An Introduction to Linguistic Morphology by Geert Booij
Pragmatics by Yan Huang
A Practical Introduction to Phonetics by J. C. Catford
Compositional Semantics: An Introduction to the Syntax/Semantics Interface by Pauline Jacobson
An Introduction to Multilingualism: Language in a Changing World by Florian Coulmas
The History of Languages: An Introduction by Tore Janson
Meaning in Use: An Introduction to Semantics and Pragmatics by Alan Cruse
The Lexicon: An Introduction by Elisabetta Ježek
A Functional Discourse Grammar for English by Evelien Keizer
Natural Language Syntax by Peter W. Culicover
Diachronic Syntax by Ian Roberts
Principles and Parameters: An Introduction to Syntactic Theory by Peter W. Culicover
Linguistic Typology by Jae Jung Song
A Semantic Approach to English Grammar by R. M. W. Dixon
Cognitive Grammar: An Introduction by John R. Taylor
Semantic Analysis: A Practical Introduction by Cliff Goddard
Linguistic Categorization by John R. Taylor

IN PREPARATION

Translation Theory and Practice by Kirsten Malmkjaer
Grammaticalization by Heiko Narrog and Bernd Heine
Speech Acts and Sentence Types in English by Peter Siemund


Linguistic Typology

Jae Jung Song


Great Clarendon Street, Oxford, United Kingdom

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries

© Jae Jung Song

The moral rights of the author have been asserted

First Edition published in
Impression:

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above

You must not circulate this work in any other form and you must impose this same condition on any acquirer

Published in the United States of America by Oxford University Press, Madison Avenue, New York, NY, United States of America

British Library Cataloguing in Publication Data
Data available

Library of Congress Control Number:

ISBN –––– (hbk.)
ISBN –––– (pbk.)

Printed and bound by CPI Group (UK) Ltd, Croydon

Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.



Foreword

Jae Jung Song died peacefully at home in Dunedin, New Zealand on  April  while this book was being prepared for publication. Born on  January  to Tae Hyun Song and Jin Sun Yoo, Jae was one of four siblings, including his brother, Jae Tag. He is survived by his son, Kee, daughter-in-law Coral, and his grandson, whose birth Jae was eagerly anticipating, but sadly missed.

Jae received a BA Honours and PhD from Monash University, Melbourne, Australia. After several years teaching in Australia and Singapore, Jae joined the University of Otago in . Jae was instrumental in re-establishing and expanding Otago’s Linguistics Programme to include a TESOL minor and a Graduate Diploma for Second Language Teachers. During his career he served on the editorial boards of the Web Journal of Formal, Computational and Cognitive Linguistics, the Australian Journal of Linguistics, the International Review of Korean Studies, Language and Linguistics Compass, and the New Zealand Journal of Asian Studies. He was also Associate Editor of the journal Linguistic Typology.

This book is Jae’s tenth authored or edited book, capping a career that also included publication of thirty-two book chapters and forty journal articles. Although Jae is most widely known for his extensive contributions to linguistic typology, particularly in the area of causatives, he also published on South and North Korean language policy.

Many of Jae’s colleagues are unaware that Jae was a survivor of polio, which he contracted as a child in South Korea. At the height of his career he was struck by post-polio syndrome, a virtually untreatable re-emergence of the effects of polio. In the ten years I worked with Jae, he progressed through the support of a cane, one crutch, two crutches, a mobility chair and crutches, and finally a motorized wheelchair. Jae was determined not to allow the syndrome to compromise his work, and maintained a usual workload alongside his impressive publication record.


Jae will be remembered by colleagues, students, and friends for his work in typology, but he will be missed for his unwavering determination, acute intelligence, and ready wit.

Anne Feryok
University of Otago
Department of English and Linguistics
Dunedin, New Zealand



Preface

Not many authors are fortunate enough to have an opportunity to write more than one introductory volume in the same field. I am one of the fortunate ones. My first introductory book, Linguistic Typology: Morphology and Syntax (Harlow, ), appeared sixteen years ago, and it was in great need of updating, addition, and revision, in view of a substantial number of developments—theoretical and methodological—and new data drawn from previously as well as newly documented languages. Two introductory books have recently appeared, namely Viveka Velupillai’s An Introduction to Linguistic Typology (Amsterdam, ) and Edith Moravcsik’s Introducing Language Typology (Cambridge, ). These two are excellent introductions to the field and continue to inspire and inform students and professional linguists alike. They are, however, written primarily with students with minimal prior knowledge of linguistics in mind, and there is a need for a text that is pitched at an advanced level. I envisage that readers at this higher level of study are capable of, and keen on, grappling with theoretical or methodological issues that have characterized most of the recent developments in linguistic typology. I hope that my new book will meet that need.

My earlier book was organized largely by topic area, which makes it different from William Croft’s Typology and Universals (Cambridge, ), which is organized by theoretical concept. The present book aims to strike a balance between these two different styles of organization, which I believe is the best way to introduce my target audience to the field.

I would like to thank John Davey (the former Commissioning Editor at Oxford University Press) for inviting me to write a proposal for this book, and his successor Julia Steer for her forbearance and support, especially when the writing of the book ran into difficulty, more than once, because of my personal circumstances.
I would also like to mention a number of linguists whose writings have influenced, or contributed to, my thinking not only about the various issues addressed in this book but also about many others. There are too many to list them here but the following eminent scholars deserve a special mention: Balthasar


Bickel, Barry Blake, Guglielmo Cinque, Bernard Comrie, William Croft, Matthew Dryer, the late Joseph Greenberg, John Haiman, Martin Haspelmath, John Hawkins, Edith Moravcsik, Johanna Nichols, and, last but not least, the late Anna Siewierska. The writing of this book has also benefited from the invaluable input of students in the typology course that I have taught for the last twenty-five years at the University of Otago. One of the advantages of teaching, as many teachers will agree, is that one always tries to find a better way to explain things to students, and, hopefully, this advantage has made its way into the book. There are also a number of people who have contributed non-linguistically to the birth of this book. I am indebted to Jaetag Song and Kee Ho Song for their words of encouragement. I cannot forget to thank Eleni Witehira for her unfailing support, even in times of her own personal difficulties, and also Kun Yong Lee and Sunae Bang (Sushi Station, Dunedin) for often cooking something off the menu for me, despite their heavy workload. I also wish to express my gratitude to the following people who have never heard about linguistic typology and have kindly provided me with non-work-related assistance and support, without which the writing of this book would have been near impossible: Sarah Andrews, Lisa Begg, Dr Rene Cescon, Dr Martin Dvoracek, Hannah Fleming, Linda Grady, Lynda Hurren, Toni Johnston, Sandi Lorincz, Miriam Mackay, Michelle Mielnik, Zena Pigden, Ange Price, Dr Markus Renner, Nic Rogan, and Tammy Waugh. Lastly, I would like to thank Lisa Marr for her assistance in checking the references and preparing the indices.


List of abbreviations

1          first person
2          second person
3          third person
ABL        ablative
ABS        absolutive
ACC        accusative
ALL        allative
ANIM       animate
ANTIP      antipassive
AOR        aorist
APPL       applicative
ART        article
ASP        aspect
ASSUM      assumed
ATTRIB     attributive
AUX        auxiliary
BEN        benefactive
CAUS       causative
CLF        classifier
CLT        clitic
CNTR       contrast
COM        comitative
COMP       complementizer
COMPL      completive
COP        copula
DAT        dative
DEC        declarative
DEF        definite
DEM        demonstrative
DET        determiner
DIR.EV     direct evidential
DISTR      distributive
DO         direct object
DR         direct
DU         dual
EMPH       emphasis
EP         epenthetic formative
ERG        ergative
EXCL       exclusive
EZ         ezāfe
F          feminine
FIRST      first-hand
FOC        focus
FUT        future
GEN        genitive
IMM        immediate
IMP        imperative
INAN       inanimate
INCH       inchoative
INCL       inclusive
INCMP      incompletive
IND        indicative
INDEF      indefinite
INF        inferred
INFV       infinitive
INST       instrumental
INT        intentional
INTR       intransitive marker
INTRATERM  intraterminal
INV        inverse
IO         indirect object
IPFV       imperfective
IRLS       irrealis
LINK       linker
LOC        locative
M          masculine
N          neuter
NARR       narrative
NCL        noun class
NEG        negative
NF         non-final marker
NFUT       non-future
NOM        nominative
NOMN       nominalization/nominalizer
NON.FIRST  non-first-hand
NONVIS     non-visual
NR.PST     near past
NUM        number
OBJ/O      object
OBL        oblique
OBV        obviative
OPT        optative
PART       participle
PASS       passive
PAUC       paucal
PC         particle of concord
PEJ        pejorative
PERF       perfect
PF         phonological filler
PFV        perfective
PL         plural
POSS       possessive
PREP       preposition
PRES       present
PROCOMP    procomplement
PRON       pronoun
PROX       proximate
PRTV       partitive
PST        past
PV         preverb
Q          question
QUOT       quotative
REC.P      recent past
REF        referential
REFL       reflexive
REL        relative
REP        reportative
RETRO      retrospective
SBJ        subject
SBJV       subjunctive
SD         sudden discovery
SENS.EV    sensory evidential
SEQ        sequence
SG         singular
SUB        subordinating
SUBR       subordinator
TES        testimonial
TNS        tense
TOP        topic
TR         transitive marker
TRANSLOC   translocative
V          verb
VIS        visual
VOL        volitional


Genealogical affiliations and geographical locations of languages cited in the book

Each language name is followed by (X; Y: Z), where X is the name of its genus, Y the name of its language family, and Z the name of the country where it is spoken, e.g. Bayso (Eastern Cushitic; Afro-Asiatic: Ethiopia). Languages that do not belong to any known language family are identified as ‘isolates’, e.g. Korean (isolate: Korea). If there is no distinction between genus and family (i.e. a single-genus language family) or between language and genus (i.e. a single-language genus), only one genealogical name is given, e.g. Pirahã (Muran: Brazil) or Palauan (Austronesian: Palau). Genealogical and geographical information is omitted where it is deemed redundant, that is, given elsewhere in the relevant section or chapter. For further genealogical and geographical information on the world’s languages, the reader may like to visit: (1) http://wals.info/languoid; (2) http://glottolog.org/glottolog/language; and (3) https://www.ethnologue.com/browse/names.
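The (X; Y: Z) notation is regular enough to parse mechanically. The sketch below is a hypothetical helper (the function name and regular expression are illustrative, not from the book); it assumes entries follow the convention exactly, extracting genus, family, and country and treating a single genealogical name as serving for both genus and family:

```python
import re

# Matches entries such as "Bayso (Eastern Cushitic; Afro-Asiatic: Ethiopia)"
# or "Korean (isolate: Korea)": a name, then "(genealogy: country)".
PATTERN = re.compile(r"^(?P<name>[^(]+?)\s*\((?P<genealogy>[^:]+):\s*(?P<country>[^)]+)\)$")

def parse_language(entry):
    """Split one '(X; Y: Z)'-style entry into name, genus, family, country."""
    m = PATTERN.match(entry)
    if m is None:
        raise ValueError(f"unrecognized entry: {entry!r}")
    # The genealogy field holds "genus; family", or a single name when
    # genus and family (or language and genus) coincide.
    genealogy = [part.strip() for part in m.group("genealogy").split(";")]
    genus = genealogy[0]
    family = genealogy[1] if len(genealogy) > 1 else genealogy[0]
    return {
        "name": m.group("name").strip(),
        "genus": genus,
        "family": family,
        "country": m.group("country").strip(),
    }

print(parse_language("Bayso (Eastern Cushitic; Afro-Asiatic: Ethiopia)"))
print(parse_language("Korean (isolate: Korea)"))
```

A single-name entry such as Palauan (Austronesian: Palau) comes back with the same value in both genealogical fields, mirroring the single-genus convention described above.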


1 Linguistic typology: An introductory overview

1.1 Introduction
1.2 What is linguistic typology?
1.3 Linguistic typology: a short history
1.4 Concluding remarks
1.1 Introduction

This chapter provides a brief description of linguistic typology by explaining what it is, what it aims to achieve in its study of language, what kinds of question it raises about the nature of language, and how it does what it does. The description will give the reader a snapshot of linguistic typology before they embark on reading the rest of this book. This is followed by a historical overview of linguistic typology. It is important to learn how linguistic typology emerged and developed over time into what it is today, as the reader will need to have a background understanding of how linguistic typology has evolved conceptually into one of the most important theoretical approaches to the study of language.

1.2 What is linguistic typology?

The reader may have come across the kind of restaurant menu to be described here (more commonly found in Chinese than other restaurants): à la carte menu items listed under different main-ingredient


headings, e.g. beef, pork, chicken, duck, seafood, and vegetables. For instance, all the dishes containing beef as their main ingredient are grouped together under the heading of ‘beef’; the dishes with seafood as their main ingredient are all listed together under the heading of ‘seafood’, and so on. In other words, à la carte dishes are put into different types according to the main ingredients used. The basic principle employed in this way of organizing the menu is typological in that the main ingredient (e.g. beef) is used as the criterion for classifying or typologizing a given dish (e.g. beef with ginger and spring onion) in a particular way (e.g. a type of beef dish). This menu can thus be thought of as a typology of the dishes contained therein. The dishes may involve secondary ingredients as well, and any one of these ingredients may alternatively be selected as the criterion for classifying the dishes (e.g. noodle dishes vs rice dishes). Different ingredients (i.e. typological properties), when chosen as the basis for typologizing, may give rise to different typologies of the same dishes. To wit, typology (e.g. a particular way of organizing the menu items) is the classification of a domain (e.g. the menu).

The same principle can be applied to the study of languages. Thus, the world’s languages can be put into different types according to a particular linguistic property. The output of this exercise will be a typology of languages or, specifically, a linguistic typology of languages, since the typological property used is a linguistic one. For instance, take adpositions, which are linguistic elements expressing the semantic relations between the verb and its related noun phrases, e.g. location, time, instrument. The English adposition in in (1) expresses the location of Rachel’s action: Rachel’s action took place in, as opposed to near, the garden.

(1) English (Germanic; Indo-European: UK)
    Rachel washed her car in the garden.

The adposition in (1) is known as a preposition, because it appears right before the noun phrase that it associates itself with. In languages such as Korean, adpositions (e.g. -eyse ‘in’) are placed after the noun phrase, as in (2), in which case they are known as postpositions:

(2) Korean (isolate: Korea)
    kiho-ka anpang-eyse thipi-lul po-ass-ta
    Keeho-NOM main.bedroom-LOC TV-ACC see-PST-IND
    ‘Keeho watched TV in the main bedroom.’

OUP CORRECTED PROOF – FINAL, 20/11/2017, SPi


Using the two types of adposition, it is now possible to classify languages into two different types: prepositional and postpositional languages.1 For instance, English is identified as a prepositional language, and Korean as a postpositional language. The use of adpositions as the basis of typologizing languages is a simple illustration of how to put languages into different types according to a particular typological property (or, simply put, how to typologize languages).

Other typological properties may be selected as the basis for classifying languages. For instance, the grammatical roles within the clause, e.g. subject and object, may be expressed on the head or the non-head (aka the dependent) (for detailed discussion, see Chapter ). Note that in the clause, the verb is the head and its arguments (or the noun phrases that the verb associates itself with) are dependents. In Swahili, for example, the grammatical roles of the noun phrases are all indicated on the verb.

() Swahili (Bantoid; Niger-Congo: Tanzania)
Ahmed a-li-m-piga Badru
Ahmed he-PST-him-hit Badru
'Ahmed hit Badru.'

In (), the verb (or the head) contains overt marking (i.e. a- for the subject noun phrase and m- for the object noun phrase). In contrast, the noun phrases (or the dependents), Ahmed and Badru, do not have any marking that indicates their grammatical roles. This type of marking is known as head-marking (i.e. the head is marked). In Pitta-Pitta, the overt marking of the grammatical roles of the noun phrases in the clause appears on the dependents (= the noun phrases) themselves, as in: ()

Pitta-Pitta (Central Pama-Nyungan; Pama-Nyungan: Australia) kan̩a-lu matjumpa-n̪ a pit̪i-ka man-ERG roo-ACC kill-PST ‘The man killed the kangaroo.’

Not unexpectedly, the marking type exemplified in () is referred to as dependent-marking (i.e. the dependents are marked). Thus, according

1 Less commonly found are languages with both prepositions and postpositions, and even less commonly attested are languages with inpositions, which appear inside the noun phrase. For details on inpositions, see Dryer (c). For the sake of simplicity, these languages will be ignored here.




to the locus of grammatical-role marking in the clause, languages are classified into the head-marking or the dependent-marking type (e.g. Nichols and Bickel a). Swahili is a head-marking language, whereas Pitta-Pitta is a dependent-marking language.2

While linguistic typology shares the same conceptual basis of classification with the Chinese menu described at the beginning of this section, that is where the similarity ends. Unlike the menu, linguistic typology goes well beyond the business of classifying its objects of inquiry (i.e. languages). In fact, the more fruitful part of linguistic typology begins only after the establishment of a typological classification of languages. For concreteness, take adpositions again. Broadly speaking, the world's languages can be classified into either the prepositional or the postpositional type—note that, if a language employs both prepositions and postpositions, it may still be possible to say that the language is either predominantly prepositional or predominantly postpositional. If the investigation stops at this point, doing linguistic typology will be rather bland, if not boring, although that investigation in itself may be a rewarding experience. Still, it is hardly the most exciting thing to point to the mere existence of prepositional and postpositional languages in the world without saying any more. This is why Joseph Greenberg (–), regarded as the father of modern linguistic typology, wrote: 'The assignment of a language to a particular typological class becomes merely an incidental by-product and is not of great interest for its own sake' (a: ).

So, what can be done to make linguistic typology intellectually stimulating and challenging?
One must make every effort to analyse or interpret a given typological investigation; one must attempt to account for what the typological investigation has produced, a linguistic typology of X, with a view to finding out about the nature of human language. To wit, one must make sense of the 'incidental by-product'. But how does one make sense of the linguistic typology of adpositions, for instance? This particular question can be formulated in such a way as to ascertain what factors motivate the use of prepositions or postpositions. Asked differently, do languages choose randomly or systematically between prepositions and postpositions?

2 For the sake of simplicity, languages with both head- and dependent-marking, or with neither, or with marking appearing elsewhere in the clause will be ignored. For details, see Nichols (), and Nichols and Bickel (a, b, c).

This question, in turn, may lead to




further investigation of adpositions. Indeed, linguistic typologists have discovered that the use of prepositions or postpositions is not an independent phenomenon. There is a remarkably strong correlation between the type of adposition and basic clausal word order (specifically, the position of the verb in the clause), to the extent that one can confidently state that verb-initial languages (i.e. with the verb appearing before subject and object in the clause) have prepositions, and verb-final languages (i.e. with the verb appearing after subject and object in the clause) postpositions.3 In other words, what seem to be two logically independent properties (i.e. adpositions and basic word order) correlate with each other to the point of statistical significance (for further discussion, see Chapter ).

Imagine that linguistic typologists do not venture beyond the mere classification of languages into the two adpositional types—that is, beyond the level of an incidental by-product. They will not be able to discover the correlations in question, let alone find themselves in a position to pose a far more intriguing, albeit challenging, question: why do these correlations exist in the first place? That is, what is it that makes verb-initial and verb-final languages opt for prepositions and postpositions, respectively? Other related questions may include: what is so special about the position of the verb that it has a bearing on the choice between prepositions and postpositions? Is there something shared by the position of the verb and the type of adposition that it correlates with? Asking questions such as these is important because they may lead to an understanding of the nature of language. Furthermore, such an understanding may provide useful insights into the way the human mind shapes language or, more generally, how the human mind works (see Chapter ).
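The reasoning just described, predicting the adposition type from the verb position and then checking a sample for counterexamples, can be sketched as follows. The `find_counterexamples` helper and the sample are invented for illustration: most entries have well-established classifications, but one language is deliberately made up to show how an exception to the implicational claim would surface.

```python
# Testing an implicational claim: "verb-initial implies prepositions;
# verb-final implies postpositions".
PREDICTION = {"verb-initial": "preposition", "verb-final": "postposition"}

def find_counterexamples(sample):
    """Return languages whose adposition type contradicts the prediction
    made from their verb position (other verb positions are skipped)."""
    return [
        name
        for name, (verb_position, adposition) in sample.items()
        if verb_position in PREDICTION and PREDICTION[verb_position] != adposition
    ]

# Toy sample, not real survey data.
SAMPLE = {
    "Rapanui": ("verb-initial", "preposition"),
    "Turkmen": ("verb-final", "postposition"),
    "Korean": ("verb-final", "postposition"),
    "Hypothetical-L": ("verb-final", "preposition"),  # invented counterexample
}

print(find_counterexamples(SAMPLE))
# → ['Hypothetical-L']
```

A real typological study would, of course, work with a properly constructed sample (see Chapter 5) and would treat the claim statistically rather than as an exceptionless law.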

1.3 Linguistic typology: a short history

The prospect of being able to predict the presence of one property (e.g. the type of adposition) on the basis of the presence of another property (e.g. the position of the verb in the clause) is a very attractive one. First and foremost, this elevates linguistic typology to a different level of investigation: so-called implicational typology.

3 Verb-initial and verb-final order have subsequently been generalized to VO and OV order, respectively. Thus, VO order prefers prepositions and OV order postpositions.

Even better will be the




presence of one property (X) implying the presence of multiple properties (Y, Z, etc.). For instance, there may be properties other than the use of prepositions or postpositions that verb-initial or verb-final order correlates with. Indeed, verb-initial order is also known to prefer Noun–Genitive order (e.g. the residence of the Prime Minister) to Genitive–Noun order (e.g. the Prime Minister's residence), whereas verb-final order also has a strong tendency to co-occur with Genitive–Noun order (for further discussion, see Chapter ). Generally speaking, it is preferable to predict as much as possible on the basis of as little as possible. This situation is akin to generating multiple units of energy (e.g. electricity) by using one unit of energy (e.g. biofuel). One cannot fail to see an economic dimension at work here.

The idea of using the presence of one property to draw inferences about that of other properties is not something that has recently come to the attention of linguistic typologists. It has actually been around for a very long time. Over a hundred years ago, the German scholar Georg von der Gabelentz (–) was so excited about this idea that he expressed his thinking somewhat overenthusiastically:

Aber welcher Gewinn wäre es auch, wenn wir einer Sprache auf den Kopf zusagen dürften: Du hast das und das Einzelmerkmal, folglich hast du die und die weiteren Eigenschaften und den und den Gesammtcharakter!—wenn wir, wie es kühne Botaniker wohl versucht haben, aus dem Lindenblatte den Lindenbaum construiren könnten. (Gabelentz : )

[But what an achievement would it be were we to be able to confront a language and say to it: 'you have such and such a specific property and hence also such and such further properties and such and such an overall character'—were we able, as daring botanists have indeed tried, to construct the entire lime tree from its leaf. (translation by Shibatani and Bynon b: )]

This statement, often cited in textbooks on linguistic typology, is probably the earliest articulation of linguistic typology as it is currently understood and practised. Indeed, '[i]t would be difficult to formulate the research programme of linguistic typology more succinctly' than Gabelentz did (Plank : ). Moreover, it was Gabelentz (: ) who coined the term 'typology' when he continued to write on the same page: 'Dürfte man ein ungeborenes Kind taufen, ich würde den Namen Typologie wählen' [If one were permitted to christen an unborn child, I would choose the name typology]. The term 'typology', usually in conjunction with the modifying expression 'linguistic', is now used to refer to the subject matter of the present book. Though the term


'typology' began its life with Gabelentz's coinage at the turn of the twentieth century, linguistic typology itself has a much longer history, dating back to the eighteenth or even the seventeenth century. Needless to say, it is not possible to say exactly when linguistic typology began as a scholarly approach to the study of language. Like other scholarly approaches or scientific disciplines, linguistic typology underwent what Ramat (: ) aptly calls 'the incubation phase', during which scholars pondered over problems or issues in general terms without realizing that what they were thinking or writing about would eventually contribute to the development of an innovative way of investigating their objects of inquiry.

One early example from this incubation phase of linguistic typology is the French abbot Gabriel Girard, who proposed a distinction between 'analogous' and 'transpositive' languages in the mid-eighteenth century: analogous languages have what we now call subject–verb–object order, while transpositive languages have different or even free word order. Moreover, analogous languages are claimed to mirror the 'natural' order of the thought process (the agent [= subject] exists, (s)he then does something [= verb], and, as a consequence, the patient [= object] is affected by the agent's action). By contrast, transpositive languages do not reflect the 'natural' order of the thought process, displaying different or altered word orders, that is, word orders other than subject–verb–object. Furthermore, analogous languages generally lack inflected forms, while transpositive languages are rich in inflected forms. Thus, a correlation can be drawn between these two typological properties: clausal word order and inflectional morphology. This correlation makes sense because in transpositive languages inflectional morphology encodes the distinction between subject and object (cf. head- and dependent-marking), thereby allowing the word order to deviate from the 'natural' order of the thought process. By contrast, the lack of inflectional morphology in analogous languages makes it important that the word order in these languages reflect the 'natural' order of the thought process.

While it was more speculative than empirical, Girard's study can perhaps be regarded as one of the earliest instances of implicational typology, since the presence of one property was utilized to draw inferences about the presence of another property. Put differently, Girard's work was more than a classification of languages because of its attempt to explore a structural principle 'that was capable of deeply characterizing languages from the point of view of' the nature of human language (Ramat : ). For this reason, Ramat


(: ) goes so far as to say that the French abbot and his immediate followers can be considered the true founders of linguistic typology, although Graffi (: ) is of the view that Gabelentz should be regarded as the originator of linguistic typology, not least for his coinage of the term 'typology' and his succinct description of the discipline. The reader is referred to Ramat () for further examples of the incubation phase of linguistic typology in the seventeenth and eighteenth centuries.

Regardless of who was its founder, linguistic typology has been shaped over the centuries by many scholars, some of whom lived before Gabelentz's christening of the discipline. In common with other disciplines, the manner in which linguistic typology was conceptualized and developed was influenced by the intellectual milieu of the particular historical periods in which it found itself. In the seventeenth century, the dominant intellectual movement was that of Rationalism (also known as the Enlightenment). The main driver of this intellectual movement was reason, instead of faith or tradition. Recall that the abbot Girard's typology was based on the natural order of the thought process. This order, in turn, was claimed, through logical reasoning (or rationalism), to be the human mind's way of thinking (or the natural flow of thought, as it were). Needless to say, this was highly speculative, since it was not backed by empirical evidence; rationalism alone would suffice for scholars of this intellectual background. Note that the focus of linguistic typology in this historical period was on the human mind's natural (read: universal) way of thinking (i.e. unity), not so much on how different languages might or might not reflect the human mind's way of thinking (i.e. diversity).
In the eighteenth and nineteenth centuries, the dominant intellectual milieu, largely in reaction to Rationalism, changed to Romanticism: emphasis was placed on human emotion or experience instead of reason. Not unexpectedly, language was also regarded as a human experience. For instance, speakers of different languages may have different experiences, which, in turn, may explain why they speak different languages in the first place. Scholars of this intellectual orientation went so far as to believe that language possessed an 'inner form'. The inner form, in turn, was thought to be a manifestation of the spirit of the people (Volksgeist) who spoke the language (Greenberg : ch. ). In the words of Wilhelm von Humboldt (Finck ; Lehmann's (c: ) translation), '[t]he characteristic intellectual features and


the linguistic structure of a people stand in such intimacy of fusion with each other that if the one were presented the other would have to be completely derivable from it'. To wit, 'each language [was] a distinct revelation of the spirit (Geist)' (Greenberg : ). The inner form was also assumed to be reflected in 'variation in grammatical mechanisms employed in relating lexical concepts to each other [or relational meaning]' (Shibatani and Bynon b: ). This point of view led to the emergence of August von Schlegel's morphological typology, in which three basic strategies in the encoding of relational meaning were recognized: inflectional, agglutinative, and isolating—Wilhelm von Humboldt later added a fourth, incorporating, to Schlegel's tripartite classification.4

In inflectional languages, a single morpheme bears more than one meaning or represents more than one grammatical category. In agglutinative languages, a word may consist of more than one morpheme and readily lends itself to conventional morpheme-by-morpheme analysis. In isolating languages, there is an equivalence relationship between word and morpheme, with the effect that a morpheme can be taken to be a word or vice versa. In incorporating languages, the verb and its argument(s) may readily combine to create compound words. The unit of analysis in the morphological typology was undoubtedly the word, the structure of which 'was seized upon as in some sense central to the attempt to characterize the language as a whole' (Greenberg : ), so that 'the description of the entire grammatical system [could] be annexed to an exact description of the structure of the word in every language' (Lewy : ).

Unlike the focus of Rationalism, that of Romanticism was laid squarely on linguistic diversity, and this interest in linguistic diversity was heightened by the discovery of 'exotic' languages outside the Old World through European imperial expansion (e.g. missionaries and traders).
4 Comrie (: ), on the other hand, adopts 'fusional' in lieu of 'inflectional' because 'both [agglutinative] and fusional languages, as opposed to isolating languages, have inflection, and it is . . . misleading to use a term based on (in)flection to refer to one only of these two types'.

Unfortunately, the morphological typology came to be interpreted in highly subjective, evaluative terms, with the effect that inflectional and isolating languages were regarded as the most and the least developed languages on an evolutionary scale, respectively. (This value-laden view relates to the development of so-called ethnopsychology




(Völkerpsychologie), which emphasized the relationship between language and thought.)

The nineteenth century witnessed the emergence of a profound intellectual paradigm, namely Darwinism. As is well known, this is a theory of biological evolution, proposing that all species of organisms emerge and develop through the natural selection of inherited variations that enhance individuals' ability to compete, survive, and reproduce. This theory had an enormous impact on nineteenth-century scholars' view of language, as Bopp (: ) avers:

Languages must be regarded as organic bodies [organische Naturkörper], formed in accordance with definite laws; bearing within themselves an internal principle of life, they develop and they gradually die out, after . . . they discard, mutilate or misuse . . . components or forms which were originally significant but which have gradually become relatively superficial appendages. (as translated by Sampson : )

Under this Darwinian view, languages behave like biological species. (Try rereading Bopp's remarks with 'languages' replaced by the names of animals or plants.) Thus, just as biological species do, languages emerge, develop into different varieties, compete with other varieties or languages, and cease to exist. This apparent similarity was so remarkable that the linguist's language families, languages, dialects, and idiolects were thought to correspond to the biologist's genera, species, varieties, and individuals, respectively (Sampson : ). This evolutionary view of languages stood in stark contrast with the preceding Romanticist view, which was more humanities-based than scientific in that it regarded language as a subjective human experience, not as an entity to be described objectively as part of the natural world (Sampson : ).

The Darwinian view of languages gave rise to a historical approach to the study of languages; not unexpectedly, the focus of this approach was on the historical development or evolution of languages and the genealogical relationships among languages. In point of fact, this approach was strongly upheld at the time as the only natural one for the study of language, and it survived into the early twentieth century, as when Meillet () went so far as to claim that 'the only true and useful classification [of languages] is genetic [i.e. genealogical]' (Greenberg : ).

The historical approach was also augmented by mechanistic physics, another important scientific paradigm of the nineteenth century, albeit not as influential on the study of language as Darwinism. In


mechanistic physics, all natural phenomena could be understood by way of simple, deterministic laws of force and motion, 'so that all future states of the world could in principle be inferred from a complete knowledge of its present state' (Sampson : ). (This is what Bopp had in mind when he mentioned 'definite laws' in the remarks cited above.) When extended to the study of languages, this entails that the history of a language can also be described by simple, deterministic laws of sound change.

The scholars who fully embraced the two paradigms in their study of languages were the so-called Neogrammarians (Junggrammatiker) of the latter half of the nineteenth century. They held the view that it was possible to 'found a genuine science of laws [read: a scientific theory of language] based on rigorous methods and the discovery of sound law rested on historical comparison' (Greenberg : ). Their primary objective was to discover sound laws that could account for the historical development of languages from their ancestors and for their genealogical relationships. Thus, it comes as no surprise that unrelated languages fell outside the purview of the Neogrammarians' research by default, since the lack of genealogical relatedness left nothing for sound laws to take care of in the first place. Indeed, '[these scholars] saw no point to the comparison of unrelated languages' (Greenberg : ); linguistic diversity, one of the primary pursuits of linguistic typology, was not on their research agenda. Needless to say, the exclusion of unrelated languages would considerably underrepresent the structural diversity of the world's languages. Unfortunately, it was also widely—albeit mistakenly—assumed that typologically related languages were genealogically related as well.
The dominant typological classification of this period was none other than the morphological typology mentioned earlier in relation to Romanticism; the morphological typology was, in fact, almost synonymous with linguistic typology during this period, so much so that anyone questioning or denying the connection between genealogical and typological relatedness was regarded as '"heretical" (Ketzerei) at the time' (Greenberg : ). Thus, linguistic typology was claimed to be of little or no use, because it could be subsumed, as it were, under the historical approach. Moreover, linguistic typology was not thought to be scientific enough, as it could not reduce the nature of language to principles akin to the Neogrammarian sound laws. Linguistic typology thus came to be largely ignored in the latter half of the nineteenth century, falling by the wayside into near oblivion.


The first half of the twentieth century witnessed the rise in linguistics of structuralism (Saussure ), which brought about two major changes in the study of language. First, the focus shifted from diachrony, i.e. linguistic change over time, to synchrony, i.e. the state of a language at a specific moment in time. Second, as a consequence of the first change, the historical approach, dominant in the latter half of the nineteenth century, found itself on the wane (for a useful discussion, see Sampson : –).5 These changes notwithstanding, the indifference to linguistic typology continued well into the first half of the twentieth century.

Nonetheless, a small number of linguists in Europe as well as in the US maintained their interest in linguistic typology—in particular those associated with the Prague School in Europe.6 The European typologists were focused on what Greenberg (: –) calls the generalizing goal of linguistic typology, that is, 'the discovery of law-like generalizations in languages by drawing bounds to the concept "possible human language"'. Because of their anthropological—one may say, even Humboldtian—orientation, the few American typologists (especially Edward Sapir : ch. ), in contrast, were interested in discovering the structural characteristics of individual languages, or what Greenberg (: ) refers to as the individualizing goal of linguistic typology. The reader may recall from the earlier discussion that the generalizing goal is somewhat akin to what seventeenth-century Rationalist philosophers of language had in mind (i.e. similarity), while the individualizing goal can be related to the Romanticist scholars' emphasis on individual languages as revelations of the spirit of their speakers (i.e. diversity).

While the European typologists were well aware—at least far more so than the American typologists—of the connection between these two goals, especially in the form of implicational typology, it was not until the s that serious attempts began to be made in linguistic typology to bring to the fore the linking of these two goals.

5 Greenberg (: ) reports that the first occurrence of the word 'typology' in the linguistics literature was in the theses presented by the Prague linguists to the First Congress of Slavic Philologists held in .

6 The majority of American structural linguists (e.g. Leonard Bloomfield) were not interested in linguistic typology because of its previous association with ethnopsychology (i.e. language in relation to thought), which, to their minds, was not suitable for empirical study. They were instead in favour of behaviouristic psychology, which deals with observable behaviour, not with the unobservable in the mind.

The catalyst in this regard came when the Second World War forced some European linguists,




including the Russian linguist Roman Jakobson (–), to flee or migrate to the US, where they (re)introduced linguistic typology, in particular implicational typology, to American linguists. However, linguistic typology went largely ignored, as it had been in Europe in the preceding century, until Joseph Greenberg (b) used the concept of implicational typology to investigate word order correlations, as briefly illustrated earlier in this chapter, in a relatively large number of languages, thereby elevating implicational typology in particular, and linguistic typology in general, to new heights. In other words, it was Greenberg who married up the two goals of linguistic typology more successfully than his predecessors and showed how to do implicational typology. In doing so, he 'opened up a whole field of [linguistic–typological] research' (Hawkins : ), revitalizing or resurrecting linguistic typology as a viable scientific approach to the study of language.

1.4 Concluding remarks

In this chapter, a brief description of how linguistic typology is carried out has been provided, with special reference to what kinds of research question are raised and how those questions are answered. Also provided has been a historical overview of linguistic typology, with special emphasis on how it has evolved conceptually over the centuries.

Study questions

1. Rapanui (an Oceanic language spoken on Easter Island) is a verb-initial language in that the verb appears before both the subject and the object, while Turkmen (a Turkic language spoken in Turkmenistan) is a verb-final language in that the verb follows both the subject and the object. In §., the strong correlation between the verb position and the use of prepositions or postpositions was alluded to. According to that correlation, one may be able to predict that Rapanui and Turkmen use prepositions and postpositions, respectively. Check these predictions against the available data on Rapanui and Turkmen (e.g. Du Feu ; Clark ).

2. X and Y invested $, and $, in business, respectively. After a year in business, X and Y made $, and $, out of their investments,




respectively. Whose business do you think is more profitable, and why? Now, imagine you have two possible situations in your implicational typology research. In situation , you can state with confidence that if a language has properties A, B, and C, it also has property D. In situation , you can state with an equal amount of confidence that if a language has property E, it also has properties D, F, and G. Which implicational statement do you think is more valuable, and why? How would you characterize the common 'principle' underlying profitability in business and value in implicational typology?

3. Morphological typology, the most popular way of classifying languages in the period of Romanticism, was also used to evaluate the world's languages in terms of evolution. For instance, isolating languages were regarded as the most primitive or the least developed on the evolutionary scale. Even then, however, it was clear that Chinese flew right in the face of the evolutionary interpretation of the morphological typology. Why do you think this was so?

4. Ramat (: –) points to the diachronic dynamics of language: languages shift from Type X to Type Y while retaining some features of Type X. What kinds of problem do you think this poses for the Gabelentzian attempt to construct the entire language ('the entire lime tree') from one of its structural properties ('its leaf')?

Further reading

Graffi, G. (). 'The Pioneers of Linguistic Typology: From Gabelentz to Greenberg', in J. J. Song (ed.), The Oxford Handbook of Linguistic Typology. Oxford: Oxford University Press, –.
Greenberg, J. H. (). Language Typology: A Historical and Analytic Overview. The Hague: Mouton.
Horne, K. M. (). Language Typology: th and th Century Views. Washington, DC: Georgetown University Press.
Ramat, P. (). 'Typological Comparison: Towards a Historical Perspective', in M. Shibatani and T. Bynon (eds.), Approaches to Language Typology. Oxford: Oxford University Press, –.
Ramat, P. (). 'The (Early) History of Linguistic Typology', in J. J. Song (ed.), The Oxford Handbook of Linguistic Typology. Oxford: Oxford University Press, –.
Song, J. J. (). 'Linguistic Typology', in K. Malmkjær (ed.), Routledge Linguistics Encyclopedia, rd edn. Abingdon: Routledge, –.




Part I Foundations: theory and method


2 Unity and diversity in the world’s languages

2.1 Introduction
2.2 The connection between diversity and unity
2.3 The Principle of Uniformitarianism: a methodological frame of reference
2.4 When and where similarities count
2.5 Types of language universals and universal preferences
2.6 Concluding remarks
2.1 Introduction

References were made to Rationalism and Romanticism in Chapter , where the history of linguistic typology was discussed. These two schools of thought dominated the intellectual landscape in early modern Europe, the former in the seventeenth century and the latter in the eighteenth century and the first half of the nineteenth century. Not unexpectedly, they both had an enormous impact, in their respective historical periods, on how language was conceptualized as well as on how the study of language was approached and conducted. From these intellectual traditions emerged two overarching, albeit seemingly contradictory, research themes in the study of language: unity and diversity of human language. It was the unity of language that Rationalism highlighted, with particular reference to the human mind’s natural (read: universal) way of thinking. Within Romanticism, in contrast, scholars drew attention to the diversity of human language, with language now conceptualized as a manifestation of the spirit of its speakers—that is, different languages as reflections of different experiences. As will be shown in this chapter, these two themes continue to play an important role in modern linguistic typology, from Jakobson and Greenberg to the present day. While they are equally important concepts in linguistic typology (and indeed in any study of language for that matter), the pendulum has over the decades swung between the two themes. It was Jakobson and Greenberg who perceived the connection in linguistic typology between diversity and unity, while recognizing the search for the unity of human language as a loftier research goal than the diversity of human language. For instance, Jakobson (: ) remarked: ‘Linguistic typology is an inference from the science of languages [read: diversity] for the science of language [read: unity].’ In other words, it is through the diversity of the world’s languages that one can arrive at the discovery of the unity of human language. Similarly, Greenberg (: –) explains how ‘law-like generalizations in languages’ (read: unity) can be discovered on the basis of typological classifications (read: diversity) ‘by drawing bounds to the concept “possible human language”’. So much so that a typological classification is reduced to ‘an incidental by-product [that] is not of great interest for its own sake’ (Greenberg a: ). Put differently, diversity is what ‘provides material for establishing [unity]’ (Sanders : ; Mallinson and Blake : ). In the early stage of modern linguistic typology, therefore, the focus was on the discovery of language universals, which place the bounds on what is possible in human language.

This ‘subjugation’, as it were, of diversity to unity in linguistic typology, however, has recently been rethought, with diversity now attracting an increasing amount of attention, especially in view of the realization that absolute language universals (i.e. all languages have X or there are no languages that lack X) are few and far between, and that it is, in point of fact, unrealistic to find them (e.g. Dryer ). So much so that linguistic typology should instead strive to discover what is probable, rather than what is possible, in human language. When the focus is placed on discovering what is possible vs impossible in language, the concept of language universal is based on the unity of human language (i.e. what is impossible in language will not be found in any known languages). In contrast, when the focus is shifted to what is probable in language, the concept of language universal must be attenuated to the effect that while X may be found in the majority of the world’s languages, there may also exist languages that possess Y or Z instead of X. In other words, attention is drawn to diversity, albeit with an eye to unity, since the majority of the world’s languages may still fall into the type of X to the effect that one can still speak of a universal preference for X. This fundamental change in perspective, in turn, highlights the structural variation in the world’s languages (e.g. not only X, but also Y and Z). More recently, the pendulum has swung even further in favour of diversity (e.g. Bickel ; Evans and Levinson ). In view of the unrealistic goal of discovering absolute language universals and the focus on ‘what is probable in human language’, it is now even more important to find out not only why the majority of the world’s languages have X (i.e. universal preferences), but also why a small number of languages may have Y or Z in opposition to the universally preferred X—or, as Bickel (: ) puts it, ‘what’s where why?’. This has motivated linguistic typologists to step outside the domain of language and explore historical, social, and cultural (i.e. non-linguistic) factors to account for the existence of languages that do not behave like the majority of the world’s languages. For instance, languages with Verb–Object order (e.g. English, as in (a), where kissed is V and the man is O) have a very strong tendency to place a Relative clause (enclosed in brackets) after the head Noun (underlined) that it modifies (e.g. English, as in (b)), i.e. NRel order.

()  English (Germanic; Indo-European: UK)
    a. The woman kissed the man.
    b. The woman [who kissed the man] is my sister.

Until recently, indeed, it was widely believed or assumed that VO languages always had NRel order. Then it was discovered (Dryer , e; but cf. Mallinson and Blake : ) that there are a small number of VO languages with RelN order in mainland China and Taiwan (i.e. Mandarin Chinese, Bai, and Amis), as exemplified in ().

()  Chinese (Sinitic; Sino-Tibetan: China)
    a. tāmen  tōu    zìxíngchē
       3PL    steal  bicycle
       ‘They steal bicycles.’
    b. [wǒ   gěi   nǐ    de]    shū
       1SG   give  2SG   LINK   book
       ‘the book that I gave you’

In other words, what was believed to be an absolute language universal (i.e. VO & NRel) has turned out to be slightly less than an absolute language universal. Note, however, that there is still a clear universal preference or a near language universal (Dryer , e): VO languages almost always have NRel order—indeed, .% of  VO languages sampled in Dryer (e) have NRel order. Linguistic typologists have proposed that the uncommon combination of VO & RelN, as opposed to the global preponderance of VO & NRel, was due to Chinese having been influenced by OV languages with RelN order, located to the north of China (e.g. Mongolian and Tungus) and, subsequently, Chinese, now equipped with RelN order, influencing other VO languages that it came into contact with (Hashimoto ; Dryer ). Clearly, this structural similarity between Chinese and the neighbouring northern languages has its origins in historical, social, and/or cultural circumstances that brought them together and should thus be treated as contact-mediated.
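
The logic of checking such a near-universal against a language sample can be sketched in a few lines of code. The sample below is a toy stand-in for a real typological database, not actual survey data; only Mandarin’s VO & RelN combination is taken from the discussion above.

```python
# Sketch: testing the near-universal "VO order implies NRel order" against a
# toy sample. The language list and values are illustrative, not a database.
sample = {
    # language: (verb-object order, relative-clause order)
    "English":    ("VO", "NRel"),
    "Indonesian": ("VO", "NRel"),
    "Mandarin":   ("VO", "RelN"),   # the well-known exception discussed above
    "Japanese":   ("OV", "RelN"),
    "Turkish":    ("OV", "RelN"),
}

vo_langs = {lang: rel for lang, (order, rel) in sample.items() if order == "VO"}
conforming = [lang for lang, rel in vo_langs.items() if rel == "NRel"]
exceptions = [lang for lang, rel in vo_langs.items() if rel == "RelN"]

share = len(conforming) / len(vo_langs)
print(f"VO languages with NRel: {share:.0%}; exceptions: {exceptions}")
```

On this toy sample Mandarin surfaces as the lone VO exception; a real study would of course run the same tally over hundreds of sampled languages.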

2.2 The connection between diversity and unity

The area of linguistic typology dealing with diversity investigates the structural variation in the world’s languages (i.e. everything that is attested), whereas the area of linguistic typology concerned with unity focuses on the discovery of language universals (i.e. what is possible) and universal preferences (i.e. what is probable). Language universals impose constraints or limits on structural variation within human language, while universal preferences delineate the dominance of some structural types over others. Typological investigation, in contrast, is concerned with classifying languages into different structural types. Thus, ‘it may seem to the uninitiated something of a contradiction in terms to handle these apparently quite distinct areas of investigation together’ (Mallinson and Blake : ; Greenberg : ). But, as may be gleaned from the preceding discussion, the contradiction is more apparent than real. The search for the unity in human language, in fact, builds crucially on the structural diversity in human language. This is because in order to discover language universals or universal preferences, what linguistic typologists first need is typological classification to work on. With languages classified into different types, linguistic typologists may be able to discern patterns or regularities in the distribution of the types, for example, with some types being significantly more common than others, or with one (or more) of the logically possible types completely unattested or only marginally attested in the world’s languages. Put simply, ‘[i]n order to understand Language, it is essential to understand languages’ (Comrie : ). This close relationship between language universals and universal preferences on the one hand, and linguistic typology on the other can be demonstrated by the preponderance of subject-initial order in the world’s languages. According to Dryer (a), for instance, nearly .% of his , sample languages have subject-initial word order, i.e. Subject–Object–Verb or Subject–Verb–Object. If the world’s languages—or at least a significant portion of them—had not been surveyed in terms of all possible word orders (that is, not only subject-initial but also verb-initial and object-initial), this strong tendency would never have been brought to light in the first place. To put it differently, the typological classification of the world’s languages in terms of the six word orders—(i) Subject–Object–Verb, (ii) Subject–Verb–Object, (iii) Verb–Subject–Object, (iv) Verb–Object–Subject, (v) Object–Verb–Subject, and (vi) Object–Subject–Verb—is a sine qua non for the significant generalization to be made about human language, that is, the majority of the world’s languages have subject-initial word order. Imagine the prospect of discovering the universal preference in question by examining only one language or even a handful of languages! This may be too extreme an example but the point could not be made more strongly. Further demonstration of the fruitful interaction between unity and diversity comes from another strong linguistic preference (albeit of an implicational nature): the presence of verb-initial order implying that of prepositions (see §.).
This implicational statement also entails what is not possible in human language, namely the absence of verb-initial languages with postpositions (but see §.). Thus, the implicational statement not only sanctions verb-initial languages with prepositions as possible human languages but also rules out verb-initial languages with postpositions as impossible human languages. Moreover, by making no negative claims about non-verb-initial languages, the implicational statement refers indirectly to the other two logical possibilities, namely non-verb-initial languages either with prepositions or with postpositions. In order to arrive at the actual formulation of this implicational statement, however, one must first ascertain which of the four logical possibilities is attested or unattested in the world’s languages. That can be achieved only on the basis of an initial typological classification of the languages of the world in terms of basic word order as well as the distribution of prepositions and postpositions. To wit, the search for the unity in human language is conducted on the basis of the structural diversity in human language. It is not possible to carry out the former without the latter. The interaction between unity and diversity also highlights one of the virtues of formulating language universals or universal preferences on the basis of typological classification. Typological classification naturally calls for data from a wide range of languages (see Chapter  on how languages are selected or sampled for this purpose). Only by working with such a wide range of languages is one able to minimize the risk of elevating some of the least common structural properties to the status of language universals. This risk is more real than some linguists may be willing to admit because, when deciding to work with a small number of familiar or well-known languages (for whatever reasons), one is likely to deal with structural properties which may not in any real sense be representative of the world’s languages.1 For instance, use of relative pronouns is very common in European languages but it has turned out to be a cross-linguistically infrequent phenomenon (Comrie : ; Comrie ; Comrie and Kuteva c). Therefore, universal claims about, or universal theories of, relative clauses which are put forth on the basis of these European languages alone should immediately be suspect.
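
The way an implicational statement carves up the logical space can be made concrete with a short sketch. It treats ‘if verb-initial, then prepositional’ as a material implication over the four combinations of the two properties; this is a deliberate simplification of the typological reasoning, for illustration only.

```python
from itertools import product

# An implicational universal "if verb-initial, then prepositional" rules out
# exactly one of the four logically possible combinations of two properties.
status = {}
for verb_initial, prepositional in product([True, False], repeat=2):
    permitted = (not verb_initial) or prepositional  # material implication
    status[(verb_initial, prepositional)] = permitted
    word_order = "verb-initial" if verb_initial else "non-verb-initial"
    adposition = "prepositions" if prepositional else "postpositions"
    print(f"{word_order} + {adposition}: {'possible' if permitted else 'ruled out'}")
```

Three of the four combinations come out as possible; only verb-initial languages with postpositions are excluded, mirroring the prose above.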

2.3 The Principle of Uniformitarianism: a methodological frame of reference

Strictly speaking, to claim that something is a language universal or a universal preference is patently premature, if not meretricious, when one thinks about the current level of documentation among the world’s languages. There are reported to be nearly , languages in the world (Ethnologue, ), but unfortunately, ‘[l]ess than % of these languages have decent descriptions (full grammars and dictionaries)’ (Evans and Levinson : ). (Incidentally, this highlights the urgent need to document languages before they disappear into oblivion (e.g. Nettle and Romaine ; Grenoble and Whaley ; Evans ).) What this means is that language universals or universal preferences proposed so far are all based on less than % of the world’s languages, even if we assume, for the sake of argument, that all documented languages have been taken into consideration for every language universal or universal preference that has ever been proposed. Over % of the world’s languages, which await (proper) documentation, remain to be brought into the fold of linguistic research. Put simply, proposed language universals or universal preferences should never be understood to be established facts about languages but should instead be taken to be nothing more than observations or hypotheses based on what has been documented about the world’s languages. In other words, proposed language universals or universal preferences are in need of verification against further data as they become available. A case in point is one of the language universals proposed by Greenberg (b): ‘Languages with dominant VSO order are always prepositional’ (emphasis added). Subsequent research (e.g. Dryer : ), however, has revealed that this is not entirely correct. There are languages, albeit in small numbers, that have both VSO order and postpositions, e.g. Cora (Corachol; Uto-Aztecan: Mexico), Ewe (Kwa; Niger-Congo: Togo and Ghana), Finnish (Finnic; Uralic: Finland), Guajajara (Tupi-Guaraní; Tupian: Brazil), Kashmiri (Indic; Indo-European: Pakistan and India), Northern Tepehuan (Tepiman; Uto-Aztecan: Mexico), and Waray (Warayic; Gunwinyguan: Australia).

1 For instance, Chomsky (: ) avers: ‘I have not hesitated to propose a general principle of linguistic structure on the basis of observation of a single language . . . Assuming that the genetically determined language faculty is a common human possession, we may conclude that a principle of language is universal if we are led to postulate it as a “precondition” for the acquisition of a single language.’

When Greenberg proposed this universal—and his other universals for that matter—in the early s, he never intended them to be facts about the world’s languages, but rather generalizations based on the data available to him at the time. Put differently, Greenberg’s language universals were merely hypotheses or ‘summar[ies] of data observed to date’ (Dryer : ). In this respect—this is an important thing to remember—linguistic investigation is no different from any other scientific investigation. Every scientific statement, whether in physics or genetics, is simply a summary of data observed to date, or a generalization or a hypothesis based on that summary. More to the point, no scientific investigation can examine every instance of what it is investigating. Typically, a small sample of the population under investigation is drawn for purposes of formulating a generalization or a hypothesis, which will continue to be tested against further data as they are collected (see §. for sampling in linguistic typology). This important point should not be lost sight of when language universals or universal preferences are spoken of. To wit, the validity of language universals and linguistic preferences can only be strengthened or weakened by means of further empirical testing. In order to arrive at a proper understanding of the nature of human language, it is not sufficient to study only living languages—even if it is possible to include all living languages in one’s investigation. Ideally, linguistic typologists must study extinct languages as well. Indeed, linguistic typologists often study not only currently spoken languages but also extinct languages. This may perhaps strike one as odd, if not surprising, because one may expect typological classification to be concerned only with the currently spoken languages of the world. One may be inclined to think that language universals represent constraints or limits on structural variation within human language as it is, not as it was (or for that matter as it will be). Why do linguistic typologists also include extinct languages in their investigation? The assumption underlying this inclusion is what is generally known as the Principle of Uniformitarianism in linguistics (see Lass (: –, : –) for discussion thereof in the context of historical linguistics).2 Basically, what it means is that human languages of the past—or of the future for that matter—are not essentially different in qualitative terms from those of the present. This principle claims, therefore, that the fundamental properties of human language have remained invariable over time.
There are believed to be no differences in evolutionary terms between languages of the past—as far back as one can go and claim the existence of human languages—and those spoken today. In other words, human language of today is at the same level of evolution as that of, say, , years ago. Thus, in order to come to grips with the full structural diversity of human language, researchers must investigate not only living languages but also extinct ones. Imagine a possible situation in which particular structural types unattested in living languages happen to have existed only in languages that are no longer spoken. The Principle of Uniformitarianism is, of course, something that has never been subjected to empirical verification and cannot be put to the test for obvious reasons; one simply cannot go back in time and examine languages spoken, say, , years ago to see whether or not they were qualitatively the same as those of today. Nor is there any logical reason why the principle should be correct. Nonetheless it plays an important role in linguistic typology (and equally in historical linguistics). The primary aim of linguistic typology is to discover universal properties or preferences of human language. If human languages were spoken , years ago, then, these languages must also be included in any typological study, which is utterly impossible. To get an idea of the linguistic diversity in the past, one can refer to Evans and Levinson (: ), who suggest that  years ago, before European colonization began, there were probably twice as many languages as there are now, and to Pagel (: ), who claims that over half a million languages have ever been spoken on this planet, if humans began talking , years ago and languages evolved at a rate of one per  years. In the absence of the Principle of Uniformitarianism, then, no typological analysis will be possible or, more accurately, complete simply because it is impossible to ‘recover’ all unrecorded, extinct languages from oblivion. With the Principle of Uniformitarianism in place, however, linguistic typologists can examine languages spoken today and, if and where possible, extinct but documented languages as well and can still make statements or generalizations about the nature of human language. Similar comments can also be made of languages of the future.

2 This principle was first introduced into the study of language by Neogrammarians from the natural science thesis of Hutton and Lyell. Karl Brugmann is quoted as saying (Collinge : ): ‘[t]he psychological and physiological nature of man as speaker must have been essentially identical at all epochs.’

Since it is expected that they will also be human languages, any typological study must in principle include them as well, which is out of the question. But the Principle of Uniformitarianism also works in the opposite direction of time from the present, thereby allowing linguistic typologists to extend to languages of the future whatever universal properties or preferences they may have discovered on the basis of currently available data. While the complete structural diversity of human language will be impossible to capture, what is true of today’s human language can be assumed to be true of yesterday’s and tomorrow’s human language. After all, under the Principle of Uniformitarianism the nature of human language is assumed not to change qualitatively over time.

There are also practical reasons why the Principle of Uniformitarianism is adhered to in linguistic typology. Without this principle, languages would have to be seen as evolving constantly as time passes by. But if languages were evolving through time, and were conceived of as being at different stages of linguistic evolution, grammatical descriptions that linguistic typologists employ for their research would be completely useless for typological research because they invariably—and inevitably—record languages at different points in time or at different stages of evolution, with some grammars being descriptions of languages of more than a few hundred years ago, and others being far more recent ones. The absence of the Principle of Uniformitarianism would also lead to the view—which incidentally is generally not accepted in linguistics—that some languages should be at a more advanced stage of evolution than others because one would not be able to claim that all human languages have evolved to the same level (see §. on an evolutionary interpretation of the morphological typology in the nineteenth century).
If languages were at different stages of linguistic evolution, it would be impossible to engage in any typological research since one would (arbitrarily) have to target one particular stage of evolution that all human languages have reached at one time or another, and to study all grammatical descriptions of the world’s languages at that stage of evolution (assuming, of course, that it is possible to select such a stage, and also to have access to all grammatical descriptions at once).3 To sum up, the Principle of Uniformitarianism provides a kind of frame of reference within which typological research can be carried out productively without being hindered unduly by this intractable methodological issue, which does not necessarily have to be resolved—and most probably never will be—at the current stage of development of linguistic typology as an empirical approach to the study of language.4

3 A related question will be: which stage of evolution in human language should be chosen as the ‘target’ stage?
4 One may choose to use the descriptive label ‘an escape hatch’, rather than ‘a frame of reference’.

2.4 When and where similarities count

When studying the structural diversity of the world’s languages with a view to uncovering the unity of human language, linguistic typologists must take care to separate language universals or universal preferences from structural similarities brought about by non-linguistic factors, e.g. historical accidents. Imagine a hypothetical world where there are , languages (loosely based on Dryer : –). In this world, there is one large language family of  languages, with the remaining  languages evenly distributed among ten small language families (i.e. ten languages in each of the small language families). All the  languages in the large family have type X, and the languages of the other ten small families all have type Y. Now, is it safe to conclude from this distribution that there is a universal preference for X over Y, since X is attested in  languages (%) while Y is attested in only  languages (%)? The answer is no, because X is found in only one language family and Y in the remaining ten families: Y is attested in far more language families than X is. The fact that it could have been the other way round (i.e. X in ten small language families and Y in one large family) suggests that the distribution of X and Y in the hypothetical world’s languages may well be a historical accident, having nothing to do with what is or is not linguistically preferred. For further illustration, imagine that in the distant past, the large language family used to have ten languages, with the remaining  languages evenly distributed among the other ten language families (i.e.  languages per family). Through their superior technology, however, the size of the former language family subsequently increased exponentially at the expense of the latter language families. (Technologically advanced people have better access to natural resources, increasing not only their chances of survival but also the size of their territory; for a recent example, one can point to the spread of Europeans and their languages, e.g. English and Spanish, to other parts of the world.)
Thus, the presence of X in the majority of the hypothetical world’s languages (i.e. the large language family) is caused by the technological superiority of their speakers, and certainly not by the universal linguistic preference for X over Y. From this, it is glaringly obvious that language universals or universal preferences cannot be established on the basis of structural similarities brought about by such historical accidents. This is why it is decided that X is not the universal preference in spite of the fact that it is attested in % of the hypothetical world’s languages. The reader may wonder, at this point, whether the preponderance of subject-initial word order in the real world’s languages, alluded to earlier as a universal preference, may also be the outcome of a similar historical accident, namely the economic, social, and political domination of speakers of subject-initial languages over those of non-subject-initial languages. This is a valid point. Indeed, linguistic typologists have devised methods of ascertaining whether the global dominance of subject-initial word order (or other structural properties for that matter) is a genuine universal preference or the outcome of a historical accident (for detailed discussion, see §.). There are three major ways languages come to have similar properties: (a) shared genealogical origin, (b) language contact, and (c) language universals or universal preferences. Linguistic typology is concerned primarily with (c), while not neglecting to pay attention to (a) and (b), especially when also explaining ‘exceptions’ to language universals or universal preferences. Thus, when and where unity is the focus of investigation, it is (c) that counts, and (a) and (b) have to be taken out of consideration. Needless to say, however, if the focus of investigation is ‘what’s where why’ à la Bickel (: ), (a) and (b) should also be brought into the picture. The make-believe example given above illustrates shared genealogical origin. The fact that there are  languages in the large language family with property X is due to the fact that these languages all derive historically from one ancestral language. In other words, languages may have similar structural properties because they have inherited them from their common ancestors. Languages may not begin their lives as independent languages but as dialects of single languages, e.g. French, Italian, Portuguese, Romanian, and Spanish, derived from Latin. In linguistics, this genealogical relationship is typically captured by analogy with a biological relationship, that is, parent and daughter languages.
For instance, Latin is the parent language of French, Italian, Portuguese, Romanian, and Spanish, or conversely, French, Italian, Portuguese, Romanian and Spanish are the daughter languages of Latin. Further back in time, Latin was part of yet another ancestral language, and so on. Using what is known as the Comparative Method (e.g. Campbell ), historical linguists have classified the world’s languages into a number of language families (although they may disagree about some of them). For instance, English, French, Greek, Irish, Russian, Hindi, and Urdu all belong to one and the same language family called Indo-European; these modern languages are descendants of a single language spoken some , years ago, with its reconstructed name, Proto-Indo-European. 
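
The hypothetical world above shows why raw language counts can mislead when many of the counted languages descend from one ancestor. A minimal sketch of the two ways of counting, with purely invented family sizes (one large family of 600 languages with property X, ten small families of 40 languages each with property Y):

```python
# Counting a property by languages vs by families. All figures are invented
# for illustration; they echo the hypothetical world discussed above.
families = {"Large": ("X", 600)}
families.update({f"Small{i}": ("Y", 40) for i in range(1, 11)})

lang_counts, fam_counts = {}, {}
for fam, (prop, n_langs) in families.items():
    lang_counts[prop] = lang_counts.get(prop, 0) + n_langs
    fam_counts[prop] = fam_counts.get(prop, 0) + 1

print("by language:", lang_counts)  # X looks dominant: 600 vs 400
print("by family:  ", fam_counts)   # but Y occurs in ten lineages, X in one
```

Counting languages makes X look preferred, while counting families reverses the picture; this is the intuition behind the genealogically stratified sampling methods taken up later in the book.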

The case of shared properties through language contact has already been alluded to when Chinese and its neighbouring languages were discussed earlier with respect to the universal preference for VO and NRel order. Chinese belongs to the Sino-Tibetan family, whereas the neighbouring northern languages belong to the Mongolic or Tungusic branch of the Altaic family. However, Chinese and these northern languages share structural properties including RelN order, in spite of the difference in their basic word order (for detailed discussion, see Dryer : –). In the case of Chinese, a VO language, its RelN order goes against the grain of the universal preference, i.e. VO & NRel, as it were. It has already been explained that Chinese adopted RelN order from the northern OV languages in preference to NRel order. This illustrates how languages of different genealogical origins may end up with common structural properties through contact, in opposition to language universals or universal preferences. Another example of this kind comes from the Asia Minor Greek dialects, spoken in the regions of Sílli, Cappadocia, and Phárasa. These Asia Minor Greek dialects were heavily influenced by Turkish through prolonged contact. Greek is an Indo-European language, while Turkish is a non-IndoEuropean language or a Turkic language. One of the Turkish structural features imported into the Greek dialects in question is RelN order. In Greek (), a VO language, the relative clause follows the head noun (i.e. NRel order), as expected of VO languages (i.e. the universal preference for VO & NRel), but in Sílli dialects (), for instance, the converse is frequently the case, that is RelN, just as in Turkish (), an OV language. ()

Greek (Hellenic; Indo-European: Greece)
tò peðì pû to îða
the boy COMP it saw-I
'the boy who(m) I saw'

()

Sílli Greek (Hellenic; Indo-European: Turkey)
ki̯át íra perí
COMP saw-I boy
'the boy who(m) I saw'

()

Turkish (Turkic; Altaic: Turkey)
gör-düg-üm oglan
see-NOMN-.SG.POSS boy (son)
'the boy who(m) I saw'


UNITY AND DIVERSITY IN THE WORLD’S LANGUAGES

Such contact-mediated structural similarities should not come into the picture when language universals or universal preferences are identified or delineated. If the political situation in the region in the fifteenth to early twentieth century had been different, i.e. Greek speakers' domination over Turkish speakers instead of the converse, it could easily have been Turkish 'borrowing' NRel order from Greek. Languages may come to share structural properties through contact because speakers of languages may adopt them from other languages that they come into contact with, not necessarily because those properties are inherently preferred in human languages. Even if borrowed properties are universal preferences, the structural borrowing is brought about first and foremost through contact between languages. To wit, contact-mediated similarities are also due to historical accident, just as similarities through shared genealogical origins are.

This leaves the third major way languages come to share structural properties: language universals or universal preferences. The world's languages, or at least the majority of them, may have such and such structural properties because these are due to the very nature of human language: all other things being equal, languages must have such and such properties because they are what makes human language what it is. For instance, there is a clear, strong tendency for VO languages to have NRel order. This correlation is attested in virtually all VO languages of the world (with a very small number of exceptions, which can be explained by reference to language contact, as has already been noted). This near-universal correlation must then have to do with the nature of human language: if a language opts for VO order, it must also opt for NRel order. The discovery of the nature of human language is the primary goal of linguistic typology and, in fact, of any linguistic theory.
The primary research goal is to uncover the nature of human language, with an eye to accounting for cases where the nature of human language may be ‘suspended’ or ‘put on hold’ because of overriding historical, social, or cultural circumstances.

2.5 Types of language universals and universal preferences

Properties such as the preponderance of subject-initial languages in the world's languages are often referred to as language universals in the literature. Strictly speaking, however, language universals must be true of all languages. Under this strict definition of the term, the property of subject-initial order does not qualify as a language universal, since it is only a tendency in human language, albeit a very strong one. In other words, only properties which all human languages have in common may be taken to be language universals. This is why the tendency for subject-initial order has been referred to in the present book as a universal preference, not as a language universal: a linguistic preference attested very commonly or widely in the world's languages.5 The correlation between verb-initial word order and prepositions is another universal preference in that there are only a small number of languages with verb-initial order and postpositions, with the majority of verb-initial languages being prepositional.

In the literature, language universals, interpreted strictly as exceptionless, are referred to as absolute language universals, whereas universal preferences, admitting of a small number of exceptions, are called non-absolute or statistical language universals. (Bear in mind that in this commonly used distinction, language universals are liberally interpreted as including not only absolute language universals but also universal preferences.) Absolute language universals are very hard to find. It is not the case that they do not exist. They certainly do. But they are not numerous and they tend to be 'banal' or 'trite' (Greenberg : , ). For instance, the fact that all languages have vowels has been proposed as an absolute language universal. This, however, is rather uninspiring. One may ask: all languages have vowels, so what? It does not seem to lead to any further interesting questions about human language, except for the question as to why all languages must have vowels.
More seriously, Evans and Levinson (: ) correctly point out that even this seemingly absolute language universal is not without exceptions (in this case, many) when sign languages are taken into account! Experience shows that what was proposed as a (possible) absolute language universal almost always turns out to have exceptions. The erstwhile absolute language universal involving VO and NRel order has already been mentioned as a similar case in point. Yet another example of this kind comes from Greenberg (: ): all languages mark the negative by adding some morpheme to a sentence. However, Evans and Levinson (: ) point to Classical Tamil (Southern Dravidian; Dravidian: India and Sri Lanka) as a counterexample, as this language marks the negative by deleting the tense morpheme present in the positive. One may argue that Classical Tamil is only one counterexample among the world's ,-odd languages and brush it aside. (At the same time, this shows that one exception is all it takes to turn an absolute language universal into a non-absolute statement.) However, one must realize that less than % of the world's languages have adequate descriptions. One cannot be sure whether there may be other yet-to-be-documented (or even yet-to-be-discovered) languages that behave exactly like Classical Tamil. More often than not, absolute language universals have been formulated on the basis of an even smaller number of languages. As more and more languages become documented and brought to the attention of researchers, new properties or strategies are very likely to show up, flying in the face of absolute language universals or universal preferences. Thus, it does not come as a surprise that absolute language universals are hard to come by, and virtually all absolute language universals claimed so far have turned out to be less than what they were initially thought to be. Moreover, one must bear in mind, as Evans and Levinson (: ) do, that 'the relevant test set is not the , odd languages we happen to have now, but the half million or so that have existed, not to mention those yet to come [into existence]'.

5 Nichols (: ) describes universal tendencies as properties or correlations favoured in languages independent of geography and genetic affiliation, and thus as universal preferences in the world's languages.
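The falsificationist logic at work here can be made concrete in a few lines of code. The sketch below tests the 'negation by addition' claim against a hypothetical mini-database; only the Classical Tamil entry reflects the discussion above, and the remaining entries are simplified assumptions for illustration:

```python
# A single counterexample is enough to demote an "absolute" universal
# to a non-absolute statement.
# Claim under test: "all languages mark negation by ADDING a morpheme".
# Toy database; only the Classical Tamil entry reflects the text above.
negation_by_addition = {
    "English": True,
    "Turkish": True,
    "Greek": True,
    "Classical Tamil": False,  # negation marked by DELETING the tense morpheme
}

counterexamples = [lang for lang, adds in negation_by_addition.items() if not adds]

if counterexamples:
    print(f"Non-absolute: exceptions found: {counterexamples}")
else:
    # Absence of exceptions in a sample never proves the universal;
    # it remains a hypothesis about undocumented languages too.
    print("No exceptions in this sample")
```

Note that the happy path ('no exceptions') still only licenses a hypothesis, for exactly the reason given above: the sample is never the full test set.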
In view of this reality, it is not surprising that the focus of linguistic typology has recently shifted from unity to diversity, as the clear message coming from the world's languages, as more and more of them become documented, is that there is always going to be a language somewhere that will throw a typological curve ball, as it were. Thus, some linguistic typologists (e.g. Bickel ; Evans and Levinson ) argue that more effort should instead go into documenting the structural diversity of the world's languages before advancing premature claims about the nature of human language. But at the same time, it must be borne in mind that making observations or hypotheses about human language on the basis of available data is legitimate business, not just in linguistic typology but also in other scientific disciplines.

The fact that there are hardly any absolute language universals that can stand the test of time does not detract from the fact that there is a substantial degree of unity in the world's languages. Various universal preferences capture that very unity. A small number of languages that deviate from this near unity may further reflect the structural diversity of the world's languages, and must be accounted for in whatever way possible. As has already been pointed out on more than one occasion, however, these 'deviations' tend to have social, cultural, and/or historical reasons behind them. These non-linguistic reasons enable linguistic typologists to understand why property X exists in language Y at a particular point in time, in opposition to the overwhelming global tendency. In other words, it is important to ask why and how the 'deviating' languages have arisen. Moreover, it is important to answer these questions because, in doing so, linguistic typologists will find themselves in a stronger position to strengthen the validity of universal preferences. When exceptions to universal preferences are addressed on their own terms, the conceptual as well as empirical strength of proposed universal preferences is by no means vitiated but in fact increased. This is because exceptions have valid reasons for being exceptions. Put differently, a small number of exceptions to universal preferences are not really counterexamples as such, but rather the outcome of non-linguistic factors 'interfering' with linguistic preferences (read: the unity of human language). In this respect, exceptions to universal preferences represent non-linguistic variables that may override linguistic preferences in highly specific historical, social, and cultural contexts.

As has been shown, linguistic statements about the nature of human language can be formulated using two parameters: (a) absolute vs non-absolute; and (b) implicational vs non-implicational. Absolute statements are exceptionless by definition.
An example of this type of universal is: all languages have ways to turn affirmative sentences into negative ones (e.g. James kicked the dog → James did not kick the dog). Non-absolute statements (also known as statistical universals or, as in this book, universal preferences) are not without exceptions, but the empirical strength of this type of statement far outweighs the number of exceptions that may exist. The stated preponderance of subject-initial order in the world's languages is a non-absolute statement. Various statistical methods are employed in order to determine whether or not a given tendency is statistically significant (see §.).

Implicational statements take the form of 'if p, then q'. The presence of one property (i.e. the implicans) implies that of another (i.e. the implicatum). An example of this type of statement has already been provided: the majority of verb-initial languages are found to be equipped with prepositions. This can be rewritten as: if a language is verb-initial, then it is almost always prepositional. By design, implicational statements are based on the interaction of more than one property. Thus, there may also be implicational statements involving more than two properties. One such example is Greenberg's (b) Universal : if a language has dominant SOV order and the genitive follows the governing noun, then the adjective likewise follows the noun. In this implicational statement, two properties are needed to predict a third. It is also possible for the implicatum to be more than one property. Again, Greenberg (b) offers an example of this kind: if some or all adverbs follow the adjective they modify, then the language is one in which the qualifying adjective follows the noun and the verb precedes its nominal object as the dominant order (Greenberg's Universal ). It is not difficult to see that, other things being equal, implicational statements that predict the presence of multiple properties on the basis of a single property are more highly valued than those that predict the presence of a single property on the basis of multiple properties. Put differently, it is preferable to predict as much as possible on the basis of as little as possible (Moravcsik : ). By this criterion of economy, Greenberg's Universal  is of more value than his Universal .

The predictive power of implicational statements is not confined solely to the properties to which they make explicit reference. Thus, given the implicational statement 'if a language is verb-initial, then it is almost always also prepositional', there are two other situations that fall out from that statement (in addition to the near impossibility of verb-initial languages with postpositions).
By making no claims about them, the statement in effect says something about non-verb-initial languages with either prepositions or postpositions, thereby recognizing these types of languages also as possible human languages. In other words, the implicational universal in question rules out only verb-initial languages with postpositions as impossible human languages (in reality, as nearly non-existent), that is, p & ~q (read: p and not q), contradicting the original statement of 'if p, then q'. What is referred to as a tetrachoric table is often used to indicate clearly which of the logically possible combinations of two (or more) properties is allowed or disallowed, as in ().

()
                     PREPOSITIONS    POSTPOSITIONS
  VERB-INITIAL           yes           almost no
  NON-VERB-INITIAL        —                —

The tetrachoric table in () shows that the combination of verb-initial order and postpositions is a dispreferred option in human language. Thus, the implicational statement effectively serves as a constraint on possible variation within human language. If an implicational statement is an absolute language universal, one of the slots in the tetrachoric table will be ‘no’, instead of ‘almost no’, as ‘if p, then q’ implies the non-existence of p & –q. Non-implicational statements, on the other hand, do not involve the predicting of property X on the basis of property Y. They involve only a single property. In other words, they do not appear in the form of ‘if p, then q’. The preponderance of subject-initial word order is such a nonimplicational statement. Note that this particular statement is not only non-implicational but also non-absolute, thereby illustrating that statements may cut across the distinction between the absolute/non-absolute, and implicational/non-implicational parameters. Thus, in addition to non-absolute non-implicational statements, there may also be (a) absolute implicational statements, (b) absolute non-implicational statements, and (c) non-absolute implicational statements. An example of (c) has already been provided: if a language is verb-initial, it is almost always also prepositional. Absolute language universals, for the reasons given above, are hard to come by, but a possible example of (b) comes from the fact that all languages have ways to convert affirmative sentences into negative ones, and a potential example of (a) may be: if the general rule is that the descriptive adjective follows the noun, then there may be a minority of adjectives which usually precede, but if the general rule is that descriptive adjectives precede the noun, then there are no exceptions (Greenberg b: ). 
For further examples of absolute language universals and universal preferences, the reader is referred to the Universals Archive at http://typo.uni-konstanz.de/archive/intro/. Finally, absolute statements are also referred to as exceptionless, and non-absolute ones as statistical tendencies or, as in this book, universal preferences. Implicational statements are also known as restricted or conditional, while non-implicational statements are known as unrestricted or unconditional. These four logical types of universal statements are summarized in Table 2.1.

Table 2.1 Four logical types of universal statements

                               Absolute              Non-absolute
                               (Exceptionless)       (Universal Preferences/
                                                     Statistical Tendencies)

Non-implicational              All languages have    Most languages have
(Unrestricted/Unconditional)   property X.           property X.

Implicational                  If a language has     If a language has
(Restricted/Conditional)       property X, it also   property X, it tends to
                               has property Y.       have property Y as well.
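For non-absolute statements, one simple way to ask whether an observed skew could be chance is an exact binomial test: under the null hypothesis that each language independently 'chooses' a property with probability 0.5, how likely is a result at least as extreme as the one observed? The counts in the sketch below are invented, and a real test would further need to control for genealogical and areal non-independence (see Chapter 5 on language sampling):

```python
import math

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of observing at
    least k 'successes' in n trials if there were no preference."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Invented counts: 40 of 50 sampled languages show property X.
p_value = binom_tail(50, 40)
print(f"p = {p_value:.2e}")  # far below 0.05: the skew is unlikely to be chance
```

This is the same kind of calculation that underlies the 'statistical significance' checks mentioned in the discussion of non-absolute statements above, here done with the standard library only.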

2.6 Concluding remarks

The two themes in the study of human language, i.e. unity and diversity, dating back to the historical period of early modern Europe, and possibly even further back in time, have played an important role in the development of linguistic typology over the last several decades. Thus, while in the s–s researchers focused on discovering the unity of human language on the basis of the documented structural diversity in the world's languages, subsequent researchers came to the realization that, in view of the low level of documentation of the world's languages, making claims about the nature of human language is patently premature. Thus, the focus of linguistic–typological research has recently shifted to the identification or characterization of the structural diversity of human language, while calling for more and more languages to be documented before they disappear into oblivion.

This new focus on diversity does not mean that linguistic typologists have lost their interest in the discovery of the unity of human language. Far from it. There are simply too many languages yet to be brought into the fold of linguistic–typological investigation, and, while it is important to make observations or hypotheses about the nature of human language, it is also a matter of urgency to pay more attention to what structural devices or options are attested in the world's languages. This way, linguistic typologists will be in a much better position to understand the nature of human language, as newly discovered structural strategies or options will have a bearing on the understanding of what makes human language what it is. No doubt linguistic typologists will continue to make statements, whether language universals or universal preferences, about the nature of human language, but these statements must always be understood to be hypotheses or 'summar[ies] of data observed to date' (Dryer : ). Moreover, as they delve deeply into the structural diversity of the world's languages, linguistic typologists will meet with exceptions to language universals or universal preferences, if past experience is anything to go by. By ascertaining why and how such exceptions arise in the first place, linguistic typologists will also be able to understand better under what historical, social, or cultural circumstances universal properties may have to be put on hold. This does not detract from the strength of the observed universal preferences but will instead increase their empirical validity, and ultimately the validity of what linguistic typologists have to say about the nature of human language.

Study questions

1. Consider the following (potential) absolute implicational language universal: if consonant clusters (i.e. the occurrence of more than one consonant in sequence, e.g. /s/, /p/, and /r/, as in the English word spring) occur in initial and final position of words, then these same consonant clusters also occur in medial position, between vowels. What logical possibilities does this universal statement allow as possible, and rule out as impossible, in human language?

2. Imagine that your typological investigation into inflection on nouns and adjectives (i.e. affixes expressing grammatical information, e.g. number) has produced the following data:

(a) In some languages, both nouns and adjectives are inflected.
(b) In some languages, nouns are inflected while adjectives are not inflected.
(c) In some languages, neither nouns nor adjectives are inflected.
(d) In no language are adjectives inflected while nouns are not inflected.

Formulate a universal statement that rules in the three attested situations (a, b, and c) and rules out the unattested situation (d).

3. Two well-known instances of language contact come from the Balkan Sprachbund (Winford : –; Tomić , ch. ) and the Meso-American Sprachbund (Campbell, Kaufman, and Smith-Stark ). Sprachbund, or linguistic area, is a term used to characterize a geographical area in which languages of different genealogical origins have come to share structural properties through prolonged contact, while their sister languages outside the area may not have these properties. Refer to the references cited above and find out what languages are involved in each linguistic area, what their genealogical origins are, and what some of their shared properties, acquired through contact, are.

4. '[T]here is a general consensus that at least half of the world's ,–, languages will disappear (or be on the verge of disappearing) in the next century' (Grenoble and Whaley : ). This means that a language dies every two weeks. Think of some of the reasons why languages disappear at such an alarming rate. What implications would such a dramatic loss of languages have for the study of human language, and for linguistic typology in particular?

Further reading

Bickel, B. (). 'Typology in the st Century: Major Current Developments', Linguistic Typology : –.
Dryer, M. S. (). 'Why Statistical Universals Are Better than Absolute Universals', Chicago Linguistics Society  (the Panels): –.
Evans, N. and Levinson, S. (). 'The Myth of Language Universals: Language Diversity and Its Importance for Cognitive Science', Behavioral and Brain Sciences : –.
Greenberg, J. H. (). 'On Being a Linguistic Anthropologist', Annual Review of Anthropology : –.
Jakobson, R. (). 'Typological Studies and Their Contribution to Historical Comparative Linguistics', in E. Siversten, C. H. Borgstrøm, A. Gallis, and A. Sommerfelt (eds.), Proceedings of the Eighth International Congress of Linguists. Oslo: Oslo University Press, –.
Song, J. J. (). 'What or Where Can We Do Better?: Some Personal Reflections on (the Tenth Anniversary of) Linguistic Typology', Linguistic Typology : –.



OUP CORRECTED PROOF – FINAL, 20/11/2017, SPi

3 Typological analysis

3.1 Introduction
3.2 'Comparing apples with apples': cross-linguistic comparability
3.3 Comparative concepts vs descriptive categories
3.4 Concluding remarks



3.1 Introduction

In order to study the nature of human language, linguistic typologists typically go through four stages of typological analysis:

(i) identification of a phenomenon to be studied;
(ii) typological classification of the phenomenon being investigated;
(iii) the formulation of (a) generalization(s) over the classification; and
(iv) the explanation of the generalization(s).

First, linguistic typologists must decide what they would like to investigate. There are, of course, no theoretical restrictions on what linguistic properties or phenomena should or should not be studied. Nor are there any restrictions on how many properties should simultaneously be studied at a given time. Some linguistic typologists may choose one property of language as an object of inquiry (e.g. the comparative construction, as in Stassen ), whereas others may at once probe into more than one (e.g. Object–Verb word order in conjunction with adpositions, as in Dryer c). But what we must exercise circumspection about is which of the properties selected for typological analysis is actually worth investigating, with some properties proving to be more interesting or revealing than others. In other words, some are more likely than others to lead to significant cross-linguistic generalizations. For instance, compare the typological property of basic word order with the use of question particles. As has already been alluded to, the selection of basic word order as a typological property has led to interesting questions and issues. But what about question particles? The world's languages will be typologized into two types: those with question particles and those without. But what is there to be understood from this typological classification? There does not seem to be much more to be done or learned about it. It is difficult to imagine that this typological classification can be put to much use in understanding the nature of human language, unless perhaps it is studied in conjunction with some other structural properties.

In a way, therefore, the first stage of typological analysis may depend crucially on the investigator's insight or intuition, just as in any kind of scientific endeavour, be it physics or biology. Furthermore, the first and second stages of typological analysis may have to be carried out concurrently to a certain degree. This is because, unfortunately, we do not know in advance whether the chosen property is going to be a typologically significant one. Thus, we may need to undertake an exploratory or preliminary investigation (e.g. using a small number of languages) into the selected property with a view to avoiding an uninteresting or trivial research outcome.

Once a property (or properties) has (or have) been chosen for typological analysis, structural types pertaining to that property (or those properties) will be identified or defined so that the world's languages can be classified into those types.
Note that chosen properties must be defined beforehand in a language-independent manner so that what is being compared across languages is one and the same thing. (More on the problem of cross-linguistic identification will be discussed in §. and §..) There are basically two ways in which structural types can be recognized, depending on whether it is possible to enumerate the types of a given property prior to undertaking a survey of the world's languages. For instance, take basic word order at the clausal level. The basic transitive clause consists of three main constituents, namely Subject, Object, and Verb. From these three constituents, six logically possible permutations can be deduced, namely SOV, SVO, VSO, VOS, OVS, and OSV, without a cross-linguistic survey actually being done. The world's languages are then surveyed in terms of these six logically possible word order types, that is, to ascertain which of the six word order types is attested or unattested in the world's languages. (The six word order types are all attested, albeit with varying levels of frequency; see §.. for detailed discussion.) Languages will be grouped as having SOV, SVO, and so forth. The identification of the six basic word order types and the classification of the world's languages into those types will then constitute a linguistic typology of basic word order (at the clausal level).

For other properties, it is not possible to determine in advance what the logically possible types of a given property may be; we must instead survey the world's languages (or, more accurately, as many languages as possible) in order to find out what types actually exist. For instance, Stassen's () cross-linguistic study concerns the formal encoding of comparison, the so-called comparative construction (e.g. Jill is taller than Debbie). Having defined the comparative construction in a language-independent way, Stassen proceeded to survey the  languages in his sample in order to determine what structural strategies are utilized for the expression of comparison. This enabled him to identify five major strategies, each of which constitutes a type of the comparative construction. In this case, the researcher did not have any prior idea of how many types of the comparative construction might exist in his sample (or in the world's languages, for that matter), or of what those types were (although he may have been well aware of the ones used in the languages he was already familiar with). It was the actual survey of the sampled languages that identified the five major types of the comparative construction.
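The difference between the two routes can be sketched in code: for basic word order, the type inventory is enumerable in advance, and the survey then reduces to counting languages per type. The word-order assignments below are the standard ones for these languages, but the eight-language 'sample' is purely illustrative:

```python
from itertools import permutations
from collections import Counter

# Stage (ii), deductive variant: the six logically possible orders of
# Subject, Object, and Verb can be enumerated before any survey is done.
possible_types = {"".join(p) for p in permutations("SOV")}  # six members

# Toy survey: classify each sampled language into one of the six types.
word_order = {
    "Japanese": "SOV", "Turkish": "SOV", "Hindi": "SOV",
    "English": "SVO", "Mandarin": "SVO",
    "Welsh": "VSO", "Malagasy": "VOS", "Hixkaryana": "OVS",
}
assert set(word_order.values()) <= possible_types

# Stage (iii): a generalization over the classification.
counts = Counter(word_order.values())
subject_initial = counts["SOV"] + counts["SVO"]
print(f"subject-initial: {subject_initial} of {len(word_order)} languages")
```

For a property like the comparative construction, no counterpart of `possible_types` can be written down in advance; the inventory itself must come out of the survey.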
Thus, the types of the comparative construction, unlike the six basic word orders, cannot be worked out deductively without actually surveying the world's languages or, more realistically, a good sample of them.

The third stage, in (iii), can be illustrated by the skewed distribution of the six word orders in the world's languages. There is an unequivocal preference for subject-initial word order, although the six logically possible permutations of Subject, Object, and Verb are all attested in the world's languages. This universal preference can then be taken to be a significant generalization over the data classified: the preponderance of subject-initial over verb-initial or object-initial languages. While this particular example involves the distribution of all the attested types in a word order typology, it is possible that the stage in (iii) may also involve a contrast between attested and non-attested types. For example, when the Recipient argument and the Theme argument in the ditransitive construction (e.g. the man and a car in The woman gave the man a car, respectively) are compared with the Patient argument in the monotransitive construction (e.g. the bouncer in The drunken man kissed the bouncer) in terms of their similarities or differences in encoding, one of the five logically possible 'alignments',1 namely the so-called horizontal alignment pattern, has so far been unattested. This unattested alignment pattern represents a situation in which T and R are encoded identically but both treated differently from P (Siewierska ; Haspelmath ; see also Chapter ). While it is possible that languages with this alignment pattern may be discovered in future, the generalization on the encoding of T, R, and P must include a statement to the effect that, while a logically possible type, the horizontal alignment pattern is ruled out as unattested.

Generalizations formulated in the third stage of typological analysis lead to large questions as to why some types are more frequently attested than others, or why some types are attested while others are (almost) unattested. For instance, why do the majority of the world's languages prefer subject-initial order (e.g. , in Dryer's (a) , language sample)2 while there are only a very small number of object-initial languages (e.g.  in Dryer a)? Why do the world's languages avoid encoding the T and R arguments identically while treating both of them differently from the P argument? Asking and answering questions such as these is what happens in the fourth stage of typological analysis. At this final stage, linguistic typologists will make every attempt to make sense of what they have uncovered about human language.

Linguistic typology tends to favour functional factors (e.g. cognition, perception, social interaction) over formal factors (e.g.
constituent structure, formal constraints) when explaining the limits on cross-linguistic variation. For this reason, linguistic typology is sometimes characterized also as functional–typological, as distinct from formal–typological theories such as Principles and Parameters

1 The five logically possible alignments are: Indirective (T = P ≠ R), Secundative (T ≠ P = R), Neutral (T = P = R), Tripartite (T ≠ P ≠ R), and Horizontal (T = R ≠ P).
2 In Dryer's (a) sample,  languages lack a dominant word order. Thus, the preponderance of subject-initial languages will be much stronger when these  languages are excluded from the comparison of the frequencies of subject-initial, verb-initial, and object-initial word orders.
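The five logically possible alignments of T, P, and R are exactly the set partitions of the three-element set {T, P, R} into groups of identically encoded arguments, which is why there are five of them (the Bell number B(3) = 5). The following sketch is my own illustration, not anything from the text; it enumerates the partitions and labels them with the conventional alignment names:

```python
# Enumerate the logically possible alignments of T, P, and R.
# Each alignment is a partition of {T, P, R} into identically encoded
# groups, so there are exactly Bell(3) = 5 of them. Illustrative sketch
# only; the helper names are mine, not the author's.

def partitions(items):
    """Yield all set partitions of a list, as lists of blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):            # add `first` to an existing block
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]                # or start a new block with it

def label(part):
    """Map a partition to its conventional alignment name."""
    names = {
        frozenset({frozenset('TP'), frozenset('R')}): 'indirective (T=P, R distinct)',
        frozenset({frozenset('PR'), frozenset('T')}): 'secundative (P=R, T distinct)',
        frozenset({frozenset('TPR')}): 'neutral (T=P=R)',
        frozenset({frozenset('T'), frozenset('P'), frozenset('R')}): 'tripartite (all distinct)',
        frozenset({frozenset('TR'), frozenset('P')}): 'horizontal (T=R, P distinct)',  # unattested
    }
    return names[frozenset(frozenset(b) for b in part)]

alignments = [label(p) for p in partitions(['T', 'P', 'R'])]
print(len(alignments))  # 5
```

The horizontal partition is generated like any other, which underlines the chapter's point: it is logically possible yet (so far) unattested.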



OUP CORRECTED PROOF – FINAL, 20/11/2017, SPi


Theory (e.g. Chomsky and Lasnik ). This, however, does not mean that linguistic typology eschews formal explanations altogether. Nothing could be further from the truth. Indeed, as will be shown elsewhere in the present book, formal properties such as constituent structure are also appealed to in linguistic typology as (part of) its explanatory basis (e.g. Dryer ). However, linguistic typology does not tend to move in the direction of formal explanations without exhausting other avenues of explaining cross-linguistic variation in functional terms. Moreover, even when formal properties are utilized in formulating cross-linguistic generalizations, linguistic typology continues to seek functional motivations for such formally based generalizations. This is what generally makes linguistic typology different from formal approaches to language, e.g. Principles and Parameters Theory, the Minimalist Program, or, to a lesser extent, Optimality Theory (see Chapter  for further discussion). In addition to functional and formal explanations, linguistic typologists also appeal to historical ones. For instance, as briefly discussed in Chapter , there are a small number of 'exceptions' to the universal preference for VO & NRel order, e.g. Chinese and a few languages spoken in China and Taiwan (Dryer , e), which have VO and RelN order. The explanation given for the existence of these 'exceptions' is that they have adopted RelN order through direct or indirect contact with languages spoken to the north of China, with OV & RelN order. This explanation is clearly historical, more precisely sociocultural. The use of these different types of explanation, functional, formal, and historical, will be further illustrated in many of the following chapters (also see Moravcsik ). So far, typological analysis has been described as moving from the first stage to the second and third to the fourth.
However, the fact that the fourth or final stage has been reached and completed does not mean that the investigation has come to an end. Research, by definition, is an ongoing enterprise: it never stops; it merely pauses for further data. As linguists document languages, making new data available to the research community, linguistic typologists may need to revise their original typological classifications—or even come up with different typological classifications altogether—in order to accommodate newly discovered structural types. This may lead further to the reformulation of original generalizations, which, needless to say, will also have a bearing on the content and/or nature of original explanations. Of course, original typological classifications may turn out to be robust
enough to deal with new data. Thus, linguistic typologists may go through the second, third, and fourth stages of typological analysis in a cyclic fashion. In other words, the second, third, and fourth stages may constitute a kind of loop. In fact, linguistic typologists may go through these three stages more than once even within a single investigation, when they initially begin with a small number of languages and then test their preliminary results against a larger number of languages, revising their generalizations and explanations along the way.

3.2 'Comparing apples with apples': cross-linguistic comparability

Linguistic typologists study cross-linguistic variation in order to understand the nature of human language. The best way to gain access to the cross-linguistic variation of a grammatical phenomenon is to study as wide a range of languages as possible (see Chapter  for language sampling). Because they study a large number of languages all at once, linguistic typologists must ensure that what they are comparing across languages is the same thing, not different things. It goes without saying that we do not want to compare 'apples' with 'oranges' when investigating the world's languages in terms of one and the same property. Otherwise we will never be able to achieve what we set out to achieve: cross-linguistic variation of the same phenomenon. But how do we actually make sure that X in language A is compared with X, not Y, in language B? Put differently, how do we identify the same phenomenon across languages? This question is what Stassen (: ; ) refers to as the problem of cross-linguistic identification. There are two possible ways of dealing with the problem of cross-linguistic identification (Stassen : –; Croft : –; : –). First, we may choose to carry out cross-linguistic identification on the basis of formal or structural criteria. A set of formal properties, e.g. 
verbal marking, adpositions, may first be put together in order to identify a given phenomenon. Alternatively, we can opt for functional—i.e. semantic, pragmatic, and/or conceptual—definitions of the phenomenon to be studied. Which of the two types of definition—formal or functional—will meet the needs of typological analysis better? Croft (: ; : ) gives two reasons as to why formal definitions do not work for 
cross-linguistic comparison. First, structural variation across languages is so great that it cannot serve as the basis of cross-linguistic identification (also see §.). As an example, Croft (: ) takes note of the fact that what is expressed as the subject in English may be expressed by means of two different grammatical relations in languages such as Quiché (Mayan: Guatemala), Lakhota (Core Siouan; Siouan: US), and Spanish (Romance; Indo-European: Spain). Second, because of structural differences among languages, formal definitions have to be internal to the structural system of a single language, thereby again failing to serve as language-independent definitions (also see §.). No single formal definition may be able to capture all the differences that may exist between languages, their similarities notwithstanding. In point of fact, as Stassen (: ) points out, language-dependent formal definitions do not tie in with linguistic typology, one of the primary aims of which is to characterize structural variation within the world's languages. Structural variation itself is what linguistic typologists want to study for cross-linguistic comparison in the first place. In other words, we cannot make use of structural variation which has not yet been established in order to identify that structural variation. It would be tantamount to using a (non-existent or yet-to-be-written) description of X in order to describe X. Moreover, there can hardly be any purely formal definitions. The formal properties of a grammatical phenomenon X can only be identified, and thus understood, in the context of the function that X carries out. We cannot simply examine a given grammatical property and predict what function that grammatical property is used to perform. This would be possible only if functions were inherent in, and thus deducible from, grammatical properties themselves. Rather, functions arise out of what linguistic expressions are utilized for. 
For example, if we want to study comparative constructions across languages, we cannot infer the function of comparison from the linguistic expression in which that function is encoded (e.g. the use of adpositions, cases, lexical verbs). We will not know what grammatical properties to look for in the first place, thereby being unable to recognize a comparative construction when we see it. Thus, formal definitions are not deemed appropriate for the resolving of the problem of cross-linguistic identification. Functionally based definitions used in cross-linguistic comparison must be understood as broadly as possible. Thus, under the rubric of functional criteria, factors relating to discourse or to phonetics must also be considered for the purpose of invoking language-independent 
definitions needed for typological analysis. For instance, in her typological study on person markers, Siewierska (: ) refers to the participant or discourse roles of speaker and addressee—and the third party, which assumes neither of these participant or discourse roles (see Chapter ). Moreover, language-independent definitions used in a phonological typology are likely to be based on articulatory–acoustic properties such as the place and manner of articulation, voicing, and such like (see Chapter ). To wit, the term 'functional' in functional criteria utilized in cross-linguistic comparison must be understood so broadly as to include factors external to the language system. For this reason, Croft (: ) prefers to refer to functional factors used in typological analysis as 'external' criteria. In view of the foregoing objections to formal definitions, linguistic typologists opt for functional definitions for purposes of cross-linguistic identification. However, functional definitions may not be without problems either. Far more frequently than not, functional definitions themselves tend to be based on pre-theoretic concepts or ill-defined notions. This is not to say, of course, that the problem is unique to this type of definition. The definition of a given concept is always dependent on the understanding of other concepts which make up that definition—unless these other concepts are undefined theoretical primitives. For example, consider the semantic definition of comparison utilized by Stassen (: ):

a construction counts as a comparative construction (and will therefore be taken into account in the typology), if that construction has the semantic function of assigning a graded (i.e. non-identical) position on a predicative scale to two (possibly complex) objects.

In order to understand this definition fully, we need to have an understanding of what a predicative scale, a graded position, etc. are. Also note that the definition has nothing to say about what form or shape the construction in question will take. Thus, functional definitions are more heuristics for cross-linguistic identification than definitions in the strict sense of the word. For this reason, it may not always be entirely clear how wide a range of grammatical phenomena may be 'permitted' to fall under a given functional definition. As an example, take Keenan and Comrie's (: ) definition of relative clauses, which runs as follows:

Our solution to [the problem of cross-linguistic identification] is to use an essentially semantically based definition of RC [relative clause]. We consider any syntactic object
to be an RC if it specifies a set of objects (perhaps a one-member set) in two steps: a larger set is specified, called the domain of relativization, and then restricted to some subset of which a certain sentence, the restricting sentence, is true. The domain of relativization is expressed in surface structure by the head NP, and the restricting sentence by the restricting clause, which may look more or less like a surface sentence depending on the language.

As Mallinson and Blake (: ) correctly point out, it is not the case that Keenan and Comrie's definition of the RC 'sets a lower limit on the degree to which the RC can resemble a simple sentence or full clause and still be an RC'. Whatever structure is seen to perform the relative clause function as described above will be taken to be an RC, no matter how little resemblance it may bear to the relative clause in well-known languages, e.g. English. Note that the definition contains no distinct structural properties by which to identify RCs, other than the mention of the restricting clause and the head NP. Thus, we may not always be certain whether a given structure in a given language is a relative clause. It may well be nothing more or less than a 'general' structure which happens to be taken pragmatically or contextually as having a relative clause interpretation. Consider the following example of a so-called adjoined clause from Warlpiri (Hale ), which is susceptible to both relative clause and temporal interpretations as the English translation of () indicates. (The same structure can also have a conditional interpretation.)

() Warlpiri (Western Pama-Nyungan; Pama-Nyungan: Australia)
ŋatjulu-l̩u ø-n̩a yankiri pantu-n̩u,
I-ERG AUX-I emu spear-PST
kutja-lpa ŋapa ŋa-n̩u
COMP-AUX water drink-PST
'I speared the emu which was/while it was drinking water.'

There is evidence that may call into question the status of () as a genuine relative clause. First, the adjoined clause as a whole can be positioned before the main clause, as in ().

() Warlpiri (Western Pama-Nyungan; Pama-Nyungan: Australia)
yankiri-l̩i kutja-lpa ŋapa ŋa-n̩u, ŋatjulu-l̩u ø-n̩a pantu-n̩u
'The emu which was drinking water, I speared it.' or 'While the emu was drinking the water, I speared it.'
Moreover, the adjoined clause need not have an NP co-referential with an NP in the main clause (in which case a relative clause interpretation is not possible), as in ().

() Warlpiri (Western Pama-Nyungan; Pama-Nyungan: Australia)
ŋatjulu-l̩u lpa-n̩a kal̩i tjan̩t̩u-n̩u,
I-ERG AUX-I boomerang trim-PST
kutja-ø-npa ya-nu-n̩u njuntu
COMP-AUX walk-PST-hither you
'I was trimming a boomerang when you came up.'

Also note that the syntactic linkage of the adjoined clause with respect to the main clause is, as Hale (: ) puts it, 'marginal [or very loose] rather than embedded'. In fact, how the adjoined clause is interpreted 'is not determined by the grammar, but rather by a subset of the system of maxims, which are presumably observed in the construction of felicitous discourse, involving such notions as "relevance", "informativeness", and the like' (Hale : ). Given these pieces of evidence, the question does arise as to whether the adjoined clause in () should be regarded as a relative clause, although it may still qualify as an RC under Keenan and Comrie's semantically based definition of RCs. This is exactly the same question that Comrie and Horie () raise as to the relative clause status of grammatical structures like the one in Warlpiri. They observe that in Japanese 'relative clauses' do not behave as they do in languages such as English. In English, relative clauses behave distinctly from other types of complement clause, whether with verbal or nominal heads.3 In Japanese, on the other hand, relative clauses are akin to complement clauses with nominal heads, distinct from complement clauses with verbal heads. In other words, there do not seem to be clear structural properties that help identify something as a relative clause or as a complement clause with a nominal head, with sentences potentially interpreted either as relative clauses or as complement clauses, 'depending on such factors as the

3 A complement clause with a verbal head and one with a nominal head are exemplified within square brackets in (i) and (ii), respectively (Comrie and Horie : –).
(i) The teacher knows [that the student bought the book].
(ii) the declaration/knowledge/fact [that the student bought the book]

semantics of the head noun (e.g. only certain head nouns allow the complement clause interpretation), and the plausibility of interpreting the head noun semantically as a constituent of the subordinate clause’ (Comrie and Horie : ). Comrie and Horie (: ) also point out that in Khmer the grammatical marker used in relative clauses is ‘not specifically a relative clause marker, but rather a general marker for associating subordinate clauses with head nouns’. They (: ) thus draw the conclusion from these observations that, there being no clear distinction between relative clauses and complement clauses of head nouns, the basic notion of relative clauses may not be of universal validity if it is meant by that notion that relative clauses are a distinct syntactic construction correlating highly with relative clause interpretations. In other words, they are suggesting that lacking relative clauses, languages such as Japanese and Khmer make use of a general syntactic construction for relating subordinate clauses to head nouns, which is in turn subject to a wide range of pragmatic, not semantic, interpretations including that of relative clauses. When confronted with a problem such as this, individual investigators may ultimately have to make up their own mind as to whether the structures in Warlpiri, Japanese, and Khmer should be taken to be relative clauses. However, such a decision should not be taken in an arbitrary or random manner. We must, in fact, take into account at least two criteria, one language-internal, and the other cross-linguistic, when making that kind of decision: (i) functional–structural consistency, and (ii) measure of recurrence or, more accurately, measure of cross-linguistic recurrence. 
Without supporting evidence from these two, it may hardly be justifiable to interpret the functional definition of relative clauses too broadly, that is to accept the adjoined clause in Warlpiri, or the complement/subordinate clause with a nominal head in Japanese or Khmer as a genuine relative clause. First, we must determine whether relative clause interpretations are mapped consistently onto the adjoined clause in languages like Warlpiri. Thus, if the adjoined clause is the strategy used consistently for the expressing of relative clause function, it must be regarded as a genuine example of relative clauses. If, on the other hand, the adjoined clause is associated only on an ad hoc basis with relative clause interpretations, its status as a relative clause will be questionable. Being one of the two structures employed consistently for the expressing of relative clause function in Warlpiri (Mary Laughren, personal communication), 
the adjoined clause must be taken to be none other than the relative clause construction par excellence in that language.4 Even if the criterion of functional–structural consistency has been met, we cannot be too cautious about the status of the adjoined clause in Warlpiri, for instance, as a relative clause. We should also be circumspect enough to take the structure to be an exemplar of the relative clause if and only if it recurs with relative clause function in other languages. This is the criterion of measure of recurrence. Of course, it cannot categorically be said in how many languages the structure in question should appear in order to be subsumed under the domain of relative clauses. But this much can be said: the more languages make use of the structure for the expressing of relative clause function, the stronger our confidence grows in accepting that structure as constituting one of the types of relative clause construction. The measure of recurrence may sound to some ears too 'common sense' to be legitimate in serious scientific investigation. This kind of measure of recurrence, however, is also adopted in other types of scientific investigation, albeit in much more rigorous form. For example, water is predicted to boil at 100 degrees Celsius at one atmosphere of pressure (i.e. 760 torr or about 14.7 lb/sq in), and, in fact, we know that it does so precisely because of its recurrent physical behaviour of reaching the boiling point at that temperature. Similarly, if a given structure is used recurrently, and recurrently enough across languages to express relative clause function, it must be regarded as exemplifying one of the types of relative clause construction available in human language. 
While functional criteria are decidedly preferred to formal ones, far more often than not formal properties do find their way into language-independent definitions used for typological analysis so that they can, for instance, 'serve the function of keeping the domain [of typological research] manageable' (Stassen : ). In other words, formal properties are made use of in order to augment or fine-tune functionally based definitions. For instance, take Keenan and Comrie's definition of relative clauses, cited above. Their functional definition of relative clauses includes references to 'head NP', 'sentence', and 'clause'. Undoubtedly,

4 In fact, Warlpiri has one additional construction that is employed consistently for the expression of relative clause function, i.e. the nominalized non-finite clause (Mary Laughren, personal communication). Being subject to aspectual or temporal restrictions, however, this construction seems to be marked as opposed to the adjoined clause (Hale : ).

‘head NP’, ‘sentence’, and ‘clause’ are properties internal to the language system (or, specifically, syntax); they are formal units or concepts. Restricting the scope of research to what ‘may look more or less like a surface sentence’, the formal property of ‘restricting clause’ places the structural combination of a head NP and a modifying phrase (e.g. ()) outside the purview of relative clauses while accepting the structural combination of a head NP and a modifying clause (e.g. ()), in spite of the fact that the function of the modifying phrase near the bookstore in (), and that of the modifying clause that brews the best coffee in town in () are one and the same: narrowing the reference of the head NP the café. ()

the café near the bookstore

()

the café that brews the best coffee in town

Similarly, Stassen (: ) appeals to a formal property of NP in order to restrict the two compared objects featured in his language-independent definition of the comparative construction (cited above) to those expressed in the form of NPs. Thus, this additional formal criterion of NP-hood rules out comparative expressions such as (), (), and () as ineligible for inclusion in Stassen's research, while accepting comparative expressions such as () and () (Stassen : ).

() The tree is taller than the house.

() I like Pamela better than Lucy.

() The general is more cunning than brave.

() The team plays better than last year.

() The president is smarter than you think.
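Stassen's two-step domain restriction (a functional definition augmented by the formal criterion of NP-hood) can be mimicked with toy data. The boolean annotations below are my own illustrative coding of the five example sentences, not Stassen's:

```python
from dataclasses import dataclass

# Toy sketch: an expression enters the typology only if it has comparative
# function AND both compared objects are expressed as NPs.

@dataclass
class ComparativeExpr:
    text: str
    has_comparative_function: bool  # meets the semantic definition
    comparees_are_nps: bool         # formal NP-hood criterion

examples = [
    ComparativeExpr('The tree is taller than the house.', True, True),
    ComparativeExpr('I like Pamela better than Lucy.', True, True),
    ComparativeExpr('The general is more cunning than brave.', True, False),   # adjectives compared
    ComparativeExpr('The team plays better than last year.', True, False),     # adverbial comparee
    ComparativeExpr('The president is smarter than you think.', True, False),  # clausal comparee
]

in_scope = [e.text for e in examples
            if e.has_comparative_function and e.comparees_are_nps]
# Only the first two sentences fall within the restricted research domain.
```

The design point is that the formal criterion does no identificational work of its own; it merely trims a domain that the functional definition has already picked out.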

3.3 Comparative concepts vs descriptive categories

Related to the issue of cross-linguistic identification is the status or nature of grammatical categories utilized in typological analysis, e.g. verb, subject, passive, and the like. Linguists generally assume that grammatical categories referred to in language-independent definitions can not only be regarded as cross-linguistically valid but can also be used for the description and analysis of individual languages (Haspelmath 
: ). This hardly seems to be an unreasonable assumption, because cross-linguistic generalizations must be able to make sense of the grammatical categories in terms of which ‘the facts of [individual] languages’ (Haspelmath : ) are described and/or analysed. After all, cross-linguistic generalizations are formulated over the facts of individual languages. Not unexpectedly, grammatical categories used in language-independent definitions are generally taken to be instantiated in individual languages, and, in principle, to be universally available to all languages. For instance, when X is recognized as belonging to the verb category in a given language, X is assumed to be an instantiation of the universal verb category in that language and to share properties with what goes by the same category in other languages. Although there may be some differences across languages, the differences are outweighed by the similarities attested among the instantiations of the verb category in the world’s languages. To wit, there is assumed to be a strong connection between grammatical categories utilized for the purpose of describing and analysing individual languages, and grammatical categories invoked for the purpose of crosslinguistic comparison. Similarly, speakers must be able to construct mental categories not only to acquire language but also to be able to use language; otherwise, language acquisition would not be efficient, if not totally impossible. Such mental categories created by speakers for individual languages are also assumed to be equated across languages, to the extent that they may be thought to be universally available or applicable. Thus, speakers operate with similar mental categories, regardless of their linguistic backgrounds. Recently, however, this universalist assumption has been called into question by some linguistic typologists (e.g. Dryer ; Croft ; Haspelmath , ; Cristofaro ; but cf. 
Newmeyer ,  in support of universal grammatical categories, as described above). Their alternative view, most strongly expressed by Haspelmath (, ), is worth some space in the present chapter, as it relates directly to the problem of cross-linguistic identification, and to the way that typological, as opposed to language-particular, analysis should be conducted. Haspelmath claims that grammatical categories invoked in cross-linguistic comparison or what he calls comparative concepts have no status or reality in individual languages, because they are not grounded in the structural facts of individual languages. Put differently, comparative concepts are ‘concepts specifically designed for the purpose 
of comparison that are independent of [grammatical categories used for describing and/or analysing individual languages]’ (Haspelmath : ). Such comparative concepts, in turn, consist of universal conceptual–semantic concepts (e.g. comparison, property, reference), general formal concepts (e.g. precede, identical, overt), and other primitive comparative concepts (e.g. word, phrase). Note that Haspelmath allows comparative concepts to make reference not only to conceptual– semantic (i.e. functional) concepts but also to formal concepts, as has been shown in §. to be generally the case in linguistic typology. Moreover, there is no single ‘standard’ list of comparative concepts (Haspelmath : ). This is because comparative concepts are created for the specific purpose of cross-linguistic comparison, with individual researchers choosing different ones, depending on their goals in typological analysis. Thus, it is possible that researchers may end up with different language-independent definitions to investigate one and the same phenomenon. In contrast, grammatical categories used in the description and analysis of languages are developed solely for the purpose of describing and analysing individual languages. Such language-particular grammatical categories, referred to by Haspelmath () as descriptive categories, are grounded firmly in the structural system of individual languages and should never be used for the purpose of cross-linguistic comparison. This is because the set of criteria used for identifying a category in one language is ‘only partially comparable [, if ever,] to the set of criteria that might be used to identify’ the same category in another language (Haspelmath : ). In other words, ‘[e]ach language has its own categories, and to describe a language, a linguist must create a set of DESCRIPTIVE CATEGORIES for it’ (emphasis original) (Haspelmath : ). 
This is, incidentally, similar to the position of American structuralist linguistics in the first half of the twentieth century (e.g. Boas ): languages are best described in their own terms, not in terms of categories found in well-known languages (Haspelmath : ). To wit, descriptive categories cannot be regarded as equivalent or even comparable across languages, because they are not intended for capturing the similarities and differences between languages. In Haspelmath's view, therefore, grammatical categories developed for describing or analysing one language are unsuitable for describing other languages, let alone for making cross-linguistic generalizations. 
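The proposed division of labour can be caricatured in code: each construction carries its own language-particular descriptive-category label, while a comparative concept is a language-independent predicate over functional and formal properties (here modelled loosely on Haspelmath's definition of the relative clause, quoted later in this section). All data and names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Construction:
    language: str
    descriptive_category: str         # language-particular label
    is_clause: bool                   # formal concept
    narrows_reference: bool           # conceptual-semantic concept
    referent_has_semantic_role: bool  # conceptual-semantic concept

def relative_clause(c):
    """Comparative concept: a clause used to narrow the reference of a
    referential phrase, in which the referent plays a semantic role."""
    return c.is_clause and c.narrows_reference and c.referent_has_semantic_role

inventory = [
    Construction('English', 'Relative Clause', True, True, True),
    Construction('Japanese', 'noun-modifying construction', True, True, True),
    Construction('English', 'PP modifier', False, True, True),  # non-clausal
]

comparable = [c.language for c in inventory if relative_clause(c)]
# The English Relative Clause and the Japanese noun-modifying construction
# are both captured despite their unrelated descriptive categories; the
# non-clausal PP modifier is excluded.
```

Note that the predicate never inspects `descriptive_category`: on this view, the language-particular label is simply irrelevant to cross-linguistic comparison.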


Thus, descriptive categories and comparative concepts are two different kinds of theoretical concept.5 The former are designed to capture the structural properties of individual languages, including their language-particular idiosyncrasies, while the latter are invoked for the purpose of comparing the world’s languages with a view to ‘formulating readily testable crosslinguistic generalizations’ (Haspelmath : ). What this entails is that there is now ‘a radical disconnect between the grammatical [description and] analysis of individual languages and crosslinguistic (typological) generalizations about grammatical patternings’ (Newmeyer : ) to the effect that ‘[t]he analysis of particular languages and the [cross-linguistic] comparison of languages are . . . independent of each other as theoretical enterprises’ (Haspelmath : ). This radical position, however, may not be shared by all linguists, including linguistic typologists (e.g. Newmeyer : ). Before explaining Haspelmath’s rationale for the ‘radical disconnect’, it is worth illustrating how the distinction between comparative concepts and descriptive categories may work in typological analysis. Recall from the previous section that the Japanese construction that performs the function of relative clauses does not share structural properties with the English relative clause construction. In fact, the Japanese construction in question is a general construction made up of a nominal head and a complement clause, and it can be interpreted either as a relative clause (e.g. ()) or as something else (e.g. () or ()). ()

Japanese (isolate: Japan)
gakusei ga katta hon
[student NOM bought] book
'the book that the student bought'

()

Japanese (isolate: Japan)
gakusei ga hon o katta zizitu
[student NOM book ACC bought] fact
'the fact that the student bought the book'

5 Linguists tend to use the same labels for descriptive categories and comparative concepts. Haspelmath (: ) argues that these two types of label must refer to different kinds of entities. Thus, he (: ) suggests, following Comrie’s (a: ) proposal, that grammatical labels for language-particular descriptive categories be written with an initial capital (e.g. Relative Clause in English, as this category is unique to English), and lower-case be used for language-independent comparative concepts (e.g. relative clause, as this category applies to all languages).



OUP CORRECTED PROOF – FINAL, 20/11/2017, SPi


() Japanese (isolate: Japan)
   dareka ga doa o tataku oto
   [someone NOM door ACC knock] sound
   'the sound of someone knocking at the door'

There are no structural differences between (), (), and (); one and the same construction is used for three different functions. For this reason, the construction is referred to generically as the 'noun-modifying construction' in Japanese linguistics (e.g. Matsumoto ). Haspelmath (: ) offers his own language-independent definition of the relative clause, in lieu of Keenan and Comrie's (), cited above:

A relative clause is a clause that is used to narrow the reference of a referential phrase and in which the referent of the phrase plays a semantic role.

This language-independent definition consists of conceptual–semantic concepts such as 'narrow the reference' and 'semantic role' as well as the formal concept 'clause'. Note, once again, that the formal concept of 'clause' is required in order to preclude non-clausal modifying expressions from the domain of relative clauses, i.e. modifying phrases (e.g. ()).

The noun-modifying construction is a descriptive category created for the purpose of the description and analysis of Japanese in one particular respect: the structural combination of a nominal head and a complement clause. This descriptive category, however, has nothing to do with the language-independent definition of relative clauses. To put it differently, if the descriptive category of relative clauses in English were accepted as a universal grammatical category, the noun-modifying construction in Japanese might go unrecognized, although it carries out exactly the same function that English relative clauses do (as captured in Haspelmath's language-independent definition above: narrowing the reference of a referential phrase). This is because the Japanese noun-modifying construction and the English relative clause construction share no defining structural properties—perhaps except for the fact that a nominal head and a clause abut on each other in both cases. In fact, 'Japanese has no category that closely corresponds to the descriptive category of Relative Clause in English' (Haspelmath : ). However, owing to the distinction between comparative concepts and descriptive categories, the noun-modifying construction in Japanese can be brought into the purview of the cross-linguistic study of relative clauses. From this and other similar
examples, Haspelmath draws the conclusion that comparative concepts and descriptive categories are—and should be—independent of each other, and that the latter cannot be equated universally across languages.

There are two fundamental reasons, according to Haspelmath (, ), why the distinction must be strictly maintained between comparative concepts and descriptive categories. First, 'crosslinguistic comparison cannot be category-based, but must be substance-based [at least in the main, because sometimes formal concepts are also needed, for instance, in order to narrow down the scope of research], because substance (unlike categories) is universal' (Haspelmath : ). In other words, substance (= functional criteria, as broadly defined in §.) is invariable across languages or universally applicable, whereas categories (= formal criteria, as defined in §.) are highly variable or not universally applicable. For this reason, descriptive categories, designed for the description and analysis of individual languages, cannot be equated with comparative concepts, created for the specific purpose of cross-linguistic comparison. To put it differently, comparative concepts should be able to cut across descriptive categories—with language-particular characteristics that go with them—if they are going to be utilized in formulating testable generalizations about the world's languages. Otherwise, cross-linguistic comparison will be unreliable. In contrast, descriptive categories cannot be exploited likewise, because they cannot be equated across languages.

The second reason is related to the claim that grammatical categories used for individual languages are not comparable across languages. In fact, they vary so much in the range of functions that they perform that they cannot be equated across languages.
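The core of this second reason — that language-particular categories overlap in function without being identical — can be pictured with simple set operations. The sketch below is illustrative only: the three 'dative' categories and their semantic-role inventories are invented placeholders, not descriptive claims about any real language.

```python
# Hypothetical 'dative' categories in three invented languages.
# Each set lists the semantic roles the category codes; the
# inventories are placeholders, not facts about real grammars.
dative_a = {"recipient", "goal", "experiencer"}
dative_b = {"recipient", "goal", "direction", "purpose"}
dative_c = {"recipient", "experiencer", "external_possessor"}

# The shared functional core is what a comparative concept targets ...
core = dative_a & dative_b & dative_c
print(sorted(core))                  # ['recipient']

# ... while the language-particular residue is why the three
# descriptive categories cannot simply be equated with one another.
print(sorted(dative_b - dative_a))   # ['direction', 'purpose']
print(sorted(dative_c - dative_a))   # ['external_possessor']
```

On this picture, a comparative concept picks out the intersection of functional ranges, while each descriptive category keeps its own residue of language-particular functions.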
Certainly, there are similarities among language-particular grammatical categories, but there may be more differences than there are similarities, to the extent that they cannot be regarded as universally valid. For instance, the Russian dative, the Korean dative, and the Turkish dative have some properties in common, i.e. in terms of the semantic roles that they code, but 'there are numerous differences between them and they cannot be simply equated with each other' (Haspelmath : ). Moreover, while there is a core set of phenomena that languages have in common, there is also a large number of categories and constructions that are not readily identifiable across languages (Haspelmath : ). Furthermore, as Haspelmath points out, each 'new' language (i.e. newly described or documented) has something that has never been documented or studied
before. Such phenomena will call for new descriptive categories, if not revision of existing ones.6

6 The disconnect between comparative concepts and descriptive categories, according to Haspelmath (: –, –; : –, ), has implications for controversies over category assignments. More often than not, linguists disagree on what X in a given language should be categorized as. For instance, should property words in Mandarin Chinese be regarded as adjectives or verbs? Is the ang-phrase in Tagalog a subject or a topic? To Haspelmath (: ), this kind of categorial controversy is 'inevitable but unanswerable' when grammatical categories are regarded as cross-linguistically valid, and universally available or instantiated in individual languages. For instance, linguists who raise this kind of question make the assumption—false in Haspelmath's view—that the set of criteria used to identify subjects in language A should also be more or less applicable to subjects in language B, because grammatical categories attested in individual languages are instantiations of cross-linguistic grammatical categories. However, when the differences between languages are too great to ignore, the question starts to be raised as to whether what were initially thought to be subjects in B may not be subjects but something else, e.g. topics. Thus, Haspelmath (: ) does not think that it is 'possible to resolve' such categorial controversies, although they probably give rise to 'clarifications and new insights' about the individual languages concerned.

While Haspelmath (, ) makes a valid point about the different purposes of comparative concepts and descriptive categories, one pauses to wonder where comparative concepts—as opposed to descriptive categories, which are based on the facts of individual languages—come from in the first place. Are they created, in a deductive manner, without any reference to actual languages? Or are they based partly, if not wholly, on the researcher's understanding of the individual languages that she already has exposure to, in particular her understanding of the properties in those languages that comparative concepts are created to capture?

For instance, take relative clauses again. Haspelmath's language-independent definition of relative clauses surely could not have come ex nihilo. It must have been derived from the researcher's understanding of the function of relative clauses in the language(s) that he is familiar with (i.e. the function of narrowing the reference of a referential phrase). In other words, the understanding of what relative clauses do in those languages must have enabled the researcher to recognize and define the function of relative clauses. From there, he must have formulated the language-independent definition of relative clause by using the various comparative concepts and decided to survey the world's languages in terms of that definition. It is very difficult to imagine that the researcher can come up with a language-independent definition of relative clauses without actually understanding or examining at least one language (probably his own first language). If this is the case, it is not easy to accept that comparative concepts and descriptive categories are as independent of each other as Haspelmath claims them to be.

Indeed, while arguing for their mutual independence, Haspelmath (: ) points out that 'the fact that two language-particular [descriptive] categories both match a comparative concept just means that they are similar in the relevant respect, but not that they are "the same" in any sense'. However, the existence of the relevant respect in which the two languages are similar indicates strongly that the two descriptive categories capturing the similarity have a connection with the matching comparative concept. This means that the two different types of category are comparable in that relevant respect, however limited their comparability may be. It is one thing to claim that comparative concepts are not descriptive categories or vice versa, but it is another thing altogether to argue that the two are independent of each other, that is, have no connection whatsoever between them.

If there is some kind of connection, however remote, between comparative concepts and descriptive categories, will it be possible to think that comparative concepts are 'of a more abstract category of language structure that abstracts away from language-particular idiosyncrasies' (Haspelmath : )? Haspelmath does not think so. In fact, he goes on to claim that '[c]omparative concepts are motivated and defined in a way that is quite independent of linguistic categories (though of course not independent of the facts of languages)' (emphasis added) (Haspelmath : ). However, if comparative concepts are designed to highlight 'the relevant respect' in which the two different language-particular descriptive categories are similar (while downplaying all other respects), isn't that a kind of abstracting away from language-particular differences or idiosyncrasies (i.e. all non-relevant respects)?
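The idea that two language-particular categories can both 'match' a comparative concept while remaining distinct can be sketched as a predicate over construction descriptions. In the minimal Python illustration below, the feature names and the two toy construction records are hypothetical simplifications, and the matching predicate compresses Haspelmath's definition down to two features:

```python
# Toy descriptions of two language-particular constructions.
# Feature names and values are hypothetical simplifications.
english_relative_clause = {
    "language": "English",
    "clausal": True,
    "narrows_reference": True,
    "relative_pronoun": True,   # language-particular idiosyncrasy
}
japanese_noun_modifying = {
    "language": "Japanese",
    "clausal": True,
    "narrows_reference": True,
    "relative_pronoun": False,  # no counterpart to English 'which/that'
}

def matches_relative_clause(construction):
    """Comparative concept (simplified from Haspelmath's definition):
    a clause used to narrow the reference of a referential phrase."""
    return construction["clausal"] and construction["narrows_reference"]

# Both language-particular categories match the comparative concept ...
assert matches_relative_clause(english_relative_clause)
assert matches_relative_clause(japanese_noun_modifying)

# ... yet the two records differ in language-particular respects, so
# 'matching' means similar in the relevant respect, not identical.
assert english_relative_clause != japanese_noun_modifying
```

The sketch makes the relation explicit: the predicate looks only at the 'relevant respects', abstracting away from the remaining features, which is exactly where the two construction records diverge.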
More to the point, because they are created to describe the facts of languages, descriptive categories are not independent of the facts of languages, just as comparative concepts are not, because they also deal with the facts of languages. If so, comparative concepts and descriptive categories are both deeply ingrained in the facts of languages. This seems to suggest, contrary to what Haspelmath claims, that they are indeed entities that can be related to, if not equated with, each other, their different purposes notwithstanding.7 Such a relatedness makes it difficult to lend support to the 'radical disconnect' between the two types of grammatical category.

7 To push this point further, imagine that there is only one undocumented language in the world that has property X. Linguists begin to work on this language and find out that it has a hitherto unknown property, namely X, without realizing that it does not exist in any other known languages. They create a new descriptive category to describe the property in question. Naturally, they want to ascertain whether the property is also found in other languages of the world. For the purpose of cross-linguistic comparison, they make use of comparative concepts, and create a language-independent definition of property X, instead of using the descriptive category, and set out to survey the world's languages only to discover that no other known languages have the same property. In a case like this, the comparative concept may indeed be interchangeable with the descriptive category devised for the description of the property in the unique language, because the property in question exists in only one language. There are no other languages that have the phenomenon that matches the comparative concept. To wit, there is an undeniable connection between the two types of grammatical category.

3.4 Concluding remarks

Typological analysis consists of four stages of investigation. Basically, typological analysis begins with identifying an object of inquiry. Once the object of inquiry is decided on, a language-independent definition is created in order to ascertain what types of structural strategy are employed by the world's languages. This develops into a cross-linguistic generalization over the structural variation attested in the world's languages. The last stage of typological analysis concerns the explanation of the cross-linguistic generalization, the basis of which can be functional, formal, and/or historical.

Because a large number of languages are investigated in typological research, it is of vital importance to ensure that what is being compared across languages be one and the same thing. Typically, functional criteria are utilized to create language-independent definitions of the phenomena to be investigated. However, more often than not, formal criteria are also made use of, not least in order to narrow down the scope of investigation. Recently, it has been proposed that comparative concepts, used in language-independent definitions, and descriptive categories, used in the description and/or analysis of individual languages, should be kept independent of each other. While it is indeed important to realize the different purposes of these two different types of grammatical category in typological analysis (or in any linguistic analysis for that matter), it is equally important not to lose sight of the connection between the two, not least because both deal with the facts of languages.




Study questions

1. Choose a handful of languages, e.g. the languages that you and your classmates speak or study, and discuss how comparison is expressed in those languages, using Stassen's language-independent definition of the comparative construction (as cited in the main text, with the proviso that the two compared objects be expressed in the form of NPs). Focus on the similarities and differences among the comparative expressions that you have identified.

2. Haspelmath () points to the considerable number of differences between language-particular categories (e.g. the dative in Korean and Turkish) as evidence against descriptive categories being universally valid. With this in mind, compare the dative (case) in Korean and Turkish by identifying which meanings (e.g. recipient, goal) are coded by the dative in both of the languages, and which meanings are coded by the dative in only one of them. Useful references for this question are: Yeon, J. and Brown, L. (), Korean: A Comprehensive Grammar, London: Routledge; Göksel, A. and Kerslake, C. (), Turkish: A Comprehensive Grammar, London: Routledge.

3. The philosophical orientations of Rationalism and Romanticism in the study of languages were discussed in Chapter . The focus of Rationalism was the human mind's natural or universal way of thinking, as reflected in human language. This suggests strongly that the human mind works the same way regardless of the languages spoken. In contrast, the focus of Romanticism was on the diversity of human language, as individual languages were conceived of as manifestations of the spirit of the people who spoke them. Is there any similarity between the difference between the two philosophical orientations and Haspelmath's () distinction between comparative concepts and descriptive categories? If so, can you explain what that similarity may be?

4. Stassen (: ) contrasts the following two language-independent definitions of 'indefinite pronouns':
   (i) The domain of the enquiry is the semantic notion of indefiniteness, and the actual enquiry is restricted to pronouns.
   (ii) The domain of the enquiry is the encoding of pronouns, and the actual enquiry is limited to those pronouns which exhibit the semantic feature of indefiniteness.
These definitions each consist of one semantic criterion (i.e. indefiniteness) and one formal criterion (i.e. pronouns). Which of the two criteria is used to narrow down the scope of investigation in (i) and (ii)? Do you anticipate any difference between the two definitions in terms of the range of cross-linguistic data to be collected?




Further reading

Cristofaro, S. (). 'Grammatical Categories and Relations: Universality vs. Language-Specificity and Construction-Specificity', Language and Linguistics Compass : –.
Haspelmath, M. (). 'Comparative Concepts and Descriptive Categories in Crosslinguistic Studies', Language : –.
Moravcsik, E. (). 'Explaining Language Universals', in J. J. Song (ed.), The Oxford Handbook of Linguistic Typology. Oxford: Oxford University Press, –.
Newmeyer, F. (). 'On Comparative Concepts and Descriptive Categories: A Reply to Haspelmath', Language : –.
Stassen, L. (). 'The Problem of Cross-Linguistic Identification', in J. J. Song (ed.), The Oxford Handbook of Linguistic Typology. Oxford: Oxford University Press, –.




4 Linguistic typology and other theoretical approaches

4.1 Introduction
4.2 Conceptual differences between LT and GG
4.3 Methodological differences between LT and GG
4.4 Optimality Theory: a derivative of GG with a twist
4.5 Concluding remarks
4.1 Introduction

There are two major theoretical traditions in linguistics: functional and formal. The distinction between these two boils down to what kinds of explanation are proposed in the study of human language. For explanations, functional linguistics typically appeals to factors outside the language system, e.g. cognition, perception, communication, discourse, sociocultural relationships. These external factors are thought to have a crucial bearing on the way language works. Not unexpectedly, explanations in functional linguistics are typically functional in nature. For instance, X co-occurs with Y because their co-occurrence facilitates language processing and production. Formal linguistics, in contrast, avoids external factors altogether (but see §. on Optimality Theory), attempting to account for the nature of human language by referring to the language system itself, e.g. phrase-structural configuration, grammatical categories. Not surprisingly, explanations in formal linguistics are formal in character. For instance, X co-occurs with Y if X occupies a certain phrase-structural position vis-à-vis Y. These theoretical traditions
are well reflected in two different approaches to language universals (in the broadest sense of the term; see §.). Linguistic typology (or the Greenbergian approach to language universals), as shown in §., identifies itself with the functional tradition, whereas generative grammar (or the Chomskyan approach to language universals) is regarded as the epitome of the formal tradition.

As explained in Chapter , linguistic typologists study their objects of inquiry, make empirical observations about them, formulate generalizations on the basis of the empirical observations, and attempt to make sense of the generalizations. In other words, linguistic typologists approach the study of human language without any preconceptions or assumptions about what human language should (not) be like. Thus, the manner in which linguistic typologists carry out research is characterized as inductive. This inductive method is nicely described by Joseph Greenberg, who reportedly made the following comment during a lecture in his typology and universals class (Croft, Denning, and Kemmer : xvi):

[Y]ou have to muck around in grammars. You shouldn't read a grammar with a predetermined goal in mind. Just look around until something interesting pops out at you.

Here, Greenberg is urging his students to study languages without any preconceived ideas ('muck around in grammars', 'without a predetermined goal in mind', 'look around') until they begin to notice interesting patterns, regularities, correlations, etc. ('something interesting pops out at you'). Put differently, the goal of linguistic typology (hereafter LT) is not about proving what one (intuitively) believes about human language but about finding out what human language is like (and then explaining why it is the way it is). This entails that, in LT, theory tends to arise out of data or while data are being investigated.

This inductive approach to the study of human language contrasts strikingly with the approach adopted in generative grammar, specifically Chomskyan generative grammar (hereafter GG).1 GG is distinctly a deductive approach to the study of human language. It makes specific (a priori) assumptions about human language, the most important being that children are biologically endowed with a set of principles for constructing the grammars of the languages that they acquire. (Note that this is only an assumption because it cannot be confirmed or disconfirmed empirically.) This innate property of the human mind is referred to in GG as Universal Grammar (UG). The 'existence' of UG is based on the reasoning that children must come biologically equipped with UG because they acquire whatever languages they are exposed to—regardless of their biological parents' linguistic background. Moreover, the linguistic input that they receive in the process of first-language acquisition is so impoverished (e.g. there are sentences that children never hear, including the ungrammatical ones typically used in GG analysis) that it is not possible for children to move from such impoverished data to the linguistic competence of the mature native speaker in a matter of four or five years. This insufficient amount of linguistic input for children, referred to as the 'poverty of stimulus', should not be a conundrum if it is assumed that humans are biologically wired with UG.

1 There are other GG theories that do not share some of the fundamental assumptions of Chomskyan GG, e.g. Lexical Functional Grammar, Head-Driven Phrase Structure Grammar. In this chapter, these derivative theories will be ignored for the sake of broad comparison.

This is the conceptual baseline from which GG proceeds in order to develop a theory of language; linguistically specific innate knowledge, in the form of biologically transmitted UG, is the cornerstone of theory-building in GG. This concept of UG, in turn, determines what the theory of language (i.e. the components of UG) should look like. While its validity has never been empirically confirmed—and is unlikely to be any time soon, in view of the present state of our knowledge and technology—this fundamental assumption about UG is sacrosanct in GG, not to be abandoned or modified in any way whatsoever, although the components of UG have evolved over the decades.

The remainder of this chapter will discuss some of the major differences between LT and GG in their approaches to language universals or, more generally, to the study of human language.
Moreover, a brief comparison of Optimality Theory (OT) with LT and GG will be provided, as OT, albeit derived conceptually from GG, not only pays a great deal of attention to cross-linguistic variation but also makes use of external factors when proposing explanations for its claims about human language. Equally importantly, unlike other GG theories, OT is decidedly an output-oriented theory and thus more sensitive to surface properties of human language. In this sense, there is a degree of affinity between LT and OT.

4.2 Conceptual differences between LT and GG

Because of its deductive approach to the study of human language, GG is very focused on theory construction, more so than on discovering the
structural diversity in the world's languages. In GG, the theory of language as a whole is of paramount importance, to be worked out, at least in outline, even before any time and effort are to be spent on other areas of investigation. First and foremost, the theory of language structure must be conceptualized in such a way that it can explain how children acquire their first languages on the basis of impoverished linguistic data in a short period of time, and also how children work their way through the considerable diversity of the world's languages, because they do not know in advance which languages they will acquire.

Thus, in Principles and Parameters Theory (hereafter P&P), the most recent GG theory, what is universal is to be separated from what is language specific. The former is captured in invariant, inviolate universal principles, some or many of which are unspecified in terms of parametric values or options. These 'open' parameters, designed to capture what is language specific, need to be set or fixed on the basis of the linguistic data that children are exposed to. The analogy commonly used in P&P is a switch box that controls the flow of electricity and whose switches can be set in one direction or the other: when the switches are set one way, the language acquired is Swahili, and when the switches are set another way, the language acquired is Japanese. This theory of language is claimed to be able to explain how children acquire their first languages on the basis of impoverished linguistic input in such a short period of time (by means of universal principles), and also how children zero in on the languages spoken around them, regardless of their biological parents' linguistic background (by means of parametric setting).

The way theory construction is driven deductively (at least in its initial and most important phase) is further illustrated by some of the latest theoretical developments within GG, brought about by the Minimalist Program (MP).
MP does not represent a theory as such but a conceptual framework within which to evaluate different theories for their worth. For instance, GG theories such as P&P operate with multiple levels of representation where linguistic objects are analysed for different theoretical purposes. MP raises the question as to whether or not X—be it a theoretical assumption or a device of descriptive technology—is indispensable in the light of what we intuitively believe to be essential properties of human language. For instance, since sentences are pairings of form and meaning, as the MP argument goes, there should be only two levels of representation, one for form and the other for meaning, instead of four, as utilized in P&P. Thus, the 
decision to allow only two levels of representation is made on the basis of deductive reasoning about the nature of human language. It is not difficult to realize what theoretical implications this kind of deductive reasoning will have for P&P or other GG theories for that matter.

In comparison, LT does not operate with such an articulated, deductively driven theory of language. First and foremost, LT is concerned with discovering the structural diversity in the world's languages. Thus, LT focuses on examining a wide range of languages in order to identify the cross-linguistic variation on each linguistic phenomenon. In fact, it does not aim to produce a theory of language as a whole. If a theory of language is a goal, however remote it may be, it will have to wait until after the structural diversity in the world's languages has been fully documented, analysed, and explained.

This fundamental difference between LT and GG stems from the postulation of UG in GG. LT is non-committal about the existence of UG, although like any other linguistic theory it accepts the view that humans have a special innate ability to acquire languages. This ability is one of the things that distinguish humans from other animals, after all. However, LT takes the position that the innate ability or knowledge is not necessarily linguistic in nature but is more likely to be (part of) the general cognitive knowledge that humans put to use during their language acquisition.

Within GG, the principles of UG must be kept invariant and inviolate, because they need to be applicable to whatever languages children end up acquiring. The universal principles, in turn, tend to be highly abstract—understandably so because they must be able to accommodate each and every one of just over , languages spoken on this planet. (This figure of ,-odd languages represents a very large amount of variation for the principles to be able to handle.)
For instance, the most widely accepted GG theory of word order (or linear order, as it is widely known in GG) claims that the basic word order at the clause level in all human languages is, at some deep level, Subject– Verb–Object (SVO). This claim is based on theory-internal arguments. In X-bar Theory, the standard GG theory of phrase structure, every phrase has three constitutive elements, namely Head, Complement, and Specifier, and there are theory-internal arguments for these elements to be hierarchically structured in such a way that they turn out to be linearized as none other than Spec–H–Compl. This corresponds to Subject–Verb–Object order (i.e. Spec = Subject, H = Verb, and Compl = Object). This SVO order is abstract enough—i.e. not actually 
manifested on the surface—to apply to all human languages. From the universal SVO order are derived all six surface word orders, as attested in the world's languages, namely SOV, SVO, VSO, VOS, OVS, and OSV. The derivations required to produce each of these six surface word orders, including SVO, from the universal SVO order are carried out by means of syntactic operations of movement. These operations, in turn, should take place in compliance with other principles. Similarly, all human languages are claimed to have Preposition–Noun order, in compliance with the same phrase-structural configuration of Spec–H–Compl (i.e. Preposition = H, Noun = Compl). From this universal order is derived the opposite (surface) order of Noun–Postposition—by means of movement (or, specifically, a leftward movement of Compl).

This kind of research contrasts sharply with that in LT, in which all six surface word orders (or both Preposition–Noun and Noun–Postposition order) are recognized as what they are, that is, word orders as attested in the world's languages (read: actually pronounced). The difference between LT and GG is clear enough. In LT, there is no room for abstract entities such as the universal SVO or Preposition–Noun word order, however attractive they may be, because they are not something that can be tested or confirmed empirically and because their existence can only be supported by theory-internal arguments. Theory-internal arguments lose their strength outside the theory that they have been produced for.

In GG, UG must be the same for all human languages because of its theoretical assumption that humans are biologically endowed with the same UG. This assumption, deductively arrived at, dictates that superficial differences among the world's languages should be just that, superficial differences. Hidden behind these superficial differences must lie something universal, shared by the world's languages.
Thus, the primary task of GG is to discover the universal principles of human language, bypassing the superficial differences among languages. In the case of word order, as has been shown, this insistence on UG gives rise to the postulation of the universal SVO word order. Whether a language has SOV or OSV on the surface depends on the setting of the relevant parameters, whatever they may be: when the switches are set one way, the word order acquired is SOV, and when the switches are set another way, the word order acquired is OSV. LT is concerned with cross-linguistic variation with a view to discovering constraints on that variation, while GG insists on cross-linguistic invariance, relegating the structural

OUP CORRECTED PROOF – FINAL, 20/11/2017, SPi

LINGUISTIC TYPOLOGY AND OTHER THEORETICAL APPROACHES

diversity of the world’s languages to superficial differences. In view of this, it does not come as a total surprise that in GG no full account has ever been proposed as to how the six surface word orders can actually be derived from the universal SVO order.

4.3 Methodological differences between LT and GG

Not unexpectedly, the conceptual differences between LT and GG have methodological ramifications. The most significant is that LT surveys a wide range of languages in order to identify cross-linguistic variation. Cross-linguistic variation cannot be captured on the basis of the data from a small number of languages, let alone a single language. Thus, it is not uncommon for linguistic typologists to work on several hundred languages at a given time. There has been LT research carried out on the basis of a much smaller number of languages, but the general expectation in LT is that a reasonably large number of languages should be taken into account. For instance, the World Atlas of Language Structures (Haspelmath et al. 2005; also available at http://wals.info), an international collaborative project involving dozens of linguists, operated with a sample of 100 languages or a sample of 200 languages.² The range of 100 to 200 languages probably represents the minimum sample size needed for typological investigation.

The nature of human language cannot be investigated without examining a large number of languages. Examples of this have already been given elsewhere. For instance, it is not possible to discover the preponderance of subject-initial word order or the correlation between verb-initial order and prepositions in the world’s languages by examining a handful of languages. In contrast, GG has no theoretical requirement that a wide range of languages be examined in order to study the nature of human language. The rationale for this position could not be simpler. Recall that in GG, it is assumed that children come biologically endowed with UG and that they acquire the languages to which they have access, regardless of their biological parents’ linguistic background. Indeed, Noam Chomsky, the founder of GG, avers:

² The contributors were required to use the 100-language sample and encouraged to use the 200-language sample.




I have not hesitated to propose a general principle of linguistic structure on the basis of observation of a single language. The inference is legitimate, on the assumption that humans are not specifically adapted to learn one rather than another human language. (Chomsky 1980: 48)

Under this assumption, then, it does not matter which language one chooses to investigate, because UG remains invariant and inviolate across languages. Thus, all it takes to study the nature of human language is any one language—typically English, as it happens to be the native language of many leading generative grammarians. In fact, in so far as UG research is concerned, one language is as good as the next. This explains why GG generally does not pay a great deal of attention to cross-linguistic variation. Not surprisingly, language-sampling methodology is not something that has been raised or discussed within GG. Indeed, language sampling is simply a non-issue in GG.

Things have changed somewhat since the advent of P&P, which takes into account cross-linguistic variation by means of parameter setting. Nevertheless, it is true that GG does not study a wide range, or a large number, of languages. Even within P&P, the number of languages investigated at a given time tends to be very small (i.e. a few), and the languages being investigated tend to be genealogically related (e.g. Germanic or Romance languages). There are a handful of generative grammarians who study a relatively large number of languages, but they are the exception rather than the norm.

This methodological approach to the study of human language has a crucial bearing on how language universals research is carried out, in practical terms, in GG, as opposed to in LT. Once a universal principle is established on the basis of a single language, e.g. English or Italian, it is tested against other languages, albeit not many. If the principle does not stand up against further data, what happens next is to propose additional codicils or caveats so as to address cross-linguistic variation. These codicils may appear in the form of auxiliary principles or parameters.
This process, described by the American linguistic typologist William Croft as the ‘one-language-at-a-time’ method, is the same even when more than one language is investigated at the same time. Thus, one of the languages is chosen and studied, and then compared with a second language, a third language, and so on, while the original principle proposed on the basis of the first language is augmented by additional principles or parameters. For instance, in English (an SVO language) the wh-expression what appears in the sentence-initial 
position (i.e. displaced from its ‘original’ (object) position, as in What did Tim buy?, as opposed to Tim bought a book). On the basis of this and other similar observations, a universal principle is proposed to the effect that ‘syntactic operators’ such as wh-phrases must be located in an appropriate Spec position. This universal principle triggers wh-movement, which shifts the wh-expression from its original position to the sentence-initial position, which is the appropriate Spec position. In Mandarin Chinese (another SVO language), however, the wh-expression shenme ‘what’ occurs in situ (i.e. in the object position), as in:

(1) Mandarin Chinese (Sinitic; Sino-Tibetan: China)
    Xiaolin mai-le   shenme?
    Xiaolin buy-ASP  what
    ‘What did Xiaolin buy?’

This difference notwithstanding, P&P claims that the wh-expression moves into the sentence-initial position in both English and Mandarin Chinese—in fact, in all other languages for that matter. The difference, however, is claimed to be caused by the ‘timing’ of the wh-movement. That is, the wh-expression shenme in Mandarin Chinese moves covertly (i.e. at one abstract level of representation, so that shenme can appear in the object position, i.e. immediately after the verb, on the surface), while the wh-expression what in English moves overtly (i.e. prior to a different abstract level of representation, so that it can be pronounced in the sentence-initial position). Note that the universal principle in question applies to both languages. (Thus, universal principles are invariant and inviolate.) The difference in timing is interpreted as a parametric variation (or simply a parameter), bearing upon the universal principle: Mandarin Chinese has the parametric value of ‘move wh-expressions covertly’ and English ‘move wh-expressions overtly’. Other principles have also been claimed to have similar parametric variation.
While differences between languages are handled in such a way that universal principles are kept intact, it is only within the context of P&P that the existence of operations like a covert or overt wh-movement can be supported. In other words, they are not something falsifiable; they can be neither confirmed nor disconfirmed empirically. (How can one actually prove that in Mandarin Chinese the wh-expression moves into the sentence-initial position at some abstract level?) 


In contrast, LT, right from the start, examines a wide range of languages, typically set up according to statistically informed sampling methods (see Chapter 5 for language sampling). One of the largest language samples ever used in LT is found in Dryer’s (2013a) research on the ordering of Subject, Object, and Verb; its sample contains as many as 1,377 languages (or nearly 20% of the world’s languages). While the actual size may vary from one study to another, samples should be representative of the world’s languages. With a sample set up, the linguistic typologist compares languages in one fell swoop and formulates testable cross-linguistic generalizations based on the data from the sample as a whole, e.g. Subject appearing before Object and Verb in the majority of the world’s languages. In other words, LT practises the ‘multilateral comparison’ method, as opposed to the ‘one-language-at-a-time’ method used in GG.

This methodological difference means that cross-linguistic generalizations in LT are built on the data collected from language samples representative of the world’s languages, while universal principles in GG are based on a single language studied at a time, and then augmented in the manner described earlier as further languages are brought to bear on them. It may seem that both methodological approaches will lead to the same outcome in the end. This, however, is more apparent than real, because in GG universal principles are assumed to be discoverable on the basis of one language, with deviations from them amounting to nothing more than superficial differences, to be handled by means of parameters, whereas in LT superficial differences (i.e. cross-linguistic variation) constitute the very material for discovering patterns, regularities, and correlations.
To wit, LT emphasizes cross-linguistic variation with an eye to discovering language universals or universal preferences, while GG insists on cross-linguistic invariance at the expense of cross-linguistic variation.
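The multilateral method can be made concrete with a toy computation: a generalization is read off the distribution of the whole sample at once, not built up one language at a time. The mini-sample and its word-order values below are invented for illustration, not drawn from an actual typological database.

```python
from collections import Counter

# Toy sample of basic word orders (values invented for illustration;
# a real study would use a genealogically stratified sample).
sample = {
    "Lang1": "SOV", "Lang2": "SOV", "Lang3": "SOV",
    "Lang4": "SVO", "Lang5": "SVO", "Lang6": "SVO",
    "Lang7": "VSO", "Lang8": "VOS", "Lang9": "OVS",
}

counts = Counter(sample.values())

# The generalization is stated over the sample as a whole,
# e.g. "Subject precedes Object in the great majority of languages".
s_before_o = sum(n for order, n in counts.items()
                 if order.index("S") < order.index("O"))

print(counts)
print(f"S precedes O in {s_before_o}/{len(sample)} languages")
```

The point of the sketch is the workflow, not the numbers: the typologist tests a claim against the entire sample in one pass, rather than augmenting a single-language principle as further languages come in.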

4.4 Optimality Theory: a derivative of GG with a twist

OT is not a substantive theory of language in the sense that the nature and content of constraints (as principles are typically called in OT), and the representations of linguistic structure utilized in OT, are largely independent of the claims that OT makes about language. In other words, OT freely imports constraints and representations from other theories,
including P&P and LT among others. What OT does provide is the basic architecture of grammar and the formal means or mechanism whereby these ‘imported’ constraints and representations are handled in its fundamental modes of description, analysis, and explanation.

OT emerged in the early 1990s as an alternative to rule-based generative phonology and has subsequently extended itself to other areas of linguistics, including syntax.³ OT purports to be a theory of human language capacity or linguistic competence, as opposed to performance. This is hardly surprising, because OT has inherited certain theoretical underpinnings from its intellectual progenitor, GG. However, OT also represents a radical conceptual shift from GG in a number of ways.

First, in OT all constraints, while universally present in the grammars of all human languages, are violable, not inviolate as in GG. Thus, unlike in GG, where no principles lose out to any other principles, in OT some constraints win over other constraints. Moreover, constraints are ranked differently in different languages; which constraint loses out to which depends on individual languages. Thus, constraint ranking is language particular, not universal. What this entails is that OT is amenable to cross-linguistic variation, as different languages make different structural choices by ranking constraints differently.

Second, OT is distinctly an output-oriented theory. This means that constraints apply at the level of the output (read: the surface form). While OT, like GG, operates with abstract linguistic structure (or input, as it is called in OT), it does not derive surface structure from abstract structure. Instead it is an input–output mapping theory. That is to say that potential outputs are generated on the basis of the input and then evaluated in terms of relevant constraints to the effect that the optimal output is selected.
The optimal output may vary from language to language, depending on how constraints are ranked in individual languages.

Third, OT is more sensitive to the structural diversity of the world’s languages than P&P is. This does not come as a total surprise, because the output is the locus of constraints in OT. By definition, the output refers to the surface form, be it phonological or syntactic. Not unexpectedly, OT readily takes into account cross-linguistic data available from LT, which focuses on surface, not abstract, structure.

Fourth, OT’s output orientation predisposes its practitioners to look for explanations inside as well as outside the language system. The output or the surface form, as produced in speech, is bound to be susceptible to various performance-related or contextual factors or variables. Thus, it is not uncommon for OT to make use of language-external or functional principles in explaining why X is selected as the optimal output for a given input.

Notwithstanding its sensitivity to cross-linguistic variation, OT is a different theory from LT. Like GG, it does not refrain at all from postulating abstract input structure, remote from the surface form. For instance, in one OT analysis, SVO is proposed as the abstract input, onto which the surface word order (i.e. SOV, SVO, VSO, VOS, OVS, or OSV) can be mapped, in compliance with language-particular constraint ranking. The justification for this universal SVO order is simple enough: it is imported directly from the GG analysis discussed in §4.2. The difference between the OT analysis and the GG analysis is, however, that in OT the SVO input is mapped onto the optimal output, instead of the latter being derived from the former, as in GG. In yet another OT analysis of word order, as many as three abstract input word orders, i.e. SOV, SVO, and VOS, are proposed. The reason why these, and not the other three word orders, i.e. VSO, OSV, and OVS, are chosen as the inputs is that VSO and OSV cannot be directly generated (or base-generated, in GG parlance) without involving any movement operations—why this is so is theory-internal—and that OVS can never be optimal because it loses out to SOV. Thus, OT is similar to GG in that it operates with abstract word order(s), distinct from the surface word orders. Much attention is paid to explaining how the mapping between the abstract word order(s) and the surface word orders is to be carried out.

³ In rule-based phonology, inputs, with the right structural conditions, undergo a structural change into required outputs; structural conditions and structural changes are linked by means of rules. Moreover, outputs, as new inputs, may undergo further changes into new outputs, and so on.
One thing worth evaluating briefly before ending this section is the OT claim that OVS can never be optimal. This, as the particular OT argument goes, explains the rarity of OVS languages in the world. But surely, OVS must be optimal, under a language-specific constraint ranking, in OVS languages, because of some other (yet-to-be-identified but highly ranked) constraints. Apart from the issue of what impact, if any, these constraints may have on overall word order variation, it is a conceptual leap of faith to extrapolate cross-linguistic rarity (or predominance, for that matter) from language-specific constraint rankings. This is because constraint
rankings in OT are not universal, but strictly language particular, even though constraints themselves are universal. In OT, there is no universal constraint ranking as such. In a genuine universal constraint ranking, cross-linguistic rarities can be predicted, precisely because the components that make up that ranking are universally hierarchized (e.g. see Chapter )—this is why there is only one ranking, not multiple (language-specific) rankings. This way, predictions can be made to the effect that what is at the top of the hierarchy is more common(ly attested) in the world’s languages than what is at the bottom of the hierarchy. Constraint rankings in OT are not at all like universal rankings or hierarchies because they are, by design, language particular.
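The mechanics at issue here can be sketched in a few lines of code. The constraints (C1–C3), candidates, and violation counts below are invented placeholders, not an actual OT analysis of word order; the sketch only illustrates the machinery: rankings are language particular, so different rankings select different optima, whereas a harmonically bounded candidate loses under every possible ranking.

```python
# Toy OT evaluation. Each candidate records its (invented) violations
# of each universal constraint; only the ranking differs by language.
CANDIDATES = {
    "SVO": {"C1": 0, "C2": 1, "C3": 0},
    "SOV": {"C1": 1, "C2": 0, "C3": 0},
    "OVS": {"C1": 1, "C2": 1, "C3": 1},
}

def optimal(ranking):
    """Select the candidate whose violation profile is lexicographically
    smallest under a language-particular constraint ranking."""
    return min(CANDIDATES, key=lambda c: [CANDIDATES[c][k] for k in ranking])

def harmonically_bounded(loser, winner):
    """'loser' is harmonically bounded by 'winner' if the winner does at
    least as well on every constraint and strictly better on at least
    one -- in which case no ranking can ever make the loser optimal."""
    l, w = CANDIDATES[loser], CANDIDATES[winner]
    return all(w[k] <= l[k] for k in l) and any(w[k] < l[k] for k in l)

print(optimal(["C1", "C2", "C3"]))         # one language's ranking -> SVO
print(optimal(["C2", "C1", "C3"]))         # another ranking -> SOV
print(harmonically_bounded("OVS", "SOV"))  # True: OVS loses under any ranking
```

Note that the harmonic-bounding check quantifies over all rankings at once, which is precisely why it licenses a claim of impossibility rather than mere rarity; this is the conceptual gap the text identifies between language-particular rankings and genuinely universal hierarchies.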

4.5 Concluding remarks

LT is an inductive approach to the study of human language. GG, in contrast, makes certain fundamental assumptions about the nature of human language, particularly about biologically transmitted linguistic knowledge, making itself a deductive approach to the study of human language. This difference has a wide range of implications, not least for how LT and GG address both theoretical and methodological issues. For instance, while LT insists on the importance of examining a large number of languages, typically collected in a statistically informed manner, GG claims that the nature of human language can, in principle, be uncovered on the basis of a single language. Because of its insistence on UG, GG, particularly P&P, proposes abstract universal structure while dealing with cross-linguistic differences by means of parameter setting. In LT, in contrast, such abstract structure is not proposed as part of analysis and explanation. Moreover, GG is not open to performance-based or, generally, functional explanations because of its focus on linguistic competence. LT, in contrast, tends to exhaust all possible functional explanations before exploring language-internal explanations.

OT may be said to lie somewhere between LT and GG in the sense that it is much more sensitive to the structural diversity of the world’s languages because of its output orientation. Moreover, the output orientation, in turn, predisposes OT to functionally based explanations. Nonetheless OT, derived conceptually from GG, operates with abstract (input) structure, remote from surface structure, which is the primary focus of LT data collection.


Study questions

1. The most celebrated case in support of the poverty-of-stimulus argument involves sentences, as in:

(1) English (Germanic; Indo-European: UK)
    a. The dog in the corner is hungry.
    b. Is the dog in the corner hungry?
    c. The dog that is in the corner is hungry.
    d. Is the dog that is in the corner hungry?
    e. *Is the dog that in the corner is hungry?

The poverty-of-stimulus argument, based on the sentences in (1), proceeds like this. The yes–no question rule in English involves the initial positioning of the main auxiliary verb is. In (1a), there is only one auxiliary verb, which must be fronted, as in (1b). In (1c), there are two, one main and the other non-main; it is not the first auxiliary, as in (1e)—as may be predicted on the basis of sentences such as (1a) and (1b)—but the second that is fronted, as in (1d), because the second instance of is in (1c) is the main auxiliary verb. However, knowing which auxiliary verb to front is claimed to be something that cannot be learned from inadequate evidence, i.e. sentences such as (1a) and (1b). It requires a deep understanding of constituent structure: in (1c), the first instance of is is not the main auxiliary verb, but the second instance is. In other words, the crucial evidence for formulating the yes–no question rule comes from sentences such as (1c) and (1d), not to mention the ungrammatical sentence in (1e). However, such sentences are claimed to occur infrequently—or never in the case of (1e)—in natural speech, as one may expect of the impoverished linguistic input that the child is exposed to. It has actually been claimed in GG that speakers may go through much or all of their lives without ever having been exposed to crucial evidence such as (1c) and (1d), let alone (1e). Thus, the ability to use the yes–no question rule to produce (1d), instead of (1e), cannot be learned from experience but must come from somewhere else, that is, biologically transmitted UG. However, critics of the poverty-of-stimulus argument point out that sentences such as (1c) and (1d), if not the ungrammatical one in (1e), are attested much more frequently than proponents of the argument have claimed, e.g. Would anyone who is interested see me later?, Will the owner of the bicycle that is chained to the gate please move it?
Find some of the encyclopaedias or schoolbooks written for young children and try to see how frequently you can find sentences which contain more than one auxiliary verb, as in (1c), (1d), and the other similar sentences given above.

2. The GG theory of word order, discussed in §4.2, postulates Spec–H–Compl (e.g. SVO at the clausal level) as the universal word order. In point of fact, it admits of Compl–H–Spec as well. This alternative linear order corresponds to OVS at the clausal level. However, the theory (e.g. Kayne 1994) decides
on Spec–H–Compl, not Compl–H–Spec, as the sole universal word order, because cross-linguistic evidence (i.e. from LT) points to the prevalence of Spec–H–Compl (e.g. SVO), not Compl–H–Spec (e.g. OVS). What implications do you think this decision will have for the deductively driven GG theory of word order or, more generally, for GG as a theoretical approach to the study of human language?

3. OT is based on the theoretical assumption that constraints are universal (i.e. present in all human languages) and that they are ranked differently in individual languages. These universal constraints compete with each other across languages in such a way that some potential outputs are ruled out completely. This is so because they can never be optimal—that is, they are, in OT parlance, harmonically bounded (read: defeated out of contention) by optimal outputs. Do you think that it is the right thing to rule out harmonically bounded outputs like this, when only a small fraction of the world’s languages are adequately documented?

Further reading

Croft, W. (2009). ‘Methods for Finding Language Universals in Syntax’, in S. Scalise, E. Magni, and A. Bisetto (eds.), Universals of Language Today. Dordrecht: Springer.
Haspelmath, M. (2008). ‘Parametric versus Functional Explanations of Syntactic Universals’, in T. Biberauer (ed.), The Limits of Syntactic Variation. Amsterdam: John Benjamins.
Pullum, G. K. and Scholz, B. C. (2002). ‘Empirical Assessment of Stimulus Poverty Arguments’, Linguistic Review 19: 9–50.
Siewierska, A. ‘Linguistic Typology: Where Functionalism and Formalism Almost Meet’, in A. Duszak and U. Okulska (eds.), Bridges and Barriers in Metalinguistic Discourse. Frankfurt am Main: Peter Lang.
Zwart, J.-W. (2009). ‘Relevance of Typology to Minimalist Inquiry’, Lingua 119.




5 Language samples and sampling methods

5.1 Introduction
5.2 Some recalcitrant issues in language sampling
5.3 Types of language sample
5.4 Biases in language sampling and how to avoid them
5.5 Independence of cases
5.6 Sampling procedures
5.7 Testing independence of cases at a non-genealogical level
5.8 Typological distribution over time: was it different then from what it is now?
5.9 Concluding remarks



5.1 Introduction

In view of its emphasis on the structural variation in the world’s languages, it comes as no surprise that linguistic typology deals with languages in large numbers. Intuitively speaking, the best way to discover the structural variation and the limits thereon is perhaps to examine all languages spoken—or signed, as the case may be—in the world. For obvious reasons, that is out of the question. There are just over 7,000 languages in the world (Lewis et al.). Individual linguistic typologists (or even a team of linguistic typologists) are unable to compare such a large number of languages. Economy—that is, time and money—alone will rule out such large-scale
surveys as unfeasible. What makes it even more unrealistic is the fact that there are far more languages which await linguistic documentation than those which have been described. Moreover, many languages are so inadequately or poorly documented that linguistic typologists may not be able to find anything about what they are interested in. Indeed, it was pointed out in §. that less than % of the world’s languages may have decent descriptions (read: adequate for typological research).

In point of fact, it is plainly impossible to study all languages of the world, because many languages have already died out, with some of them leaving little or no record, although at least their existence may be known to us (e.g. Arin, Assan, Kassitic, Illyrian). There may also be many other extinct languages that we know nothing about. (The reader will recall from §. the claim that over half a million languages have ever been spoken on this planet, if humans began talking , years ago and languages evolved at a rate of one per  years.) Furthermore, with dialects evolving into separate languages over time, there will also be ‘new’ languages coming into being. If our aim is to study all languages of the world, there must certainly also be room for these languages in our investigation. But needless to say, it is not possible to take ‘future’ languages into account prior to their emergence.

In view of these practical limitations of time, money, descriptions, and existence, linguistic typologists often work with a practically manageable set of languages, or what is referred to as a language sample. Naturally, questions arise as to how many languages should be included in a sample, and how languages—or which languages—should be selected for a sample. The better part of the present chapter will be concerned with these and other related questions.
Linguistic typology has recently witnessed a great deal of activity in developing sampling methods or, more generally, quantitative procedures. This is hardly surprising, because linguistic typologists tend to make claims about frequencies, correlations, and such like on the basis of a large number of languages. Indeed, the status of sampling methods has achieved prominence as one of the ways of evaluating the quality of typological research, to the extent that all other things being equal, typological work based on a statistically based language sample is more highly valued than typological work based on languages selected because of availability and convenience. 
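As a preview of the sampling procedures discussed later in this chapter, the basic idea of a statistically informed, genealogically stratified sample can be sketched as follows. The family names and sizes below are rough placeholders, and the allocation rule (at least one language per family, the rest in proportion to family size) is one simple choice among several.

```python
# Sketch of proportional (genealogically stratified) sampling.
# Family sizes are illustrative placeholders, not authoritative counts.
families = {
    "Niger-Congo": 1500, "Austronesian": 1250, "Trans-New Guinea": 475,
    "Sino-Tibetan": 450, "Indo-European": 445, "Isolates": 150,
}

def proportional_sample(families, n):
    """Allocate n sample slots across families in proportion to family
    size, guaranteeing every family at least one slot."""
    total = sum(families.values())
    return {fam: max(1, round(n * size / total))
            for fam, size in families.items()}

quota = proportional_sample(families, 100)
print(quota)
print(sum(quota.values()))  # close to n, up to rounding
```

Actual languages would then be drawn at random within each family to fill its quota; the design choice being illustrated is simply that a family's weight in the sample tracks its share of the world's languages, rather than the availability of convenient grammars.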


5.2 Some recalcitrant issues in language sampling

Linguistic typologists have over the decades addressed various issues relating to language sampling, but it seems that some of them remain recalcitrant and will probably never be resolved. This is not because linguistic typologists are not good at what they do but because these issues defy scientific solutions. Two such issues need some attention before language sampling is discussed properly: (i) the concept of language; and (ii) genealogical relatedness and language contact in the remote past.

First, it is generally taken for granted that languages are individual entities that can be named and distinguished from each other. For instance, when it is said that there are just over 7,000 languages in the world (Lewis et al.), we expect the total number of the world’s languages to be exactly that.¹ This is based on the assumption that languages can be counted as discrete entities, just as people in a room can be. However, the concept of language is not as well defined as this may suggest. That is to say, the distinction between language and dialect is not clear-cut. Ruhlen’s count of the world’s languages, for instance, differs considerably from that of Lewis et al. Although Lewis et al. include signed languages and Ruhlen does not, the presence or absence of the signed languages alone does not account for the big discrepancy between the two counts. This discrepancy is not something to be surprised about, because the distinction between language and dialect is a notoriously difficult one to draw. For instance, one of the most widely used criteria in defining language vs dialect is mutual intelligibility. But the notion of mutual intelligibility itself is a difficult one to define, let alone measure, adding to the difficulty of defining the concept of language.
How mutually intelligible do two varieties have to be in order to qualify as dialects of a single language? Some might say that they must share at least a certain percentage of words, grammar, and other linguistic properties to qualify as dialects of a single language, with anything less classifying them as different languages. (To the best of the author’s knowledge, no one has suggested any specific cut-off percentage.) Others might operate with different thresholds. Even if all agree on what constitutes the minimum level of mutual intelligibility required to qualify dialects of a single language, how do we actually measure mutual intelligibility? For instance, not all linguists may apply the criterion of mutual intelligibility to the same extent. Does mutual intelligibility apply to the lexicon alone, or does it also extend to other parts of the language? There may also be additional criteria that some linguists may use but others may not in determining whether two or more varieties are separate languages or dialects of a single language.²

Moreover, non-linguistic factors (e.g. political, cultural, religious), far more frequently than not, play a crucial role in regarding varieties as separate languages or as dialects of a single language. For instance, prior to the beginning of the 1990s, Serbian, Croatian, Bosnian, and Montenegrin were treated as a single language called Serbo-Croatian. The disintegration of Serbo-Croatian into the four different ‘languages’ was politically motivated in the wake of the Yugoslav Wars in the 1990s. Hindi and Urdu are one and the same language to all intents and purposes. Nonetheless, these two varieties are widely regarded as separate languages, because they are spoken in two different sovereign nations—Hindi in India and Urdu in Pakistan—with different writing systems and religious orientations. Thus, some linguists treat Hindi and Urdu as varieties of a single language by using a double-barrelled language name, i.e. Hindi–Urdu (cf. Hindustani), while other linguists regard them as two separate languages.

What this nebulous nature of the language concept implies is that when languages are studied for statistical purposes, we cannot be completely sure whether they are all comparable objects, since not all linguists use the same set of criteria for what constitutes a language, as opposed to a dialect. Thus, when languages are collected for sampling purposes, there is a certain amount of uncertainty about their status as comparable units of analysis.

¹ According to the sixteenth edition of Ethnologue (Lewis), the total number of the world’s languages was somewhat smaller than in the seventeenth edition (Lewis et al.). Paul Lewis (p.c.), one of the editors of the latter edition, explains that the increase is due largely to the introduction of a new category, i.e. ‘dormant languages’. These languages have ‘no remaining speakers but [they] are still closely associated with a living ethnic identity’. The editors’ reason for creating this category is that many communities with dormant languages are making efforts to revive their languages, and they should thus be distinguished from extinct languages. There are also a few languages that have been newly identified and added to the catalogue.

² For instance, Lewis mentions as an additional criterion a common literature or an ethnolinguistic identity with a central variety that other varieties understand.



OUP CORRECTED PROOF – FINAL, 20/11/2017, SPi

5.2 Some recalcitrant issues in language sampling

Second, as explained in §., inherited or contact-mediated similarities must be separated from language universals or universal preferences. Otherwise, shared properties will be incorrectly recognized as something inherently preferred in human language when many of the exemplifying languages happen to belong to a very small number of large language families, or when the properties tend to diffuse easily across genealogical boundaries through migration and/or borrowing. As will be discussed in §., linguistic typologists have been developing different sampling procedures with this important point in mind. However, it needs to be realized that it is not possible to eliminate the effects of genealogical relatedness or language contact completely from language samples (but see §.). The best we can do is to minimize these non-linguistic effects as far as possible when creating a language sample. The reason for this is not too difficult to fathom. While genealogical relatedness in the not too distant past can sometimes be established, and relatively recent language contact may be well known or even documented, genealogical relatedness or language contact in the remote past is well beyond our reach. The Comparative Method, typically used to establish genealogical relatedness among the world’s languages, is said to take us only ,–, years back into the past (e.g. Fox , Campbell  for the Comparative Method). Although we do not know exactly when modern human languages started to be used—that is, when human languages reached the current level of development (e.g. sometime between , and , years ago, according to the genetic scientist Luigi Luca Cavalli-Sforza (: ))—going ,–, years back into the past will not tell us much about the world’s languages in the better part of human history.
Written records about contact among languages cannot take us even that far back into the past, as writing has an extremely short history in the grand scheme of things. Thus, it is not possible to detect language contact in the remote past either. Language families may now seem to be genealogically independent of each other, but there is no way to prove that they did not derive historically from a common ancestor or that they were not in contact with each other in the remote past. We can in principle prove genealogical relatedness or contact, provided that we have found evidence in support of it, but we can never disprove genealogical relatedness or contact. Put differently, absence of evidence is not evidence of absence. Remote genealogical relatedness and contact will be discussed further in §. and §..


The foregoing issues may seem insurmountable, but that does not mean that typological—or other linguistic—research cannot be carried out until they have been resolved. True, language is not a clearly defined concept, and the boundary between language and dialect may not be drawn to everyone’s satisfaction. There is probably little that can be done to rectify this situation, however. The best we can do is to use one catalogue of the world’s languages (e.g. Lewis et al.  or Ruhlen ) consistently. The expectation here is that different catalogues of the world’s languages employ their own sets of criteria for distinguishing between languages and dialects as consistently as possible. While genealogical relatedness or contact in the remote past is impossible to disprove, proposals have been put forth in order to understand their possible effects on the formulation of cross-linguistic generalizations and even to alleviate them in sampling procedures. These proposals may never be able to deal with the issue of remote genealogical relatedness or contact satisfactorily. It must also be borne in mind that the languages of the world today themselves are a kind of sample of all languages ever spoken (or signed) on this planet. In other words, many more languages may have disappeared into oblivion than there are now in the world. Most of these extinct languages having left no record, it is impossible to include them in typological research. Now, what if languages regarded as genealogically independent of each other had descended, in the remote past, from an unknown, extinct language? In a situation like this, it is impossible to isolate genealogical relatedness. 
Similarly, languages that shared properties through contact in the remote past may have all disappeared, although their descendants, now living in geographically non-adjacent areas, have retained some of these properties but not to the extent that they can be readily recognized as constituting a linguistic area (or Sprachbund in German). These are hypothetical scenarios, but they demonstrate why it is impossible, in the present state of our knowledge, to eliminate remote genealogical relatedness or contact completely from sampling processes. Dryer (: ) points out that ‘[e]ven if the shared characteristic is a common retention, the two languages can still be considered independent with respect to the characteristics, since after sufficient time, it will be largely a matter of chance that they still share the characteristic’ (emphasis added). But when an instance of genealogical relatedness is claimed to have been isolated, how do we know whether that instance is remote enough to be ignored along the lines of Dryer’s suggestion? 


In other words, the question that begs to be asked is: how long is ‘sufficient time’? The moral of Dryer’s suggestion seems to be that we should not worry too much about remote genealogical relatedness or contact. More to the point, not all of the world’s languages have been documented; only % of the world’s languages have decent descriptions. It seems unrealistic to speak of remote genealogical relatedness or contact when so much more remains to be discovered about the world’s languages as spoken (or signed) today. The reason is that what we understand about remote genealogical relatedness or contact is likely to change drastically once we have found out (more) about the remaining % of the world’s languages. In view of this state of affairs, it is, for want of a better word, premature—one dares to say, even counterproductive—to be overly concerned about the effects of remote genealogical relatedness or contact. Rather, it is more important to formulate cross-linguistic generalizations on the basis of as many of the world’s languages as possible, while trying to minimize the effects of remote genealogical relatedness or contact as far as possible (see §. and §.). No less important in this context is, of course, the urgent need to document as many languages as possible before they disappear into oblivion (e.g. Nettle and Romaine ; Evans ).

5.3 Types of language sample

There are two major types of language sample, to be chosen depending on what it is that we wish to find: the variety sample and the probability sample. When the goal of our research is to identify all structural types used to express a given meaning or function in the world’s languages, we may not know much in advance about the phenomenon under study. For this type of explorative research, what is referred to as a variety sample is most suitable. The basic requirement of a variety sample is that it should be representative of the world’s languages, representative in the sense of genealogical diversity among other things. How to create such a representative sample will be the topic of §.. As our research progresses, we may continue to increase the size of the sample for further investigation, more or less until no additional types can be found. In principle, this kind of explorative research does not cease until all languages spoken in the world have been looked at—although that will not be possible for


the reasons given earlier (e.g. the lack of descriptions or the disappearance of languages). For instance, if we wish to ascertain what structural strategies are utilized for the expression of comparison, we may start with a modestly sized variety sample (e.g. – languages) but must be ready and willing to include more and more languages in the sample, as circumstances permit, because we do not have any prior idea of what or how many types of the comparative construction may exist in the world’s languages. In other words, we cannot claim that there are only such and such structural types used for the expression of a given meaning or function by surveying a small fraction of the world’s languages. For this reason, Bakker (: ) points out that variety samples ‘can never be too large’ for purposes of explorative research. To survey as many languages as possible is very important because different structural types may also have their own subtypes. For instance, Stassen’s () study of comparative constructions identifies at least three different subtypes of the so-called fixed-case adverbial comparative construction type (in which the standard NP appears as the object of an adverbial phrase with a fixed case).3

3 The standard NP is the NP ‘which indicates the object that serves as a yardstick for the comparison’ (Stassen : ), e.g. Carolyn in Laura is taller than Carolyn.

Stassen’s sample contains  languages. This means that further research may require that more languages be examined, not least in order to see whether there may be other subtypes of the fixed-case adverbial comparative type, or even whether there may be comparative construction types other than the ones already recognized in Stassen’s work. (It is quite possible that no additional types or subtypes will be found, but the point is that we will not know that unless many more languages have been looked at.) Moreover, experience shows that rare types tend to be discovered when languages hitherto unknown to the research community are taken into account. One of the most celebrated examples of this kind may be the ‘discovery’ of object-initial languages, i.e. Object–Verb–Subject and Object–Subject–Verb. It had been widely believed that there were no object-initial languages in the world until the late s, when object-initial languages (spoken mainly in the Amazon) began to be brought to the attention of the wider linguistic community (i.e. Derbyshire ; Derbyshire and Pullum ; Pullum ). The need to increase the size of a variety sample may entail that once the research has reached a reasonable level of confidence about the




structural variation that it aims to capture (i.e. well past the initial phases of investigation), the task may become more of surveying additional languages than of increasing the size of the sample. By this it is meant that in the later phase of the research, we may decide to study more and more languages as their descriptions become available (e.g. in published form) instead of continuing to enlarge the variety sample—or replicating the study by using larger and larger samples. What this also entails is that in the later stage of the research, we do not need to be restricted too much by strict sampling procedures in terms of genealogical relatedness, linguistic areas, or other variables (e.g. Himmelmann : –). After all, the goal of the type of research which a variety sample is created for in the first place is to characterize the structural variation in the world’s languages as fully as possible. The closer we get to achieving this ultimate goal, the less relevant to the research the need for a sample becomes. In other words, what is called for is more data, not more samples. This may also mean that languages the descriptions of which were already available but were not included in the sample(s) also need to be brought into the fold of the research, because while languages in a given language family are fairly similar typologically (e.g. Dryer : ), they may also use radically different structural types, highlighting the possibility that some hitherto unknown structural types may have escaped the net in the process of language sampling. In contrast, if we want to ascertain whether there is a correlation between two or more structural properties, e.g. VO or OV order, and the use of prepositions or postpositions, how many languages are to be selected for a language sample may not be so much an issue as which languages are to be selected for a language sample. 
In order to quantify the likelihood of a language being one of the four (logical) types, i.e. VO & Preposition, VO & Postposition, OV & Preposition, and OV & Postposition, we need to have a prior understanding of VO vs OV and Preposition vs Postposition. This understanding will exclude languages without a dominant word order (e.g. Chukchi (Northern Chukotko-Kamchatkan; Chukotko-Kamchatkan: Russia), Kharia (Munda; Austro-Asiatic: India)), or without prepositions or postpositions (e.g. Itelmen (Southern Chukotko-Kamchatkan; Chukotko-Kamchatkan: Russia), Mien (Hmong-Mien: China)) from the scope of the research (and also from the language sample to be created). The language sample suitable for this kind of research is called a probability sample, as the primary purpose of the research is to


determine the probability of a language being of a specific type, e.g. there is an % likelihood of a VO language having prepositions (e.g. Dryer ). It is not difficult to see why it is important, when creating a probability sample, to be mindful—far more mindful than in the case of a variety sample—of the effects of genealogical relatedness and language contact. Languages in a given language family may have inherited VO and Preposition from their ancestor language (i.e. genealogical inheritance), or languages may have acquired VO and Preposition from other languages through contact (i.e. areal diffusion). For example, Dryer (: ) points out that about % of the SVO languages in the world, as reported in Tomlin (), are from one large language family, i.e. Niger-Congo. Thus, if it were not for this single large language family, ‘the number of SVO [or VO] languages in the world would have been considerably lower, in fact not much more than half the number of SOV [or OV] languages’. We would not want to include too many languages from the Niger-Congo family when investigating the correlation between VO vs OV and Preposition vs Postposition, because of the fact that Niger-Congo languages have VO order through genealogical inheritance. Similarly, Dryer (: ) demonstrates that in Nichols’s () sample, about two-thirds of the languages from North America employ head-marking (i.e. grammatical marking on heads, not on dependents) whereas only two out of thirty-six languages outside North America have head-marking. Dryer (: ) also points out that ‘the thirteen verb-initial languages in [Nichols’s] sample include four instances of pairs of languages from the same family’. He goes on to argue that the correlation between head-marking and verb-initial word order, as proposed by Nichols (), may be nothing more than ‘an artifact of the areal skewing of the languages in Nichols’[s] sample’. 
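The kind of quantification that a probability sample supports can be sketched very simply. The following Python snippet computes a conditional proportion of the VO & Preposition kind from a small set of invented (word order, adposition type) pairs; the data and the resulting figure are illustrative only, not real typological counts.

```python
from collections import Counter

# Invented mini-sample: one (word order, adposition type) pair per language.
# The proportions here are for illustration, not real cross-linguistic data.
sample = [
    ("VO", "Prep"), ("VO", "Prep"), ("VO", "Prep"), ("VO", "Post"),
    ("OV", "Post"), ("OV", "Post"), ("OV", "Prep"), ("OV", "Post"),
]

counts = Counter(sample)

# Conditional proportion: of the VO languages, how many have prepositions?
vo_total = sum(n for (order, _), n in counts.items() if order == "VO")
p_prep_given_vo = counts[("VO", "Prep")] / vo_total
print(f"P(Preposition | VO) = {p_prep_given_vo:.2f}")
```

In a real study the rows would come from a carefully constructed probability sample, precisely because, as the surrounding discussion shows, raw counts are meaningless if the sampled languages are genealogically or areally skewed.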
Thus, the researcher should ascertain whether the combination of VO & Preposition is frequently attested in the world’s languages because there is a genuine linguistic preference for it, or merely because it happens to be found in large language families or because it happens to have a strong tendency for areal diffusion. When languages are drawn from large language families or linguistic areas, special care must be taken to minimize the effects of genealogical relatedness and language contact so that genealogically inherited and contact-mediated properties will not be confused with linguistic preferences. Thus, in a probability sample, which languages, as opposed to how many languages, should be selected for sampling purposes is of great importance. How to create a


sample while minimizing the effects of genealogical inheritance and areal diffusion is the topic of §. and §.. Cross-linguistic generalizations such as the correlation between VO and Preposition, formulated on the basis of probability samples, come with the expectation that they can be extended to the total population of human languages. For instance, based on the quantified probability of a language being of the VO & Preposition type (e.g. Dryer ), we will be in a position to say that a VO language has an % likelihood of having prepositions. As the empirical base of the research expands, we may test this correlation against further data (i.e. more languages). It is, however, not just a matter of examining additional languages by using larger samples, because we should continue to be mindful of the effects of genealogical relatedness and language contact when bringing further data to bear on the proposed correlation. In other words, special care must be taken to minimize these effects throughout the whole duration of the research. In this respect, a probability sample differs from a variety sample. In a variety sample too, languages may be chosen in such a way as to minimize the effects of genealogical relatedness and language contact. But the main point of this minimization is to produce a sample that is manageable and at the same time representative of the world’s languages. As already explained, the kind of research for which a variety sample is typically used does not need to be mindful of the effects of genealogical relatedness and language contact once it has entered the later phase of investigation. Genealogical relatedness or language contact has no real impact on the way the ultimate goal of this kind of research is achieved, because that goal is to identify all structural types used for a given meaning or function in the world’s languages.
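That type-collecting goal can be pictured as a saturation loop: keep surveying languages, note every newly attested structural type, and stop only when several consecutive languages add nothing new. The sketch below models this logic in Python; the survey data, the language labels, and the type names (loosely echoing Stassen-style comparative types) are all invented for illustration, and the patience threshold is only a practical stand-in for the open-ended search the text describes.

```python
def collect_types(survey, patience=3):
    """Return the attested structural types, stopping after `patience`
    consecutive languages that contribute no new type."""
    seen, since_new = set(), 0
    for language, structural_type in survey:
        if structural_type not in seen:
            seen.add(structural_type)  # a new type: reset the counter
            since_new = 0
        else:
            since_new += 1             # nothing new from this language
        if since_new >= patience:
            break
    return seen

# Hypothetical survey results, one (language, comparative type) per row.
survey = [
    ("L1", "exceed"), ("L2", "locational"), ("L3", "exceed"),
    ("L4", "conjoined"), ("L5", "exceed"), ("L6", "locational"),
    ("L7", "conjoined"),
]
print(sorted(collect_types(survey)))
```

As the discussion above stresses, in real explorative research the loop never truly terminates: an apparently saturated inventory can always be upset by a newly described language, as the object-initial languages of the Amazon showed.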
While the variety sample and the probability sample are two main types of language sample, we may also operate with what is known as a convenience or opportunity sample to carry out a preliminary investigation of what we want to study. This type of sample, as the names suggest, is a sample consisting of languages that we happen to have ready access to. For instance, we may wish to study how comparison is encoded cross-linguistically. Since a (variety) sample is costly to create, we may first choose to work with a small number of languages, the data from which are readily available, for instance, through our own university libraries or personal contacts (e.g. native speakers, language specialists). Once the preliminary research is completed, we may have a decent amount of data on the basis of which to determine whether the 


research topic is worthy of further investigation, e.g. by means of a variety or probability sample. Using a convenience sample in the early phase of investigation may not be a bad idea, as it is very cost-efficient, that is, making use of what is readily available and nothing else. It may turn out that on the basis of initial findings, we may decide that the topic is not typologically interesting enough to warrant a large-scale investigation. To come to this kind of decision after having undertaken a large-scale investigation may indeed be ‘wasteful’, because we may have to go beyond readily available data in order to create a proper sample. Thus, it may not be injudicious to begin with a convenience sample to avoid such a disappointment. Not surprisingly, in fact, many of the samples so far used in linguistic typology are probably more convenience than variety or probability samples. For instance, Greenberg’s celebrated seminal work on basic word order typology made use of a convenience sample of thirty languages. His work, as is well known, was subsequently replicated and expanded by other linguistic typologists on the basis of larger and better samples (e.g. Hawkins ; Tomlin ; Dryer ). This also illustrates that one researcher may initiate a particular line of research with a convenience sample and develop it into a fecund research area, which other researchers may subsequently continue with, producing more reliable results on the basis of better samples.

5.4 Biases in language sampling and how to avoid them

When creating a language sample, whether a variety or probability sample, we need to be mindful of a number of biases to be avoided or at least minimized. Two such biases have already been alluded to in the previous section: genealogical bias and areal bias. First, as has already been explained, it is of utmost importance not to confuse linguistic preferences with genealogically inherited properties. Languages within a genealogical group are bound to share a large number of properties because they descended from a single ancestor language. Thus, if we want to determine which of the six (logical) word order types at the clausal level is the most preferred in the world’s languages, inclusion of too many languages from large language families will have an adverse effect on what we wish to find out. It is generally believed that SOV and SVO are the two most frequent word order types in the world’s


languages, but about % of the SVO languages belong to one large language family, namely Niger-Congo, which has , member languages (or .% of the world’s languages) (Lewis et al. ). Without these Niger-Congo languages, the number of SVO languages would drop to less than half the number of SOV languages. Thus, the claim, made on the basis of a sample ‘flooded’ by Niger-Congo languages, that SVO and SOV are the two most frequent word order types in the world’s languages (e.g. Tomlin ) should be called into question, because it is likely to be based on the misinterpretation of SVO in Niger-Congo languages, i.e. genealogical inheritance, as something inherently preferred in human language (see §.). Of course, the question arises as to how many is too many, when it is said that a sample should avoid including too many languages from large language families. This topic will be discussed in §.. Similarly, a language sample will suffer from an areal bias if it contains too many languages from the same linguistic area (or Sprachbund), well-known instances of which include the Balkans, Meso-America, and South East Asia. For example, there are a number of properties that the languages of MesoAmerica have in common but that are absent from languages outside the area, e.g. nominal possession, use of relational nouns, vigesimal numeral systems, non-verb-final basic word order, several widespread semantic calques (Campbell et al. ). Languages from the same linguistic area tend to share properties by way of mutual influence through contact. For this reason, contact-mediated properties should not be confused with linguistic preferences either. For example, the proposed correlation between head-marking and verb-initial order, mentioned in the previous section, highlights the importance of this point. While it remains to be seen, based on better samples, whether the proposed correlation is a genuine one, it is correct to say that the original study (i.e. 
Nichols ) relied on a language sample that contained too many exemplifying languages from one and the same area, namely North America. If head-marking and verb-initial word order are indeed areal features of North America, the strength of the proposed correlation between these two properties will be drastically vitiated. Thus, properties that languages share due to their common genealogical heritage or contact are what may be called ‘chance’ or accidental structural properties of language families or linguistic areas, respectively (Comrie : ); they must be distinguished from those properties which represent linguistic preferences and should not be 


interpreted as what is preferred in human language. Special care must, therefore, be taken to ensure that particular language families or genealogical groups not be overrepresented (or underrepresented) in language samples but that languages be selected equitably from all known language families or genealogical groups. In addition to genealogical and areal biases, there are three additional types of bias: bibliographical, cultural, and typological. These biases will in turn be discussed briefly. Linguistic typologists often find themselves in an unenviable situation wherein they are forced to select languages for a sample, depending mainly on whether grammatical descriptions are available. This is indeed an unfortunate situation but sometimes cannot be avoided. For instance, Indo-European languages are very well documented in both breadth and depth, whereas the coverage of the languages of New Guinea and South America is very meagre. Even if we are willing to incorporate a representative number of languages from New Guinea or South America in our sample, we may be unable to have a reasonable amount of access to them simply because there are not enough (detailed) grammars of languages from these regions available in the first place. This is something that cannot easily be remedied and will continue to create a certain amount of distortion or tension in language samples ‘even where the existence of the skewing and of its disadvantages are recognized’ (Comrie : ). Moreover, we may work in a place where already published grammars are unfortunately not readily accessible; the libraries that we rely on for our research may hold mainly Indo-European or Oceanic languages, and not much else, for instance. This is known as bibliographic bias. If this kind of bias is unavoidable and present in a sample, the least that we can do is to state openly the existence of the problem for the benefit of other researchers. 
Genealogically related languages or languages in close contact also tend to share cultural characteristics. If it is assumed, as some argue, that a language is an important part of a culture, it can also be assumed that culture may have a bearing on language (e.g. Trudgill  for a recent such view). What this entails for language sampling is that it may also be advisable to avoid including too many languages from one and the same culture (e.g. Perkins  for such a view). Special attention to cultural bias may indeed be necessary, especially if we want to investigate structural properties that are likely to be related to cultural beliefs and practices. For instance, it is not unreasonable to assume that the presence of speech levels—e.g. different verbal endings selected, 


depending on who speaks to whom, as in Korean, which has six different speech levels—may be related to the social hierarchy of the speech community. That is, more hierarchical speech communities may have more need for different speech levels than less hierarchical ones. Thus, if we wish to do typological research on speech levels, we may need to control for cultural bias when creating a sample, although it may not be an easy task to come up with a typology of cultures (but cf. Murdock ). One thing to bear in mind in relation to cultural bias is that, in general, linguistic properties are not borrowed as easily as cultural ones.4 It may thus be judicious not to equate linguistic borrowing with cultural borrowing. Moreover, there is a certain amount of overlap among genealogical, areal, and cultural bias. Genealogically related languages tend to be spoken in one and the same area, and genealogically related languages or languages in contact tend to share cultural characteristics. The last type of bias to be discussed in this section, typological bias, can best be illustrated by a possible correlation between two properties, p and q. (More than two properties may be involved.) If a language sample contains a disproportionate number of languages with one of the properties, then the research outcome may be questionable. For example, Nichols () and Siewierska () arrive at very different conclusions about the relationship between case alignment type (i.e. grammatical marking of core arguments; see Chapter ) and word order type (e.g. OV vs VO, or Verb-initial vs Verb-medial/final). Nichols does not discern any correlation between case alignment type and word order type, whereas Siewierska demonstrates a significant correlation between them. In Nichols’s sample, % of the dominant case alignments are based on those associated with verbs, whereas in Siewierska’s sample only % of the dominant case alignments are related to verbs; that is, the majority of the dominant case alignments were related to nouns and pronouns. In other words, the discrepancy between Nichols () and Siewierska () concerning the possible correlation between case alignment type and word order type may boil down to a typological bias, that is, whether the languages in the sample were typologically skewed in favour of verbs.

4 For instance, see Driver and Chaney () for discussion of the Yurok, Karok, and Hupa tribes in California, whose languages belong to different stocks yet whose cultures are almost identical (Dryer : –).




While the five types of bias all demand careful attention in language sampling, genealogical and areal biases seem to be the ones which researchers are concerned primarily with. This is probably the case first because genealogically related languages tend to be found in geographically adjacent areas and share similar cultural traits, and second because the further we go back into the past, the more difficult it becomes to distinguish genealogical relatedness and areal diffusion. Thus, genealogical and areal biases pose theoretical problems or issues for language sampling. In contrast, bibliographical bias is more a practical issue than anything else, typological bias is probably related directly to what is being investigated, and cultural bias may not be entirely independent of genealogical and/or areal bias.
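One crude guard against the genealogical bias discussed in this section is to let each family, rather than each language, contribute a single data point. The sketch below contrasts raw per-language counts with one-vote-per-family counts; the languages, family names, and word order values are entirely invented, with `Fam1` playing the role of a large family like Niger-Congo in the discussion above.

```python
from collections import Counter

# Invented data: (language, family, basic word order). Fam1 stands in for a
# large family that would otherwise flood the sample.
data = [
    ("A", "Fam1", "SVO"), ("B", "Fam1", "SVO"), ("C", "Fam1", "SVO"),
    ("D", "Fam1", "SVO"), ("E", "Fam2", "SOV"), ("F", "Fam3", "SOV"),
    ("G", "Fam4", "SVO"),
]

# Raw counts: every language votes, so the big family inflates SVO.
per_language = Counter(order for _, _, order in data)

# One vote per family (here, simply the first language encountered per
# family), a crude way of damping genealogical inheritance.
per_family, seen = Counter(), set()
for lang, fam, order in data:
    if fam not in seen:
        seen.add(fam)
        per_family[order] += 1

print(per_language)  # SVO leads only because Fam1 is overrepresented
print(per_family)    # the SVO lead disappears under one vote per family
```

Actual sampling procedures, discussed in §., are considerably more sophisticated than picking one language per family, but the contrast between the two counts illustrates why family structure cannot simply be ignored.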

5.5 Independence of cases

The reason why genealogical, areal, and other biases need to be avoided or at least minimized when creating a language sample is that selected languages must be independent of each other or must not be different instances of the same case. To put it simply, a sample should not include more of the same thing to the exclusion of other things. Indeed, what would be the point of operating with a sample when many of the languages included in that sample belong to one and the same language family? In such a genealogically biased sample, there is bound to be a skewing in favour of genealogical inheritance over linguistic preference. This kind of independence among sampled languages is, in fact, a fundamental requirement in all statistical procedures, and it is commonly referred to in the literature as the independence of cases. However, for the reasons already explained, e.g. remote genealogical relatedness or areal diffusion (see §.), it is not likely that in language samples the independence of cases can be maintained to the fullest extent. This is the case regardless of whether we use a large or a small sample.5

5 Thus, Rijkhoff and Bakker (: ) conclude: ‘[e]ssentially, . . . there does not seem to be a real solution.’

For instance, in a sample of  languages, there is no absolute guarantee that none of the languages chosen are remotely related to one another. There may be deeper genealogical relatedness among the world’s languages than is generally believed. The more languages



OUP CORRECTED PROOF – FINAL, 20/11/2017, SPi

5. 5 I N D E P E N D E N C E O F C A S E S

included in a sample, the higher the possibility of including such remotely genealogically related languages. This is not difficult to understand. Take a mundane analogy. Imagine two rooms. One room is filled with  people, and the other room only contains ten people. It is more likely to find people related to each other in the former than latter room. In other words, genealogical distance in the room with  people is likely to be less than that in the room with only ten people. The independence of cases in language sampling is not just confined to genealogical relatedness. Indeed, it must also apply to linguistic areas since it proves very difficult to exclude from large samples languages that may in one way or another have been in contact or may come from the same linguistic areas, especially when these areas are not generally recognized in the literature. It is impossible, therefore, to extricate completely from large samples variables or factors that are not independent of genealogical affiliation or geographical location. In fact, Dryer (: ) goes so far as to suggest that it may not be possible to construct a sample of many more than ten languages if one decides to be strict about the independence of cases in language sampling. Needless to say, a sample of ten languages is unlikely to produce any significant generalizations about the nature of human language. Should we go instead for a small language sample so as to maximize genealogical distance between sampled languages? This may sound like a good idea, because the fewer languages there are in a sample, the lower the probability of their remote genealogical relatedness will be. Unfortunately, small language samples are not without problems either. In a small sample, small language families may not be even represented.6 What this means is that not all independent cases may have a chance of being included in a sample. 
This can be a conceptual problem, because the independence of cases certainly cannot be upheld when some independent cases are included in, but others are excluded from, the final sample. It will not be clear from that sample what the sampled languages are independent of. Even worse is the situation wherein small language families excluded from the sampling happen to possess rare structural types, because 'exceptional types test the rule' (Perkins : ).

6 As will be seen in §.., one-third of the world's language phyla (see n.  for the definition of phylum) are singletons or consist of isolates (Ruhlen ; Rijkhoff et al. : ).
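The two-room analogy can be made concrete with a short simulation: the larger the sample drawn from a fixed population of language families, the more likely it is to contain at least two languages from the same family. The family inventory below is purely hypothetical, a minimal sketch rather than a model of any real classification.

```python
import random

def prob_shared_family(family_sizes, sample_size, trials=2000, seed=42):
    """Estimate the probability that a random sample of languages
    contains at least two languages from the same family."""
    rng = random.Random(seed)
    # Flat population: one entry per language, tagged with its family index.
    population = [fam for fam, size in enumerate(family_sizes) for _ in range(size)]
    hits = 0
    for _ in range(trials):
        sample = rng.sample(population, sample_size)
        if len(set(sample)) < sample_size:  # some family appears twice
            hits += 1
    return hits / trials

# 100 hypothetical families of 20 languages each (2,000 languages in all).
families = [20] * 100
small = prob_shared_family(families, 10)
large = prob_shared_family(families, 100)
print(small, large)  # the larger sample is far more likely to repeat a family
```

The point is the direction of the effect, not the particular figures: even a modest sample is quite likely to contain a related pair, and a large one almost certainly will.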




LANGUAGE SAMPLES AND SAMPLING METHODS

Since the choice between a large and a small sample does not seem to alleviate the problem associated with the independence of cases, it comes as no surprise that efforts have been made to reach a compromise between maximizing genealogical or areal distance among sampled languages (as in a small sample) and ensuring the representation of all language families in a sample (as in a large sample). Indeed, as will be shown in the next section, striking a balance between these two desiderata is probably the most important objective of the various language sampling proposals put forth over the decades.

5.6 Sampling procedures

By definition, a sample is a scaled-down version of a particular domain. The domain of objects that is the target of inquiry is referred to technically as the universe. In linguistic typology (or in linguistics generally), the universe is the world's languages (or, more specifically, all human languages past, present, and future), and languages are drawn from this universe for the purpose of creating a sample. The basic thinking in linguistic typology on language samples is that they must be accurately representative of the world's languages. How do we ensure that a language sample is an accurate representation of the world's languages? To that end, we first need to impose some kind of classification system on the world's languages so that they can be put into non-overlapping categories. Otherwise, languages could not be differentiated from each other, and one language would be as good as another for sampling purposes; in a situation like this, for instance, we would not be able to tell whether we are selecting more of the same or more of the different. The process of placing languages into non-overlapping categories (i.e. strata) is technically known as stratification.

5.6.1 Proportional representation in language sampling

The basis of stratification may be genealogical, areal, cultural, or something else. If the chosen stratification system is genealogical (which is predominantly the case in linguistic typology), a genealogical classification of the world's languages (e.g. Ruhlen ; Lewis et al. ) can be used. If the classification system is areal, the world's languages



may be placed into different geographical areas (e.g. Africa, Eurasia, North America, South America, or even smaller areas), depending on where they are (primarily) spoken, and languages can then be drawn from these areas in order to construct a sample. The basic point of stratification is to ensure that each of the recognized categories, be it a genealogical group or a geographical area, contributes a number of languages to the sample in proportion to its size (in comparison with other categories). For instance, in a sample of  languages, a language family that accounts for % of the world's languages will contribute ten languages (  %) to the sample; in a sample of  languages, the same family will contribute ten languages (  %); and so on. This sampling procedure is based on what may be termed proportional representation.

There is, however, a slight complication to proportional representation in language sampling, because different language families may have different degrees of genealogical diversity or depth. For instance, two given language families may have exactly the same number of languages, e.g. % of the world's languages. But one of these language families may have more complicated internal genealogical diversity or depth (e.g. more genealogical (sub)subgroupings) than the other. In a situation like this, simply selecting an identical number of languages from the two families will not do the trick, because we are more likely to draw more of the same from a genealogically less diverse language family than from a genealogically more diverse one. Thus, it is also important to take into account the genealogical diversity or depth of language families when constructing a language sample. In other words, weighting should be given to the level of genealogical diversity or depth.
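The proportional-representation idea just described can be sketched in a few lines. The family names and sizes below are hypothetical, and the largest-remainder step is one simple way of absorbing rounding; the sampling literature discussed here does not prescribe a particular rounding rule. The same routine applies whether the strata are language families or geographical areas.

```python
def proportional_allocation(stratum_sizes, sample_size):
    """Allocate sample slots to strata in proportion to stratum size.
    stratum_sizes maps a stratum name to its number of languages."""
    total = sum(stratum_sizes.values())
    # Provisional (fractional) quota for each stratum.
    quotas = {s: sample_size * n / total for s, n in stratum_sizes.items()}
    alloc = {s: int(q) for s, q in quotas.items()}  # round down first
    # Hand leftover slots to the strata with the largest remainders.
    leftover = sample_size - sum(alloc.values())
    by_remainder = sorted(quotas, key=lambda s: quotas[s] - alloc[s], reverse=True)
    for s in by_remainder[:leftover]:
        alloc[s] += 1
    return alloc

# Hypothetical families whose sizes sum to 1,000; draw a 100-language sample.
families = {'A': 500, 'B': 300, 'C': 150, 'D': 50}
print(proportional_allocation(families, 100))  # {'A': 50, 'B': 30, 'C': 15, 'D': 5}
```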
One possible way of achieving this is to set an arbitrarily controlled time depth of genealogical relatedness and determine how many genealogical groups each of the world's language phyla (or stocks)—as have been established for the world's languages (e.g. Ruhlen ; Lewis et al. )—may contain.7

7 A stock or a phylum is a putative genealogical grouping larger than a language family; a stock or a phylum may contain more than one (well-established) language family.

The number of genealogical groups within a phylum will then be taken to be a measure of the genealogical diversity or depth of that phylum. This is how it works. The world's languages are assumed





to have had a certain number of genealogical groups at a time depth of , years. Put differently, each of these genealogical groups was an individual language about , years ago and has since split into the descendant languages spoken today. (The time depth of , years is, needless to say, arbitrary, but it is intended to reflect genealogical diversity within phyla in some way.) For instance, the world's languages can be put into sixteen different phyla, with an aggregate number of  genealogical groups (Bell ). In this particular classification, the Niger-Kordofanian and Amerind phyla contain  languages each. However, the genealogical diversities of these two phyla are not comparable, the former being much less genealogically complicated than the latter. The Niger-Kordofanian phylum is allocated forty-four genealogical groups, while the Amerind phylum is estimated to contain  genealogical groups. The Amerind phylum is thus more than three times as genealogically diverse as the Niger-Kordofanian phylum. The Niger-Kordofanian phylum will then be represented in a given sample at the ratio of / (or .%), whereas the Amerind phylum will be given a much larger representation at the ratio of / (or .%). These same ratios of representation will apply regardless of the size of the sample. In a sample of  languages, for example, the Niger-Kordofanian phylum will contribute about nine languages (  /), whereas the Amerind phylum will be represented by about thirty-one languages (  /).

Basically, this is a top-down approach, in that the size of a sample is normally predetermined and each language phylum is proportionally represented in the final sample according to its share of the total number of genealogical groups. A sample can be created in a similar way when the world's languages are stratified into different geographical areas.
Thus, no disproportionate number of languages should be selected from the same geographical area for a given sample. To that end, the whole world can be carved up into a number of geographical areas, from each of which only a representative number of languages can be selected (e.g. Tomlin ). For instance, suppose a given geographical area contains , languages. That geographical area will contribute about fourteen languages to a -language sample (i.e.   ,/,, assuming the total number of the world’s languages presented in Lewis et al. ()). In this way, the investigator can make a deliberate attempt to refrain from choosing too many languages from the same area. 
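The weighting idea behind the Bell-style example above can be illustrated with two hypothetical phyla of equal size but unequal internal diversity: allocating by genealogical groups rather than by raw language counts gives the more diverse phylum a much larger share of the sample. The phylum names and group counts below are invented for illustration, not Bell's figures.

```python
def share_by_groups(group_counts, sample_size):
    """Each phylum's share of the sample is its number of genealogical
    groups (at the chosen time depth) over the total number of groups."""
    total = sum(group_counts.values())
    # round() can drift from sample_size in general; a fuller implementation
    # would correct for this (e.g. by largest remainders).
    return {p: round(sample_size * g / total) for p, g in group_counts.items()}

# Two hypothetical phyla with the same number of languages (say 300 each)
# but very different internal diversity: 40 vs 140 genealogical groups.
alloc = share_by_groups({'PhylumA': 40, 'PhylumB': 140}, 100)
print(alloc)  # {'PhylumA': 22, 'PhylumB': 78}
```

Size-based proportional representation would split the 100 slots evenly between these two phyla; group-based weighting does not.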



5.6.2 Independence of cases in proportional representation

While proportional representation in language sampling may enable one to construct a sample representative of the world's languages in terms of genealogical relatedness, contact-mediated similarities, or other parameters of stratification (e.g. cultural), the question remains as to whether it satisfies the independence of cases to the fullest extent possible. First of all, selecting languages in proportion to the number of genealogical groups may ensure genealogical distance among the world's language phyla as they are currently understood to exist, but languages belonging to the same phylum are still genealogically related, and drawing multiple languages from each of the phyla may not escape completely the criticism that more of the same has been included in a sample, however representative, genealogically and/or areally, of the world's languages that sample may claim to be. In fact, if we wish to uphold the independence of cases to the fullest extent possible, we should not include more than one language from each language phylum. The same comment applies to geographical areas: no more than one language should be drawn from a single geographical area to construct a sample.

Language sampling carried out strictly along these lines is found in Perkins (, ), who deals with the independence of cases by adding a 'qualitative' dimension to language sampling, although his basis of stratification is primarily cultural and secondarily genealogical. Perkins () derives a sample of fifty languages by first choosing independent cultures from the population of all cultures and then determining the language spoken by the people of each chosen culture.8 He further informs his selection of the fifty languages/cultures by ensuring that they not be (substantially) close to each other in terms of genealogical and cultural relatedness.
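Perkins's core constraint (one case per language family and one per cultural area) can be sketched as a greedy filter over a candidate list. This is only an illustration of the constraint, not Perkins's actual procedure, which starts from a list of independent cultures; all names below are hypothetical.

```python
def perkins_style_sample(candidates, target_size):
    """Greedy selection: admit a language only if neither its family nor its
    cultural area is already represented in the sample."""
    seen_families, seen_areas, sample = set(), set(), []
    for name, family, area in candidates:
        if family in seen_families or area in seen_areas:
            continue  # would duplicate a family or a cultural area
        sample.append(name)
        seen_families.add(family)
        seen_areas.add(area)
        if len(sample) == target_size:
            break
    return sample

# Hypothetical candidates: (language, family, cultural area).
candidates = [
    ('L1', 'FamA', 'Area1'),
    ('L2', 'FamA', 'Area2'),   # rejected: FamA already represented
    ('L3', 'FamB', 'Area1'),   # rejected: Area1 already represented
    ('L4', 'FamB', 'Area2'),
    ('L5', 'FamC', 'Area3'),
]
print(perkins_style_sample(candidates, 3))  # ['L1', 'L4', 'L5']
```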
Perkins is acutely aware of the fact that in a large sample it is unavoidable to include closely related languages/cultures (see above). But this cannot be permitted in statistical procedures, because these procedures 'presuppose the independence of cases' (Perkins : ). He thus establishes his sample of fifty languages in such a way that no two languages/cultures are from the same language family or the same cultural area. Therefore, in this type of sampling method, one and only one language will be selected for each language family, regardless of the size of language families. In other words, there is no 'qualitative' difference between a language family of ten languages and a language family of , languages, because they will each contribute only one language to the sample.

8 Perkins () uses a universe of cultures in order to derive a sample of languages because 'it is more reasonable to expect that linguistic materials exist for cultures that have been studied by ethnographers than those that have not[;] the appearance of a culture on Murdock's () list makes it considerably more likely that the corresponding linguistic materials exist than for languages chosen from a language list [e.g. Voegelin and Voegelin (), Ruhlen ()]' (: ).

Perkins's sampling method is a significant improvement over previous ones in that it is specifically designed to maximize the genealogical (and also cultural) distance between sampled languages. In other words, Perkins's proposal is a welcome step in the right direction of meeting the requirement of the independence of cases in language sampling. Moreover, as Whaley (: ) notes, a sample of fifty languages is also practically manageable in size for a single researcher, although it may not be possible to construct a sample containing as many as fifty languages when the basis of stratification is areal, because geographical areas can be 'extremely large', even continental in size (e.g. Eurasia as a single linguistic area) (: ).9

There are two fundamental issues with Perkins's sampling method, however. First, discounting genealogical diversity or depth within language families runs the risk of failing to capture typological diversity in the world's languages adequately. Put differently, genealogical diversity or depth may entail typological diversity, unless proven otherwise. Thus, selecting no more than one language from each language family may not make it possible to recognize the full range of typological diversity attested within that language family.
The seriousness of this weakness will loom large if we are constructing a variety sample, the primary purpose of which is to identify all structural types used to express a given meaning or function in the world's languages. Second, in view of the reality of remote genealogical relatedness or contact, it may never be entirely possible to maintain the independence of cases even if a sample is allowed to contain no more than one language from each language family. This is simply because there is no absolute guarantee, in the present state of our knowledge, that the languages in a Perkinsian sample will all be completely genealogically independent of each other. A similar comment can be made about language contact in the remote past. What this means is that it may never be possible to construct a sample of fifty genealogically or areally independent languages, pace Perkins (, ).

9 In other words, there may be fewer than fifty geographical areas that can provide the basis for areal stratification. A sample of fewer than fifty languages, in turn, may not be large enough to generate statistically valid inferences about the nature of human language.

5.6.3 Having the best of both worlds: structural variation across and within phyla

What Perkins's (, ) sampling proposal attempts to achieve is well justified: it is important to ensure that a sample not contain more of the same (especially at the expense of the different). In pursuing this desideratum, however, we may lose sight of structural diversity within genealogical groups. One language, no matter how typical of the phylum that it belongs to, may not be enough to provide us with a full picture of typological diversity within that phylum. This will be a serious problem when the aim is to investigate structural variation in the world's languages. One obvious solution is to retain Perkins's insight, that is, counting only one language per phylum (minimal representation), while at the same time taking into account the genealogical diversity or depth within each phylum. In other words, structural variation across phyla is handled by means of Perkins's minimal representation, while structural variation within phyla is captured by means of proportional representation. This is essentially what Rijkhoff et al. () propose for creating a variety sample. They (: ) note that the best way to avoid genealogical bias is to ensure that all languages in the sample come from different phyla. This is in full agreement with Perkins's position that a sample must include at least one representative from each phylum so that there is minimal representation of all phyla in that sample.
They point out, however, that this way to control for genealogical bias will only give rise to a sample of fewer than thirty languages—assuming that there are twenty-seven phyla (à la Ruhlen ). Not surprisingly, such a small sample does not make a good variety sample, the primary goal of which is ‘to maximize the amount of [structural] variation in the data’ (Rijkhoff et al. : ). A good variety sample must reflect the greatest possible structural diversity so that even cases of the rarest type can have a chance of representation 



(Rijkhoff et al. : ). Clearly, a sample consisting of fewer than thirty languages will be unlikely to achieve this. Rijkhoff et al.’s solution couldn’t be simpler: to ensure minimal representation of all phyla, and to maximize structural variation at the same time by converting the internal structure of the language family tree into a numerical value. That is, the internal structure of genealogical trees is taken to be a measure of genealogical diversity. The more complicated the internal structure is, the greater the genealogical diversity. This operationalization of genealogical diversity is intended to reflect Bell’s () observation that ‘[i]f the strata [e.g. genealogical groups] are not equally homogeneous, some increase in sampling efficiency may be achieved by weighting . . . samples according to strata variability’. This is how Rijkhoff et al.’s algorithm works. The diversity value (or DV) is computed on the basis of the number of nodes at the intermediate levels between the top node, and the terminal nodes at the bottom end in a language family tree. Rijkhoff et al. () recognize twenty-seven phyla in the world’s languages, based largely on Ruhlen (). Note that Rijkhoff et al. (: ) treat nine language isolates as phyla, because they are most probably the last surviving member languages of extinct large language families. Thus, nine of the twenty-seven phyla are represented by language isolates. Once the DVs of the twenty-seven phyla have been worked out, they will invariably be used to decide how many languages must be selected from each phylum in order to construct samples of predetermined sizes, be they , , , or ,. These individual DVs add up to the aggregate DV of .. The rest of the sampling procedure goes through three phases. The first phase identifies those phyla that could not be represented in a given sample on the basis of their DVs alone. 
For instance, in a -language sample, each of the phyla without sufficient genealogical diversity or depth (i.e. with DVs lower than . (or ./)) is allocated one language. This is to ensure that structural variation across phyla be captured in the sample. There are fourteen such phyla (i.e. fourteen languages). Note that these phyla naturally include the nine language isolates, each of which has a DV of zero. The second phase involves the allocation of the remaining languages (i.e. the total number of sample languages minus the languages already allocated in the first phase (  = )). This allocation is done on the basis of DVs. For instance, the Amerind phylum has a DV of ., receiving an allocation of eighteen languages (i.e.   ./.). This particular phase is designed to capture 



structural variation within phyla. The final phase is basically a matter of tidying up loose ends: the effects of rounding are adjusted so that the total number of languages to be selected is brought in line with the intended sample size.

Rijkhoff et al.'s (: , ) operationalization of phylum-internal diversity is claimed to be an improvement over Bell's () age criterion or genealogical groups in that their DV computation 'can be seen as an objectivization of Bell's language groups', the basis of which is the time depth of , years. They argue that Bell's arbitrary time depth is difficult to apply equally to all phyla, especially when the histories of many phyla are not well understood, due mainly to lack of documentation. This is a fair point to make, but Rijkhoff et al.'s objectivization of Bell's age criterion hardly escapes the same criticism, and it in fact highlights one of the intractable problems with all sampling procedures based on genealogical stratification. The way language family trees are constructed is due as much to lack of understanding of, or uncertainty about, internal genealogical relations as to actual internal diversity. In fact, this point has not been missed by Rijkhoff et al., who (: –) point out: '[w]hat strikes us is that the number of languages (t) per non-terminal (nt) and preterminal (pt) node is low for relatively well-explored phyla like Indo-Hittite (ratios . and .), and rather high for phyla for which our knowledge still leaves much to be desired, such as Indo-Pacific (ratios . and .).' The reason for this difference is that phyla whose internal genealogical relatedness is well understood tend to have more intermediate groups recognized, with their trees being more hierarchical or less flat, whereas phyla whose histories cannot easily be accessed tend to contain fewer intermediate groups, thereby resulting in flatter or less hierarchical trees.
The problem is exacerbated by the undeniable fact that the genealogical classifications on the basis of which many samples have been set up are in turn based on different sets of criteria being applied to different genealogical groupings. Rijkhoff et al. () rely heavily on the genealogical classification list provided in Ruhlen (), with DVs computed on the basis of the internal structure of language trees. But how can we lay claim to comparability of the internal structure of language trees across the phyla when the conceptual basis of that internal structure differs from one genealogical grouping to another? These, of course, are not criticisms levelled at Rijkhoff et al.’s sampling approach per se but 



rather they are intended to emphasize the problem succinctly summarized by Bybee et al. (: ): [f]irst, there are many languages for which genetic [= genealogical] classification is unknown, unclear, or under dispute. Second, more is being learned each day about genetic relations, so that some of the information published in  may be incorrect. Third, different criteria were used in establishing the groupings in different parts of the world. In some cases, genetic grouping is based on extensive historical documentation and historical-comparative work (as in the case of Indo-European languages); in other cases, the groupings are based on lexicostatistical surveys; and in still others, it is admittedly only a geographical region that is being identified as a group.

Thus, Rijkhoff et al.'s objectivization of Bell's age criterion can only be as sound as the genealogical classification on which it is based. It must also be borne in mind that what is at issue is not objectivization of the internal diversity of phyla per se but objectivization of the internal structure of language family trees, with their weaknesses, flaws, gaps, and all, because, as Rijkhoff et al. (: , , passim) themselves reiterate, only the internal structure as represented or reflected in the form of a tree diagram is 'exploited to measure linguistic diversity among [genealogically] related languages'. Nonetheless, we may wish to concur with Bybee et al. (: ) that a genealogical classification list such as Ruhlen () or Lewis et al. () 'provides an objective basis for sampling that was established independently of any hypotheses that [we set out] to test'.
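To make the mechanics of §5.6.3 concrete, the sketch below substitutes a deliberately simplified diversity measure (a bare count of intermediate nodes in a nested-tuple family tree) for Rijkhoff et al.'s actual DV computation, which is defined over the tree's internal structure in a more articulated way, and then runs a three-phase allocation in the spirit of their procedure. The phyla, trees, and numbers are all hypothetical.

```python
def internal_nodes(tree):
    """Count the grouping nodes of a nested-tuple family tree; leaves are strings."""
    if isinstance(tree, str):          # a terminal node (an individual language)
        return 0
    return 1 + sum(internal_nodes(child) for child in tree)

def diversity_value(tree):
    """Crude stand-in for a phylum's DV: the number of intermediate nodes
    between the top node and the terminal nodes (isolates get 0)."""
    return max(0, internal_nodes(tree) - 1)   # exclude the top node itself

def three_phase_allocation(dvs, sample_size):
    """Phase 1: one language for each phylum whose DV falls below the
    per-slot share of the aggregate DV (this covers isolates, DV = 0).
    Phase 2: split the remaining slots among the other phyla by DV.
    Phase 3: absorb any rounding drift into the most diverse phylum."""
    threshold = sum(dvs.values()) / sample_size
    alloc = {p: 1 for p, d in dvs.items() if d < threshold}
    rest = {p: d for p, d in dvs.items() if d >= threshold}
    remaining = sample_size - len(alloc)
    total = sum(rest.values())
    for p, d in rest.items():
        alloc[p] = round(remaining * d / total)
    alloc[max(rest, key=rest.get)] += sample_size - sum(alloc.values())
    return alloc

# Three hypothetical phyla: an isolate, a flat two-language family, and a
# phylum with intermediate subgroupings.
phyla = {
    'Isolate': 'lang0',
    'Flat': ('lang1', 'lang2'),
    'Deep': ('lang3', ('lang4', ('lang5', 'lang6')), ('lang7', 'lang8')),
}
dvs = {p: diversity_value(t) for p, t in phyla.items()}
alloc = three_phase_allocation(dvs, 10)
print(dvs, alloc)
```

Even in this toy setting the intended behaviour is visible: the isolate and the flat family are guaranteed one slot each, while the internally diverse phylum absorbs the rest of the sample.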

5.7 Testing independence of cases at a non-genealogical level

It is generally accepted that the Comparative Method can only take us ,–, years back into the past. Human language is generally thought to have reached its current level of development sometime between , and , years ago (e.g. Cavalli-Sforza : ). Thus, it comes as no surprise that we are not in a position to say much about genealogical relatedness—or language contact, for that matter—in the remote past. There is far more that we do not know about linguistic prehistory than we know; what we think we know about linguistic prehistory is probably pure speculation, if not completely wide of the mark. It is one thing to acknowledge this reality, but it is quite another matter to attempt to minimize, in any way possible, the effects of remote genealogical relatedness or contact on language samples and, ultimately, on the validity of research findings. This minimization is of



crucial importance because, as has been pointed out on more than one occasion, we do not want to misinterpret the effects of genealogical relatedness or language contact as linguistic preferences. Otherwise, we will end up studying more of the same (i.e. properties shared through inheritance or contact) at the expense of the different (i.e. properties characterizing the nature of human language). The purpose of controlling for genealogical or areal bias is, as discussed in §., to ensure that the independence of cases be maintained to the greatest extent possible, if that is possible at all. Typically, language sampling methods attempt to address the independence of cases at a genealogical level (but cf. Perkins , ). To that end, the number of languages to be selected for each genealogical group, e.g. phylum, for inclusion in a given sample is determined on the basis of a genealogical classification of the world’s languages. This way, no inordinate number of languages will be selected from each genealogical group so that ‘inflationary effects stemming from genealogical relatedness’ can be controlled for (Bickel a: ). However, as already explained, remote genealogical relatedness—or remote language contact—cannot be detected by means of the Comparative Method or other historical-linguistic methods. The furthest we can reach back into the past through historical-linguistic methods is under , years, and consequently, currently available genealogical classifications provide us with little or no insight into linguistic prehistory. For this reason, Dryer (, , and ) decides to deal with the independence of cases at a non-genealogical level: use of very large geographical areas—that is, continental or almost continental in size. Though Dryer calls his large geographical areas ‘linguistic areas’, they are essentially geographical areas, not really akin to what is conventionally understood as a linguistic area (i.e. Sprachbund). 
Dryer (: ) defines his ‘linguistic area’ as ‘an area in which at least one linguistic property is shared more often than elsewhere in the world to an extent which is unlikely to be due to chance, but which is probably due either to contact or remote [genealogical] relationships’. In Dryer’s method, the world is divided into five large geographical areas: Africa, Eurasia, Australia–New Guinea, North America, and South America.10 10 In his subsequent work (), Dryer removes South East Asia and Oceania from Eurasia and treats them as a separate macroarea. Thus, in Dryer () there are six, not five, linguistic areas.





These 'linguistic areas' (macroareas hereafter, to avoid confusion with Sprachbund) are then assumed to be independent of one another, because the divisions between them 'are rather well defined physically' (emphasis added) (Dryer : ). In other words, there are easily recognized geographical bottlenecks (e.g. narrow land passages between Africa and Eurasia or between North America and South America) or boundaries (e.g. the vast oceans surrounding Australia–New Guinea) that demarcate the macroareas clearly. This is why Dryer's macroareas are first and foremost geographical, not linguistic.

Now, it is a reasonable assumption that the relationships, genealogical or otherwise, between these large areas will be minimal (see Holman et al.  for a statistical demonstration of how structural similarities among languages, genealogically related or not, tend to decrease with increased geographical distance). Put differently, languages within each macroarea are likely to share more structural properties, whether through inheritance or contact, than languages from different macroareas are. The level of the macroareas is precisely where the independence of cases is addressed in Dryer's sampling method. Thus, the purpose of using the five macroareas is to control not only for genealogical bias that may go well beyond the reach of the Comparative Method but also for areal bias of a magnitude that has not hitherto been understood to have a bearing on language sampling, the underlying cause of which may be partly or largely genealogical (Dryer : ). This way, Dryer's sampling method attempts to avoid 'statistically significant results [that] may simply reflect areal [or remotely genealogical] phenomena rather than linguistic preferences' (: ).
OUP CORRECTED PROOF – FINAL, 20/11/2017, SPi

The ingenuity of Dryer’s approach lies precisely in the fact that the independence of cases is sought—to the extent that this is possible—at the level of the five macroareas, not at that of genealogical groups, the number of which may be unwieldy for purposes of statistical manipulation. His technique thus makes it possible ‘to take into consideration all of the data at hand’ (Croft : ), while dealing with only the five macroareas for purposes of controlling for genealogical and areal biases. Indeed, it makes much sense to take advantage of something other than a genealogical classification to control for the effects of remote genealogical relatedness or contact, since genealogical classifications are plainly inadequate for looking back into linguistic prehistory. The foregoing, however, does not mean that Dryer dispenses with genealogical grouping completely. In point of fact, he invokes the concept of a genus, analogous to Bell’s () genealogical group. Dryer’s genera are supposed to be comparable to the sub-families of Indo-European, e.g. Germanic or Romance—or a time depth of , to , years. In Dryer’s sampling method, the languages of a given sample are first placed into  genera in total, largely in line with the genealogical classification of Ruhlen ().[11] These genera are then put into one of the five macroareas, depending largely on where their languages are primarily spoken in the world. Dryer’s use of genera has two important functions.[12] First, the most severe genealogical bias is claimed to be controlled for at the level of genera particularly because ‘[i]n some areas of the world, these genera are the maximal level of grouping whose [genealogical] relationship is uncontroversial’ (Dryer : ). In other words, the level of genera represents the range of time depth where historical-linguistic methods can provide us with the most conservative account of the genealogical relatedness among the world’s languages. Beyond this level, it may be anyone’s (educated) guess and may also generate—more often than not, heated—controversies (e.g. Campbell’s () response to Greenberg’s () Amerind macrofamily). Second, Dryer counts genera, not individual languages, in order to ascertain whether a given structural property (e.g. SOV) or correlation (e.g. between VO and Preposition) is a universal preference. Recall from §. that the high frequency of a given property cannot be equated with a linguistic preference for that property. Just because some property is attested in a great number of languages does not necessarily mean that it represents what is preferred in human language. For instance, the majority of the languages exhibiting the property may come from a small number of large language families. Thus, regardless of their size, genera are treated in Dryer’s sampling method as if they were single languages (but see below). 
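The bookkeeping behind this genus-based counting can be sketched as follows. Everything in the sketch is illustrative: the languages, their genus and macroarea assignments, and the word-order codings are toy values, not Dryer's actual classification; a genus is counted once per word-order type attested among its members, so a mixed genus is double-counted, as in Dryer's method.

```python
from collections import defaultdict

# Hypothetical records: (language, genus, macroarea, basic word order).
# All values are toy data for illustration, not Dryer's actual figures.
LANGUAGES = [
    ("Turkish", "Turkic",   "Eurasia", "SOV"),
    ("Kazakh",  "Turkic",   "Eurasia", "SOV"),
    ("English", "Germanic", "Eurasia", "SVO"),
    ("German",  "Germanic", "Eurasia", "SOV"),  # coded SOV purely for illustration
    ("Yoruba",  "Defoid",   "Africa",  "SVO"),
    ("Ijo",     "Ijoid",    "Africa",  "SOV"),
]

def count_genera_by_type(records):
    """Count genera (not languages) per (macroarea, word order).

    A genus is counted once for every order type attested among its
    member languages, so a genus containing both SOV and SVO languages
    contributes one count to each type.
    """
    genus_orders = defaultdict(set)        # (macroarea, genus) -> {orders}
    for _language, genus, area, order in records:
        genus_orders[(area, genus)].add(order)
    counts = defaultdict(int)              # (macroarea, order) -> genus count
    for (area, _genus), orders in genus_orders.items():
        for order in orders:
            counts[(area, order)] += 1
    return dict(counts)
```

In this toy data the Germanic genus is double-counted in Eurasia (one SOV count and one SVO count), while Turkic, being uniformly SOV, contributes a single SOV count regardless of how many of its languages are sampled.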
For further illustration of how Dryer’s method actually works, the preference of SOV over SVO can be briefly discussed here (for detailed discussion, see §..). It is commonly thought (e.g. Tomlin ) that there is no statistically significant difference between the frequency of SOV and that of SVO in the world’s languages. Dryer (: –), however, provides clear evidence in support of SOV being a linguistic preference over SVO, as exemplified below (NB: Afr = Africa, Eura = Eurasia, A–NG = Australia–New Guinea, NAm = North America, SAm = South America).

()
        Afr    Eura    A–NG    NAm    SAm    Total
  SOV
  SVO

[Footnote 11: In his actual sample of  languages, however, Dryer (: –) only operates with  genera.]
[Footnote 12: Exceptions include Semitic languages, which are treated as part of Africa as ‘their [genealogical] relationships go in that direction’, and the Chibchan languages of Central America, which are treated as belonging to South America, probably for the same reason (Dryer : ).]

The numbers in both rows represent the number of genera exhibiting SOV or SVO for each of the five macroareas. The larger of the two figures for each of the columns appears in a box. Note that Dryer (: ) allows double counting for a particular genus if that genus contains a language of both types (e.g. both SOV and SVO, as in Semitic).[13] However, he remarks that this kind of double counting is not usual, since languages within genera are generally typologically similar. Although the difference between SOV and SVO in Africa is far from significant, there does clearly emerge a generalization to the effect that SOV outnumbers SVO by five macroareas to none, thereby confirming that there indeed is a linguistic preference of SOV over SVO. The logic here is that, since the five macroareas are (reasonably) assumed to be independent of one another both genealogically and areally, there would only be one chance in thirty-two—one chance in sixty-four in Dryer (), wherein six macroareas are recognized, with South East Asia and Oceania teased out from Eurasia—for all five macroareas to display the given property if there were no linguistic preference for the more frequently occurring language type (Dryer : ). Note that Dryer (: , ) takes a conservative attitude towards interpreting his results, e.g. (). Only if and when all five macroareas conform to the hypothesis being tested is that hypothesis considered to be a universal preference. For instance, if only four of the five conform to the hypothesis, he prefers to speak of ‘trends’, short of statistical significance. By his standards, then, some of the language universals that other linguistic typologists would happily accept will have to be relegated to trends.

[Footnote 13: The Semitic genus also contains languages with VSO order. Thus, if VSO order were included in (), Semitic would be counted three times.]
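The ‘one chance in thirty-two’ logic can be checked directly. The sketch below simply computes (1/2)^n: under the null hypothesis that neither word order is preferred, each macroarea independently favours a given, pre-specified order with probability 1/2.

```python
# Probability that all n independent macroareas favour the same,
# pre-specified word order when neither order is actually preferred.
def p_all_areas_agree(n_areas):
    return 0.5 ** n_areas

assert p_all_areas_agree(5) == 1 / 32   # Dryer's five macroareas
assert p_all_areas_agree(6) == 1 / 64   # six macroareas in Dryer's later work
```

This is the logic of a one-sided sign test with the macroareas, rather than languages or genera, as the independent observations.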




Attractive as Dryer’s sampling method may seem, it is not without problems (also see Song : –). One of these problems concerns the selection of sample languages. It is not entirely clear how (and which) languages are chosen for each genus. For instance, when setting up her sample Nichols (: ) avoids languages considered by specialists as linguistically divergent or atypical of the family so that the language(s) chosen can be representative of the whole family. There is nothing in Dryer’s discussion concerning the actual selection of sample languages. Is there a set of uniform criteria for selecting sample languages for genera? Related to this is also the issue of the minimum or maximum number of languages to be selected for each of the genera. Unfortunately, Dryer is not explicit on this point either. Whether a given genus turns out to be of type X, type Y, or whatever may thus depend on which—and how many—languages are chosen (see Dryer : ). It seems that too much is left to chance. More detrimental to Dryer’s method, however, is the fact that the languages used to make up genera ‘are not randomly chosen so that independence is not assured by his method but is undercut by the prior problem of lack of randomness used to choose his languages’ (Perkins : ). This may be a serious problem from a statistical point of view alone.

Second, Dryer (: ) claims that in his sampling method counting genera rather than individual languages makes it possible to control for the most severe genealogical bias ‘since languages within genera are generally fairly similar typologically’. However, this assumption is somewhat questionable in view of the great deal of variation that exists between different linguistic properties in terms of innovation or conservatism. For example, as Dryer (: ) himself acknowledges, basic word order properties change fairly easily, whereas morphological ones may be far more resilient to change. 
In other words, the assumption that languages within genera are generally fairly similar typologically may not apply equally to all different types of linguistic property. Prior to the adopting of that assumption, we may then be well advised to ascertain first whether or not the linguistic property being studied is a relatively stable one over time and/or in the context of contact (e.g. Nichols  for a programmatic study of stability and diversity in language). Furthermore, we need to find out how stable a given linguistic property has to be in order to uphold the assumption in question. Instructive in the present context is the American linguist 
Kenneth Pike’s (: ) candid comment on the Gur group in the Niger-Congo family:

    This material from Bariba in Dahomey seems so different from Dagaari, Vagala, and Kasem of Ghana and from Mbembe and Degema of the lower part of Nigeria, that I rechecked . . . Somehow, the cultural universals of causation . . . would have to be expressed in them also. Had the Bariba type of data been overlooked in these other languages, or did it in fact not exist?

Third, in Dryer’s work the decision as to which genealogical groups should be counted as genera is based on his ‘own educated guesses’ (: ). Thus, other researchers may make different decisions as to the genus status of certain genealogical groups. In fairness to Dryer, however, this cannot be said to be a problem unique to his sampling method. Indeed, there are many languages whose genealogical classification is ‘unknown, unclear, or under dispute’ (Bybee et al. : ). Moreover, different scholarly traditions may apply different sets of criteria to genealogical groupings; some historical linguists may be so-called ‘splitters’ (i.e. favouring finer genealogical groupings), and others may be so-called ‘lumpers’ (i.e. favouring broader genealogical groupings). All designers of language samples will have to live with this problem, which is only emblematic of the state of the art in genealogical classification. Nonetheless, it is not unfair to say, as Dryer (: ) himself admits, that ‘the lack of solid criteria for determining genera is a weakness in [his] methodology’.

Fourth, Bickel (a) takes issue with Dryer’s ‘blind’ double (or multiple) counting for genera that possess two (or more) different structural types under investigation, e.g. SOV vs SVO, as in () above. For instance, in (), Dryer does allow certain genera, e.g. Semitic, to appear under both SOV and SVO, although this does not usually happen. Bickel (a), however, calls for more care to be taken when deciding whether a given genus may be of one or the other type or even both types: it is not just a matter of a genus exhibiting both structural types, but really a matter of how that genus exhibits both structural types. For instance, genus X may have an equal distribution of SOV and SVO, that is, % of its member languages with SOV, and the other % with SVO. In this case, it is clear that both structural types must be recorded for the genus in question. 
In contrast, genus Y may have a skewed distribution in favour of SOV, that is, % of its member languages with SOV, and the remaining % with SVO. In this case, it is highly 
likely that the skewing is due to common inheritance (or possibly due to contact). However, whatever the reason for the skewing may be, it is something that must be controlled for in language sampling. In other words, genus Y must be allocated only one structural value, namely SOV, not two as is the case with genus X. In short, Dryer’s sampling method is in need of refinement at the level of counting genera. Moreover, Bickel points out that a (more or less) equal or skewed distribution may persist up to the level of phyla or down to the level of subgenera or languages. Thus, Bickel (a: –) proposes that structural properties be tested recursively at each genealogical level and that genealogical groups, be they phyla, genera, subgenera, or even languages, be admitted into the sample ‘only if they are [statistically] significantly distinct from their sister groups at each level with regard to the [structural properties] of interest’. In other words, Bickel’s sampling method requires that the genealogical levels be evaluated in a top-down manner, starting from phyla all the way down to languages, in terms of the structural values being investigated and that their counting be carried out only at the genealogical level where there is no or little structural variance in statistical terms. Once the genealogical level of structural invariance is reached, one language may be selected for inclusion in the sample as being representative of the languages under that genealogical level. It was pointed out earlier that Dryer is not very clear about how languages are actually selected for his genera, and Bickel’s proposed procedure seems to address this problem adequately. Bickel also develops a general algorithm or mathematical procedure that can be used to create a genealogically controlled sample. 
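Bickel's top-down idea can be sketched with a toy tree. The tree, the property values, and the fixed 0.25 threshold (which stands in for a genuine significance test on sister groups) are all invented for illustration; Bickel's actual algorithm uses proper statistical tests rather than a raw spread of proportions.

```python
# A genealogical tree as nested dicts: internal nodes have "children",
# leaves (languages) have a "value" (here a basic word order).

def leaves(node):
    if "value" in node:
        return [node]
    return [leaf for child in node["children"] for leaf in leaves(child)]

def share(node, prop):
    """Proportion of leaf languages under `node` showing property `prop`."""
    ls = leaves(node)
    return sum(l["value"] == prop for l in ls) / len(ls)

def sampling_units(node, prop, threshold=0.25):
    """Descend while daughter groups differ in their share of `prop`;
    admit a group as a single sampling unit once its daughters
    (near-)agree. The threshold is a crude stand-in for a real test."""
    if "value" in node:
        return [node]
    shares = [share(child, prop) for child in node["children"]]
    if max(shares) - min(shares) <= threshold:
        return [node]
    return [unit for child in node["children"]
            for unit in sampling_units(child, prop, threshold)]

# Invented example: one uniform genus and one typologically mixed genus.
PHYLUM = {"name": "Phylum", "children": [
    {"name": "GenusA", "children": [
        {"name": "a1", "value": "SOV"},
        {"name": "a2", "value": "SOV"}]},
    {"name": "GenusB", "children": [
        {"name": "b1", "value": "SOV"},
        {"name": "b2", "value": "SVO"}]},
]}
```

Here GenusA is admitted as a single sampling unit (its daughters agree on SOV), while GenusB, whose daughters diverge, is split into its two member languages, each entering the sample separately.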
Bickel (a) is certainly a refinement of Dryer’s sampling method but it needs to be pointed out that Bickel’s proposal calls for structural values for a large number of languages to be known prior to creating a language sample (Bakker : ): the researcher must have a good understanding of the structural diversity of all genealogical groups to be able to say, for instance, that genus X has an equal distribution of SOV and SVO, genus Y has a skewed distribution of SOV over SVO, and so on. That is, we will not be able to determine whether a given genus has a skewed distribution of SOV over SVO or an equal distribution of SOV and SVO unless we have already undertaken a thorough investigation of that genus—and, needless to say, of all remaining genera in the world—in terms of the structural values in question. Moreover, this kind of prior 
knowledge may not be possible in the case of unexplored or little understood structural properties.

5.8 Typological distribution over time: was it different then from what it is now?

While Dryer’s use of a non-genealogical basis for addressing the independence of cases is laudable, the genealogical or areal distance between the five—or six in his later work—macroareas may not be great enough to remove the effects of remote genealogical relatedness or contact completely from sampling processes. There may always be uncertainty about remote genealogical relatedness and contact, which refuse to reveal themselves. For instance, Dryer’s technique cannot help to eliminate the genealogical or areal bias that would have affected the linguistic situation prior to the time depth of his genera, i.e. ,–, years. Thus, all that can perhaps be said about Dryer’s attempt is that it is as good as it can get. This may sound somewhat defeatist but the reality is that the inability to deal with the independence of cases adequately is ‘an essentially inescapable problem, and can only be surmounted by obtaining evidence for typological explanations from other sources of data [which do not exist at present and probably never will]’ (Croft : ). That said, however, the failure to meet the requirement of the independence of cases in language sampling would be an inescapable problem if the current language population (i.e. the languages of the present world) were typologically dependent—to a statistically significant extent—on the initial language population (i.e. the languages of the past world). This scenario may well be within the realms of possibility, and it is precisely what Maslova () entertains when she proposes a mathematical model with which to solve the inescapable problem in question.[14]

[Footnote 14: Maslova’s mathematical model is based on the Feller–Arley (Markov) process (aka the linear birth-and-death process).]

Maslova () observes that there are two main historical processes that affect the initial language population (and hence its typological distribution as well): the birth-and-death process and the type-shift process. The birth-and-death process is about languages coming into being (i.e. languages descending from a single proto-language or dialectal varieties developing into separate languages) or languages becoming extinct (i.e. no remaining speakers), while the type-shift process concerns languages changing from one structural type (e.g. SOV) into another (e.g. SVO). These two processes play a crucial role in determining the language population as well as its typological distribution. Which of the two processes will have a greater impact at a given time depends on population size. Thus, in large populations birth-and-death effects will be negligible, but they will be dramatic for small populations. In contrast, in large populations any significant changes to the typological distribution will be imputed largely to type-shift processes, but in small populations the effects of type-shift processes will be outdone by those of birth-and-death processes. Maslova (: ) makes a further assumption that the language population at the initial time of its history was small, possibly consisting of a single proto-language only. Note that although she (: –) sets the time depth of her model at , years, Maslova’s proposal can in principle apply to the time interval between the current language population (i.e. the present time) and the earliest language population (i.e. ,–, years ago).

Maslova (: –) argues that if universal preferences or language universals for that matter are meant to make any sense at all—that is, if they represent the nature of human language, not the effects of genealogical relatedness or contact—one also needs to assume that after a protracted period of time, the probability of languages having certain structural types no longer depends on the structural types that those languages or their ancestors had at the time of their inception. 
Otherwise, the structural types at the time of their inception ‘will last forever’ (Maslova : ). Moreover, once this independence between the current language population and the initial language population has been achieved, the typological distribution of languages over the structural types must also become stable. Maslova (: ) calls this typological stability ‘stationary distribution’. The way the stationary distribution works is that the total number of languages changing from a given structural type, within any time interval, is more or less equal to the total number of languages changing into that type within the same time interval. In other words, languages may change into and out of structural types, but these changes eventually cancel themselves out so that the overall typological distribution remains more or less stable. 
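The stationary distribution can be illustrated with a toy two-type type-shift chain. The shift rates below are invented; the point is only that, wherever the population starts, the flows into and out of each type eventually cancel, leaving the SOV share at b / (a + b).

```python
# Toy two-type type-shift chain (all rates invented for illustration).
# Per "generation", a fraction `a` of SOV languages shifts to SVO and a
# fraction `b` of SVO languages shifts back to SOV. At the stationary
# distribution the two flows cancel: a * p_sov == b * (1 - p_sov),
# hence p_sov = b / (a + b).

def stationary_share(a, b):
    return b / (a + b)

def iterate(p_sov, a, b, steps=1000):
    """Run the chain forward: outflow a*p_sov, inflow b*(1 - p_sov)."""
    for _ in range(steps):
        p_sov = p_sov * (1 - a) + (1 - p_sov) * b
    return p_sov

a, b = 0.04, 0.06                  # invented shift rates
p = iterate(0.9, a, b)             # start far from the stationary share
assert abs(p - stationary_share(a, b)) < 1e-9   # settles at 0.6
```

The same fixed point is reached from any starting share, which is the sense in which the current distribution no longer depends on the initial one.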


Recall that Dryer’s sampling method operates with genera, i.e. genealogical groups at a time depth of ,–, years. There are  genera in Dryer’s work, as opposed to just over , languages spoken in today’s world. Put differently, ,–, years ago each of these genera would have been an individual language; the earlier language population may thus have been far smaller than the current language population. Maslova (: ) reasons that, given the size difference between these two language populations (i.e. small vs large), whatever typological changes may have taken place between , and , years ago and the present time must be attributed to type-shift processes, not to birth-and-death processes. There are two reasons for this. First, in large populations, as pointed out earlier, any (statistically) significant changes to their typological distribution are (claimed to be) due largely to type-shift processes—with birth-and-death effects being negligible. Second, since the disparity between the typological distributions of these two language populations has been brought about by type-shift processes, the moving away of the current language population from the typological distribution of the earlier language population can only be a move towards the stationary distribution. From this, Maslova (: ) draws the inference that the typological distribution of the current language population will take us closer to the discovery of properties preferred in human language than any of its earlier counterparts. In particular, Maslova compares Tomlin’s () frequency of SVO languages (i.e. .% of the  languages in his sample) and Dryer’s () frequency of SVO languages (i.e. % or  in his sample of  genera [read:  languages ,–, years ago]), attributing the difference of nearly % between the two frequencies to type-shift processes. 
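The arithmetic behind the contrast between counting languages (Tomlin) and counting genera (Dryer) is easy to reproduce with invented figures: a single large SVO-heavy genus inflates the language-level frequency without affecting the genus-level count.

```python
# Invented figures: one large genus with many SVO daughter languages
# alongside three small SOV genera.
genera = {
    "BigFamily": ["SVO"] * 60,   # many daughters of a single genus
    "GenusB":    ["SOV", "SOV"],
    "GenusC":    ["SOV"],
    "GenusD":    ["SOV"],
}

# Tomlin-style: count individual languages.
languages = [order for orders in genera.values() for order in orders]
svo_by_language = languages.count("SVO") / len(languages)   # 60/64

# Dryer-style: count genera, once per order type attested in the genus.
svo_by_genus = sum("SVO" in set(orders)
                   for orders in genera.values()) / len(genera)   # 1/4

assert svo_by_language > 0.9
assert svo_by_genus == 0.25
```

Whether the language-level or the genus-level figure better approximates a universal preference is precisely what is at issue between Maslova and Dryer.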
In other words, Maslova believes that Tomlin’s method of counting individual languages can reveal the status of SVO as a preferred word order in human language while Dryer’s method of counting genera does not. What this suggests is that contrary to Dryer’s (: –) position, SVO is, as indeed concluded by Tomlin (), as much a preferred structural option as SOV is (see §.). If it is anywhere near tenable, Maslova’s provocative proposal promises to be able to obviate the inescapable problem of meeting the requirement of the independence of cases in language sampling. If Maslova’s reasoning is correct, we should not be overly concerned about the effects of remote genealogical relatedness or contact, because they are negligible or at least not (statistically) significant enough to 
distort the outcome of typological research. According to her model, the typological distribution of the current language population should thus provide a better picture of the stationary distribution of human language than Dryer’s genera-based approach does; the typological distribution of the current language population embodies universal preferences, not the effects of ‘historical forces of no relevance for linguistics’ (Hawkins : ). Like other mathematical models, however, Maslova’s relies heavily on a number of assumptions that are in need of far more justification than she has provided. First, while it is meant to cover the interval between , years ago and the present time, Maslova’s proposal can in principle be applied to the interval between the inception of modern human language and the present time. The latter time interval (i.e. ,–, years) is indeed very long, and long enough to be able to aver that human language has undergone an enormous number of type-shift processes (as well as of birth-and-death processes). Maslova (: ) takes the difference between Tomlin’s frequency of SVO and Dryer’s counterpart to be owing to the effects of type-shift processes over the last ,–, years. This kind of type-shift process, if it can take place over such a short period of time (i.e. .– millennia), must have taken place innumerable times when the interval between the beginning of human language and the present time (i.e. – millennia) is taken into consideration. Thus, Dryer’s use of genera should also allow us to regard the typological distribution of the world’s languages ,–, years ago as being more or less equally close to the stationary distribution as Tomlin’s analysis of the current language population does. 
The sheer difference between ,–, and ,–, years is considerable enough to assume that the Maslovan stationary distribution has had (more than) enough time to establish itself among the world’s languages, probably multiple times over. It is very difficult to believe that anything close to the stationary distribution has been achieved only (once) in the last ,–, years. So why the difference of nearly % between Dryer’s and Tomlin’s frequency of SVO? The difference may well be, pace Maslova, due to the effects of remote genealogical relatedness or contact, as Dryer ( and also : –) argues. Second, the fact that there are  genera identified in Dryer’s sampling method does not mean that there were only  languages spoken in the world ,–, years ago. There may have been many 
languages that—had they survived in the past—would have given rise to more genera than can be established for the present world’s languages. Related to this is the fact that one third of the phyla in the world are isolates (Rijkhoff et al. : ; Dryer : ; Bickel a: ). These singleton phyla, i.e. language isolates, are most probably the last surviving member languages of extinct—possibly large—language families. In other words, we cannot assume that the language population ,–, years ago was smaller than the current one. It could have been equally large, if not even larger (also Dryer : –). This line of thinking can actually be extended much further back in time, possibly close even to the beginning of modern human language. Thus, Maslova’s position that the typological distribution of the current language population provides a better picture of the stationary distribution of human language than Dryer’s genera-based approach seems to be questionable. The typological distribution of the language population at the time depth of Dryer’s genera could easily be as close to the stationary distribution as the current language population (e.g. Tomlin ). In other words, the typological distribution of the world’s languages ,–, years ago may well be due largely to the effects of type-shift processes, rather than to those of birth-and-death processes.

Third, Maslova (: ) takes probabilities of languages coming into existence or dying to be ‘constants characterizing the birth-and-death process in the entire population over a protracted period of time’.[15] This is an unrealistic assumption, however, in so far as human languages are concerned. The entire history of humankind is full of catastrophic or cataclysmic events that wreak havoc on the human population all around the world (e.g. massive earthquakes, volcanic eruptions, tsunamis, mass murder in invasions or conquests, social upheavals). 
Such catastrophic events must have had an indelible impact on language populations, whether small or large. To treat the birth-and-death process as a constant is a simplified, if not simplistic, assumption to make (see Dryer : – for a similar view; but see also Dixon  and Trudgill ). More to the point, languages have always been spoken in their own physical, cultural, and historical contexts. The coexistence of both large and small genealogical groups in the present world indicates that languages always appear, live, and die in the context of different physical, cultural, and historical events. Otherwise, the world’s genealogical groups should now be more or less equally sized (assuming, of course, that the Maslovan stationary distribution has already been established in the current language distribution). To apply the model of constant birth-and-death processes to the entire language population would be to lose sight of these important contexts that invariably have a bearing on whether a language may split into separate languages or disappear into oblivion.

[Footnote 15: This is, in fact, something that the Feller–Arley process stipulates: the birth rate and the death rate are constant.]

Maslova’s () mathematical modelling of the typological distribution of the world’s languages is provocative, inspiring, and even reassuring (especially about the validity of universal preferences or language universals so far discovered). She has certainly raised a theoretically important question as to whether universal preferences or language universals discussed in the literature represent the nature of human language or the effects of ‘historical forces of no relevance for linguistics’. Maslova is of the view that the current language population provides better access to the stationary distribution than any of the earlier language populations. Be that as it may, those universal preferences or language universals remain to be tested against many more languages than they have so far been formulated on the basis of. Little is known about the majority of the world’s languages; on more than one occasion (e.g. §.), it has been pointed out that only % of the world’s languages have adequate descriptions for typological research (e.g. Evans and Levinson : ). To allude to the stationary distribution at this point in time may strike some of us as somewhat premature, not because something like it does not exist but because the empirical basis for characterizing the typological distribution of the current language population first needs to be strengthened to a far greater extent than hitherto possible.

5.9 Concluding remarks

The independence of cases, a fundamental requirement in all statistical procedures, is near impossible to deal with in language sampling, primarily because linguistic prehistory is well beyond the reach of historical-linguistic methods, including the Comparative Method. This reality must be borne in mind and embraced when we create language samples in order to investigate the nature of human language. 


The past decades have witnessed a number of innovative sampling proposals that attempt to minimize the effects of genealogical relatedness, contact, and other factors (e.g. cultural). While they have explored different ways of minimizing genealogical or areal bias, these proposals are all unable to eliminate the effects of genealogical relatedness or contact completely from sampling processes. The truth of the matter is that we have no access to linguistic prehistory, because we cannot go back in time. Even if we could go back in time, we would certainly not be able to document all languages that have ever been spoken in the past! Recall that only % of the present world’s languages have been adequately documented for purposes of typological research. Moreover, we should always bear in mind that the current language population (i.e. the present world’s languages) is itself a (small) sample of all languages that have ever existed on this planet.

The ineluctable conclusion to be drawn from this situation is that our inability to satisfy the requirement of the independence of cases in language sampling is an inescapable problem that defies scientific solutions; at the same time, we must make every effort to minimize the effects of genealogical relatedness or contact where possible and to the extent possible. In view of these limitations, it seems far more constructive to widen the scope of our research by studying more and more languages, and to make every effort to document languages under threat of extinction, than to be overly concerned with the effects of remote genealogical relatedness or contact. Linguistic prehistory is, and will remain, well beyond our reach, and there is nothing we can do about it.

Study questions 1. The World Atlas of Language Structures or WALS (Haspelmath et al. ) operates with two main language samples (Comrie et al. ): a language sample and a further sample of  additional languages. What type of sample (e.g. variety, probability, or convenience) are they? Do you think that these samples are appropriate for all the chapters in WALS? Explain why they are appropriate or inappropriate. 2. Bickel (a: ) writes of the database used in WALS (Haspelmath et al. ): Because the languages in WALS have been selected with many different purposes in mind (Comrie et al. ), some families are better represented than others. Also,



OUP CORRECTED PROOF – FINAL, 20/11/2017, SPi


WALS maps containing more languages than the number of known families . . . necessarily represent some families (e.g. Niger-Congo or Austronesian) with many more datapoints than others. Therefore, any summary statistic of WALS raises the issue to what degree it is influenced by the size and nature of families in the database.

Discuss Bickel’s remarks in terms of genealogical and areal biases, and more generally, in terms of the independence of cases. Do you agree with him?

3. Create a -language sample on the basis of the table in Rijkhoff et al. (: ), which shows how many languages should be selected for each of the phyla in the world’s languages (in order to create a sample of that size), and see whether you are able to source from your own university library grammatical descriptions of one or more languages allocated to each phylum. Which phyla are easier or more difficult to source grammatical descriptions for? Which phyla are impossible to cover for your sample?

4. Compare the sample used in Stassen (: –, –) and that used in Stassen (: –, –, –), and discuss the differences in the size and nature of the samples, and in the sampling procedures employed, along the lines of the various discussions in this chapter.

Further reading

Bakker, P. (). ‘Language Sampling’, in J. J. Song (ed.), The Oxford Handbook of Linguistic Typology. Oxford: Oxford University Press, –.
Bell, A. (). ‘Language Samples’, in J. H. Greenberg, C. A. Ferguson, and E. A. Moravcsik (eds.), Universals of Human Language, vol. : Method and Theory. Stanford: Stanford University Press, –.
Bickel, B. (). ‘A Refined Sampling Procedure for Genealogical Control’, Sprachtypologie und Universalienforschung : –.
Dryer, M. (). ‘Large Linguistic Areas and Language Sampling’, Studies in Language : –.
Maslova, E. (). ‘A Dynamic Approach to the Verification of Distributional Universals’, Linguistic Typology : –.
Perkins, R. D. (). ‘Statistical Techniques for Determining Language Sample Size’, Studies in Language : –.
Rijkhoff, J. and Bakker, D. (). ‘Language Sampling’, Linguistic Typology : –.
Rijkhoff, J., Bakker, D., Hengeveld, K., and Kahrel, P. (). ‘A Method of Language Sampling’, Studies in Language : –.




6 Data collection
Sources, issues, and problems

6.1 Introduction



6.2 Grammatical descriptions or grammars



6.3 Texts



6.4 Online typological databases



6.5 Native speaker elicitation: direct or indirect



6.6 Levels of measurement and coding



6.7 Concluding remarks



6.1 Introduction

Typically, linguists collect data by working with native speakers of the languages under investigation. This is by way of direct elicitation, e.g. asking native speakers how they express X in their languages. Linguists may also collect data by observing native speakers’ language use, e.g. witnessing them saying X instead of Y to people of higher status, or Y instead of X to people of equal or lower status. More frequently than not, linguists may also produce data in their own native languages, as is almost always the case in Generative Grammar (see §.). Working with native speakers is the primary method of data collection in linguistics, and data obtained in this manner are known as primary data. While working with native speakers is certainly an option, it is a method relatively uncommonly used in linguistic typology. The reason for this is simple: economy (i.e. time and money). Linguistic typologists work with a large amount of data from a large number of languages.


It is, therefore, unrealistic to expect them to be able to consult native speakers of every language of interest. Imagine working with one or more native speakers of every language in a -language sample, let alone a sample of over , languages. This is not to imply that this method of data collection is not possible at all in linguistic typology. As will be shown later, linguistic typologists do sometimes rely on the primary method in combination with other more feasible ways of collecting data. It is not just economic considerations that rule out the primary method of data collection as impractical. Even if one has the time and money to work with native speakers for each sample language, it may not always be possible to travel to a place where a language is spoken (e.g. for political or safety reasons). Moreover, there may be no remaining native speakers for some sample languages. For these practical reasons, linguistic typologists rely largely on secondary data, gathered from grammars or grammatical descriptions (§.), texts (§.), or online typological databases (§.), although they may choose to augment secondary data with data collected directly from native speakers (§.) (i.e. direct elicitation or use of questionnaires).

6.2 Grammatical descriptions or grammars

It does not come as a surprise that the most common method of data collection in linguistic typology is to draw on grammatical descriptions or grammars—in book or journal article form. This is the most economical or quickest way of gaining access to data for a large number of languages in the shortest possible time span. Grammars tend to be readily available in print and, increasingly, also online (e.g. http://www.sil.org/linguistics/language-data; http://langsci-press.org). With good organization and access to a good university library, it is not too difficult to gather data for hundreds of languages through this method in a reasonable period of time.
Moreover, it makes sense to rely on grammars because no one linguistic typologist can possibly have the level of understanding of the many languages in her sample that specialists have. Grammars tend to be written by professional linguists or language specialists; they generally provide reliable and accurate data. This does not mean that there are no poor or inadequate grammars around. There is, in fact, no shortage of them, but more frequently than not, there may be more than one grammar written for one and the same language. Thus, we may be able to choose the best one available on the market. That said, the most common problem with this indirect method of gathering data is that grammars are not always sufficient in detail and/or broad enough in scope. Far more frequently than not, grammars may gloss over or fail to examine what we wish to study, although this often depends on what grammatical phenomena are being investigated. For instance, information on basic word order, e.g. SOV, SVO, can probably be retrieved easily from most grammars, whereas information on the comparative construction ‘is often not found in even the most minute grammars’ (Stassen : ). This unfortunate situation is further illustrated by Song (), in which  languages were excluded from a database of  languages for lack of information on causatives in grammars, with further  languages being only minimally used because of narrow coverage of causatives. This is not surprising, because grammars are not written only for linguistic typologists. They may also be written for other linguists, language teachers, or even so-called heritage language learners (i.e. learners who wish to learn their ancestors’ languages, not having acquired them as children). Grammatical phenomena such as causatives may not be of much interest to some users of grammars. For some languages, there may exist no grammars, and only texts are available (also see §.). These texts may not be glossed sufficiently—if glossed at all—to be easily amenable to linguistic or typological analysis. They may come with or without English translations. In a situation like this, we may have to work through the texts, carrying out linguistic analysis with the help of dictionaries or other materials, if available.
We may perhaps get lucky enough to find good examples in texts but, as Stassen (: ) laments, ‘one often despairs of the fact that two days of deciphering a grammatical text has not resulted in finding one good and clear example of the comparative construction’. Moreover, even if ‘one good and clear example’ has been recovered from a text, we cannot be completely sure whether it epitomizes the comparative construction in the language in question; it may be nothing more than an example of a marginally used construction that just happens to appear in the text (§.). Stassen’s plight is not uncommon in typological research and will be well appreciated by those who have experience of collecting data from unglossed texts. Finding that a grammar does not deal with the phenomenon of interest often proves to be more difficult and time-consuming than finding that a grammar does indeed deal with it. This is because one has to study a grammar thoroughly, from cover to cover, in order to establish that it does not discuss the phenomenon in question, whereas discovery of the phenomenon may not require perusal of the whole grammar. We may be fortunate enough to find relevant examples within the first fifty pages of a grammar, while we may have to read the whole grammar only to find that it contains no examples. Moreover, one cannot rely on the table of contents and the subject index alone—if they are provided—to ascertain whether a grammar provides information on the phenomenon because, even if the table of contents or the subject index makes no mention of it, one or more good examples may be hidden in a most unlikely place in the grammar. To make matters worse, some grammars may be biased towards certain grammatical areas, e.g. morphology, providing no data whatsoever on linguistic typologists’ areas of interest, e.g. syntax. Fortunately, recently written or published grammars tend to be typologically informed enough to pay attention to various grammatical phenomena of typological interest, although this may also mean that grammatical phenomena that have so far received little or no attention in linguistic typology may not be given a similar level of attention.1 Thus, if we wish to investigate an unexplored or little-known structural property, we may still experience Stassen’s plight, even when we make a point of using typologically informed grammars. Linguistic typologists may also rely on data collected by other linguists from grammars or grammatical descriptions. But, as Croft (: ) cautions, this kind of third-hand data may already be biased by the hypothesis or theoretical orientation of the original analysts. For instance, what is analysed as X in a particular grammatical theory may not necessarily be something that linguistic typologists are willing to accept as X.
Thus, a great deal of discretion must be exercised in using third-hand data. It may be a good idea to check such data, wherever possible, by referring to the original grammars or grammatical descriptions on which they are based. What this suggests is that it may be judicious to use third-hand data only to alert ourselves to the possibility that languages have a particular structural property in question. Moreover, the reliability of third-hand data has not proved to be particularly commendable because errors of citation are not unheard of, and can actually be repeated in subsequent works. Surprisingly, this kind of error is far from infrequent (see Mallinson and Blake : – for one such perpetuated error of citation).

1 The Association for Linguistic Typology recognizes the importance of writing good grammars by regularly giving awards to those who have produced high-quality grammars, whether in book or dissertation form (http://www.linguistic-typology.org/awards.html#Gabelentz).

6.3 Texts

Grammars usually come with texts, which typically appear as appendices. Such texts tend to be of spoken language unless the language described is well codified. This tendency does not come as a surprise because many of the world’s languages do not have their own writing systems or have never used any writing system. Texts included in grammars are often accompanied by translations in English or other major languages in which grammars are written. They may also be glossed morpheme by morpheme, but this is not always the case. Typically, linguistic typologists may need to make use of texts because they cannot find examples of the structural property of interest in the grammars that include them. Since grammars provide only a small number of texts, if any, texts cannot serve as a major source of data. For instance, A Grammar of Toqabaqita, Volumes  and  (Lichtenberk )—one of the most comprehensive and longest grammars—provides only two texts, which account for twenty-six pages out of the total of , pages (i.e. just under %). Two or even a handful of texts are not likely to feature particular structural properties, and not all grammars provide texts—the presence of texts in many grammars actually reeks of ‘tokenism’ (e.g. spoken discourse in language X looks like this), probably because publishers are unwilling to bear the costs of printing lengthy texts. Thus, texts appended to grammars may best be used in conjunction with their host grammars and other sources of data. Moreover, most texts appearing in grammars tend to be narratives, more frequently than not mythical, religious, or literary in nature. So, while these texts may be of spoken language, they may not necessarily reflect the way that naturally occurring conversation works, especially because mythical or religious texts tend to have been passed down many times, more or less in original [read: archaic] form, from generation to generation.
The major difficulty in using texts for typological research lies in the fact that, unlike grammars, they are not written—they are never meant to be!—in such a way as to illustrate specific structural properties of typological interest. This makes it very difficult to use texts for purposes of cross-linguistic comparison. The heterogeneity of texts in terms of topics or themes makes it even more difficult to draw on them when investigating particular structural properties; it would be impossible to have access to texts in each sample language that contain data exemplifying the structural property under investigation. Even if a particular text contains apparent instances of a certain structural property, it may not be possible to determine, on the basis of that text alone, whether that property is systematic, regular, or primary in a given language. It may possibly be merely a minor property that just happens to be attested randomly in the particular text. This can only be determined by referring to grammars or other sources of data, including a large number of further texts. To wit, texts alone—as typically provided in grammars—may not be a useful or reliable source of data for typological research. However, there may exist one particular kind of text that can potentially be used profitably for cross-linguistic research. There are ‘identical’ texts written in many different languages, that is, texts considered to be translational equivalents of the original. Such translationally equivalent texts are known as parallel texts, and certain parallel texts are distributed across a relatively large number of languages. Cysouw and Wälchli (: ) call such widely distributed parallel texts ‘massively parallel texts’ (or MPT). The primary example of MPT is the Christian Bible, which has been translated into the largest number of languages (Dahl : ). The Wycliffe Bible Translators’ website reports that as of  the New Testament has been translated into , languages, while the entire Bible has been translated into  languages.
Moreover, there are , additional languages that have access to some portions of the Bible. This means that the Bible, as an MPT, is available, wholly or partially, in .% of the world’s languages. Two facts suggest that the Bible may make a good MPT for linguistic typology. First, Bible translations have been carried out not just for well-known or widely spoken languages but also for languages that are often not well represented in grammatical descriptions. Second, many Bible translations are readily available, although it may require some effort to get hold of copies of the Bible translated into minority languages—university libraries normally do not hold Bible translations. There are also novellas, novels, or books that have been translated into many languages, e.g. Antoine de Saint-Exupéry’s Le petit prince, J. K. Rowling’s Harry Potter series, or Vladimir Lenin’s State and Revolution. These MPTs may also provide cross-linguistic data for typological research.2

While the use of MPT is still in its infancy in linguistic typology (e.g. Stolz ; Wälchli  and ; Stolz et al. ), it is worthwhile to discuss this source of data in terms of strengths and weaknesses (Cysouw and Wälchli ). There are some clear reasons why one may like to make use of MPTs for typological research. First, the best MPT, the Bible, has been translated into a large number of languages, reasonably well distributed genealogically and areally around the world.3 Second, once the researcher has decided on a specific structural property and located particular passages in the original where that structural property is attested, it is a straightforward matter to look at the corresponding passages in translated MPTs in order to carry out a typological analysis. The researcher does not need to examine the whole translation in order to find relevant data, because she already knows exactly where to look for them in MPTs, e.g. the Gospel of Mark : or Le petit prince ch. : first sentence. Third, MPTs may produce statistical data that may not be retrieved easily from grammars. For instance, Stolz (: ) reports that in MPTs of Le petit prince written in European languages, the frequencies of the translational equivalents of avec ‘with’ in the original French text cut across genealogical boundaries. For instance, languages that behave differently from genealogically related languages behave more like genealogically unrelated neighbouring languages (i.e. the influence of contact). This kind of frequency-based data may never be gleaned from reading individual grammars.
More interestingly, the use of MPTs has revealed an areal ‘cline from the European Southwest to the Northeast, including a centre–periphery dichotomy’ (Stolz : ). To wit, analysis of MPTs may lead to the discovery of areal patterns of structural properties. Fourth, while not related directly to cross-linguistic research, it may be useful to compare different MPTs translated from the same original into one and the same language when investigating language-internal variation. It may also be possible to compare MPTs of the same original from different time periods and study typological changes in one and the same language (e.g. SOV most frequent in the oldest MPT but SVO most frequent in the newest MPT). A word of warning, however, is needed especially when MPTs produced over a prolonged period of time, e.g. the Bible, are used. There may be multiple originals, themselves written in different languages and at different times, from which the translations have been undertaken. This is indeed the case with the Bible. What this means is that some of the structural differences between cognate MPTs may be ascribed directly to differences in the source languages from which they have been translated, not to differences between the target languages into which they have been translated.

These strengths notwithstanding, there are some weaknesses associated with using MPTs for typological research. First, almost all MPTs are texts of written language, typically produced in standardized varieties or specialized registers. For example, the Bible tends to use a highly specialized register, very different from ordinary spoken language. There may also be the issue of different genres used in MPTs. Thus, some structural properties or tendencies attested in MPTs may well be specific to particular varieties, registers, or genres. Second, there is always the possibility that translated MPTs may have been influenced structurally by the language in which the original is written. Thus, there may be structural properties or tendencies attested in MPTs that are more characteristic of the language of the original than of the language of the translated text (i.e. literal translation from the original). Third, there is no guarantee that all translations have been made from one and the same original written in one and the same language. Far more frequently than not, translations are made from other translations of the original, because it is easier to translate from a language genealogically close to the target language than from the genealogically unrelated source language, or because there is a lack of qualified translators working in the source language. Finally, because translated MPTs are written in many different languages, the researcher must be familiar with different writing systems—or even different orthographies if MPTs are written in obsolete or archaic orthographies. Otherwise, the researcher may have to hire people who can read unfamiliar writing systems (and also transcribe relevant passages into IPA symbols). In addition, MPTs are not glossed morpheme by morpheme; some linguistic analysis may have to be carried out. When the number of languages involved is large, this preliminary work can turn out to be time-consuming and costly.

2 A useful database for sourcing MPTs is located at UNESCO’s Index Translationum (http://www.unesco.org/xtrans), which provides an international bibliography of books translated in the world.

3 There are areas in the world where Bible translators arrived too late, as languages spoken in those areas had already died out or become moribund, e.g. the majority of the Australian languages.
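The passage-lookup procedure described in this section (fix a reference such as book, chapter, and verse in the original, then read off the corresponding passage in each translation) amounts to a simple keyed alignment, which can be sketched as follows. The language names and verse texts below are invented placeholders, not real translations.

```python
# Sketch: reference-keyed lookup across massively parallel texts (MPTs).
# Language names and verse texts are invented placeholders.
mpt = {
    "lang_x": {("Mark", 1, 9): "placeholder rendering in language X"},
    "lang_y": {("Mark", 1, 9): "placeholder rendering in language Y"},
}

def aligned_passages(mpt, ref):
    """Return the passage at the same (book, chapter, verse) reference
    in every translation that contains it; there is no need to scan
    whole translations once the reference is fixed."""
    return {lang: text[ref] for lang, text in mpt.items() if ref in text}

passages = aligned_passages(mpt, ("Mark", 1, 9))
```

The same keying works for any MPT with stable subdivisions, e.g. chapter and sentence numbers in Le petit prince.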

6.4 Online typological databases

With the wide availability of computer technology, recent years have witnessed the emergence of online typological databases (e.g. see Everaert et al. ). Some of the advantages of online databases are glaringly obvious. Most online typological databases are readily available to the public—access to certain databases may require permission from their creators/owners. They can be easily accessed by anyone from anywhere at any time, provided, of course, that one has access to the Internet. Moreover, online typological databases can be updated or improved with new data or analyses, corrections, modifications, and elaborations as long as the sites are supported by their host organizations. Online typological databases may vary in terms of depth or breadth of coverage. Thus, some databases cover a large number of languages but the data that they provide may be restricted to selected structural properties, while others deal with a small number of languages, providing detailed data for each language. Similarly, some databases concentrate on a small number of structural properties while others cover a wide range of properties. Some online typological databases may even allow the user to manipulate typological variables (i.e. structural properties) in different or novel combinations. This way, it may be possible to ascertain whether hitherto unknown correlations or associations exist between individual structural properties already investigated independently of each other. However, this kind of research should be carried out with a great deal of caution because different language samples—probably genealogically and/or areally biased—may have been used for the investigation of different structural properties, even within the same database; the genealogical and/or areal distribution of sample languages may not be uniform or consistent across structural properties examined together for possible correlations.
Thus, structural correlations ‘uncovered’ in this manner may well turn out to be specious or accidental due to genealogical and/or areal bias across the sample languages compared (see Dryer  for some pitfalls of the method of testing correlations based on a typological database). Some of the well-known, useful online databases are:

(i) Typological Database System Project (at languagelink.let.uu.nl/tds/index.html): a web-based service that provides integrated access to a collection of independently developed online typological databases;
(ii) The World Atlas of Language Structures Online or WALS Online (at wals.info): the online, expanded version (third and final edition, ) of Haspelmath et al. (), with data and distributional world maps in terms of  properties, updated whenever corrections or additions are made; the number of languages surveyed varies from one structural property to another, however;
(iii) AUTOTYP (at www.spw.uzh.ch/autotyp): an online database established and administered by two individual linguists; only subsets of the data are publicly available, with some of their research findings included in WALS (Online);
(iv) Syntactic Structures of the World’s Languages (at sswl.railsplayground.net): an open-access, searchable database that allows the user to discover which of the selected morphological, syntactic, and semantic properties are to be found in a language and also how these properties may relate across languages;
(v) Jazyki mira ‘Languages of the World’ (at http://www.dblang.ru/en/Default.aspx): an open-access database established by a group of Russian linguists, with data on  languages in terms of a good number of structural properties including phonological ones, with an added function of comparing any two of the listed languages at a time;4
(vi) Surrey Morphology Group Databases (at http://www.surrey.ac.uk/englishandlanguages/research/smg/webresources/index.htm): multiple databases with differently sized language samples,

4 Polyakov et al. (: ) say that Jazyki mira contains data for  languages but the physical website provides data for  languages only. Valery Solovyev (personal communication) explains that this discrepancy is due to the limited capability of the web version of Jazyki mira, although the CD version contains the full range of data and search functions, adding that plans to make the whole package available for downloading are under way.




dealing largely with morphological properties such as agreement or suppletion;
(vii) Atlas of Pidgin and Creole Language Structures Online or APiCS Online (at apics-online.info): the online version of Michaelis et al. (), with data and distributional world maps on  pidgin and creole languages in terms of  structural properties; also designed to allow comparison with data from its sister database WALS Online;
(viii) Summer Institute of Linguistics (SIL) Language Data (at http://www.sil.org/linguistics/language-data): an open-access site maintained by SIL, one of the major Christian organizations, with links to its own sites in different countries; while its primary objective is Bible translation, SIL also produces grammatical descriptions of languages, many of which are spoken in some of the remotest parts of the world;
(ix) UCLA Phonological Segment Inventory Database (at http://www.linguistics.ucla.edu/faciliti/sales/software.htm; also at web.phonetik.uni-frankfurt.de/upsid.html): a somewhat ‘orphaned’ database on the phonological systems of  languages, no longer maintained by its original creators;
(x) PHOIBLE Online (at http://phoible.org): an online repository of cross-linguistic phonological inventory data, extracted from grammatical descriptions and tertiary databases and compiled into a single searchable convenience sample; the  edition consists of , inventories containing , segment types attested in , languages; and
(xi) StressTyp (at http://st.ullet.net): an online database maintained by three linguists, with data on stress and accent patterns in over  languages, with nearly all language families represented.
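The caution about testing correlations across features coded on different samples can be made concrete with a short sketch. The feature names, values, and language labels below are invented; the point is that a cross-tabulation can only be computed over the languages attested for both features, so any genealogical or areal balance of the original samples may be lost in the intersection.

```python
from collections import Counter

# Sketch: cross-tabulating two typological features coded on partially
# overlapping language samples. All names and values are invented.
feature_a = {"lang1": "SOV", "lang2": "SVO", "lang3": "SOV"}
feature_b = {"lang1": "postpositions", "lang3": "postpositions",
             "lang4": "prepositions"}

# Only languages coded for BOTH features can enter the table;
# lang2 and lang4 silently drop out, potentially skewing whatever
# genealogical or areal balance the original samples had.
shared = feature_a.keys() & feature_b.keys()
table = Counter((feature_a[lang], feature_b[lang]) for lang in shared)
```

Any apparent correlation read off such a table should therefore be checked against the genealogical and areal composition of the intersected sample, not of the two original samples.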

While they are useful and readily available to whoever wishes to make use of them, we need to bear in mind that online typological databases have been constructed for specific research agendas, which may not necessarily coincide with our own. Thus, if we wish to investigate structural properties that have so far received little or no attention, or possible correlations involving such properties, we are not likely to find relevant data or information in existing online typological databases. Online typological databases may therefore be of somewhat limited usefulness for purposes of new or innovative projects. Nonetheless, they typically provide other useful data, including genealogical, geographical, or even sociolinguistic information as well as the bibliographical details of relevant references, including grammars; they will most likely reduce considerably the amount of groundwork that goes into selecting sample languages, or searching and reviewing the primary literature on individual languages.

6.5 Native speaker elicitation: direct or indirect

Linguistic typologists may sometimes work with native speakers, typically in conjunction with other methods of collecting data, e.g. use of grammars and texts. This method of collecting data may sound like the best method, and it is indeed the primary method used in linguistics. However, it poses an enormous amount of practical difficulty for the purposes of typological research, as discussed in §.. Even if we could somehow manage to find a native speaker for every language in our sample, it would be unwise to work with only one native speaker per language. It is well known in social science research that:

[o]ften people who make themselves most readily available to an outsider are those who are marginal to the community, and may thus convey inaccurate or incomplete information and interfere with the acceptance of the researcher by other members of the group. (Saville-Troike : )

We may actually need to work with more than one native speaker for each language for the sake of verification or confirmation. Indeed, working with native speakers in typological research involves the same host of problems encountered in any research involving (live) human subjects. For instance, as is also well known to social scientists, people may often answer questions in such a way as to please the researcher (the ‘courtesy bias’) or deliberately mislead the researcher by offering inaccurate or incomplete information (the ‘sucker bias’) (Saville-Troike : ). Finding more than one reliable native speaker—not just any native speaker—for a large number of languages will no doubt be a herculean task. In view of the impracticality of relying exclusively on this direct method in linguistic typology, we may wish to keep it as a last resort, especially when subtle grammatical points in certain sample languages call for clarification or further analysis.

OUP CORRECTED PROOF – FINAL, 20/11/2017, SPi

DATA COLLECTION

A variation on native speaker elicitation may be to collect data by proxy. The researcher can ask linguists or language specialists to consult native speakers on her behalf—of course, some of these people may be native speakers of sample languages themselves. For instance, in the WALS (Haspelmath et al. ), eighty-five linguists or language specialists—some of whom were probably native speakers of the languages that they ‘represented’—were recruited in addition to the use of a -language sample and a further sample of  additional languages, so that the contributors to the WALS would be able to call upon their expertise if and when needed—linguistic experts on call, as it were. Linguists or language specialists approached this way may be able to provide data without any delay, having already investigated the structural property in question themselves.

When collecting data from native speakers or linguists/language specialists, it may also be possible to make use of a questionnaire. A set of written questions about the grammatical phenomenon to be studied can be sent out to linguists, language specialists, or native speakers. A sample question taken from Dahl’s () questionnaire-based study of tense, aspect, and mood categories is provided in Figure 6.1. The context given for the question can be linguistic or non-linguistic (the latter in Figure 6.1), and is provided before the sentence to be translated. Also, the predicate, i.e. the verb or the combination of the verb and the associated adjective, is capitalized and given in infinitive

No. 1

ANALYSIS

CONTEXT  Standing in front of a house

Sentence to be translated (omit material with parentheses)
The house BE BIG

Same translation as sentence No.

TRANSLATION:

Figure 6.1 A sample question (adapted from Dahl : )



form (e.g. ‘BE BIG’) in order to ‘minimize influence from English when translating’, that is, to ‘avoid literal translations of English [tense, aspect, and mood] categories’ (Dahl : ). Sometimes, further phrases or words may be added to the sentence to be translated for a more precise interpretation, and these additional elements are not to be translated (i.e. ‘omit material with parentheses’ in Figure 6.1).

One of the major advantages of using a questionnaire is that the researcher can potentially collect high-quality data from a wide range of languages by asking questions that cannot be answered by reading grammars. The researcher can indeed design a questionnaire in such a way that specific, detailed questions can be asked about the structural property of interest.

Despite its advantages, this method of data collection has its own share of difficulties too. For one thing, it may be time-consuming and costly to design and implement a questionnaire, especially when the researcher is dealing with a large language sample on a small budget. However, with the advent of electronic technology (e.g. e-mail, online survey sites such as FluidSurveys and SurveyMonkey, or electronic discussion lists such as the Linguist List and the Association for Linguistic Typology List, to which a large number of linguists around the world subscribe),5 execution of a questionnaire may no longer be as time-consuming and costly as it used to be. Through an electronic network we can instantly get in touch with a number of linguists or language specialists—provided that they are willing enough to respond to our request and complete electronic questionnaires (within a reasonable span of time). However, there may still be problems with carrying out questionnaires. Often the reliability or credibility of respondents may be in doubt or in need of confirmation.
Of course, this problem can be assuaged if qualified respondents are first selected by means of careful planning, screening, etc., and if only those who pass muster (e.g. with their credentials checked) are then approached. Even if only qualified consultants have been identified and recruited, which languages to include in the investigation may eventually have to depend more on the availability of qualified respondents than on a statistically based sampling method. In other words, one is very likely to end up with a convenience sample rather than a statistically constructed sample (see Chapter ).

5 The Linguist List and its related resources are accessible at http://linguistlist.org, and the ALT List and its resources are also available at www.linguistic-typology.org.



There is probably no greater potential problem with using a questionnaire than not having a ‘perfect’ questionnaire in the first place. The importance of designing a ‘perfect’ questionnaire cannot be overemphasized, for two reasons.

First, while the researcher knows what she wishes to find out by administering a questionnaire, those who complete the questionnaire may not necessarily understand what the researcher wishes to find out. Thus, it is vitally important to create the best possible questionnaire by means of clear, precise, and unambiguous instructions and/or questions. This is even more crucial when the questionnaire makes reference to grammatical categories (in the case of a questionnaire sent out to linguists or language specialists). For instance, the researcher must explain clearly and precisely what she means by grammatical categories such as subject, which may be defined in different ways, e.g. syntactically, semantically, or pragmatically. The definitions of structural properties, and of any other properties used in those definitions, must be presented as clearly and precisely as possible.

Second, once the questionnaire has been returned, there is most probably no second opportunity to seek further information. Sending out a second round of questions, especially to only some of the respondents, is not an option, as this will certainly compromise the integrity of the whole project. It goes without saying that the questionnaire must include everything that the researcher wants to ask. Thus, the major problem with questionnaires (i.e. poorly designed questionnaires) can only be avoided at the very initial stage of a project: a sufficient amount of time, effort, and resources must be spent on the designing of a questionnaire.

There are also other issues that need attention when creating a questionnaire.
While it is important to formulate culturally neutral questions, it may not always be possible to do so, given that a large number of languages (and hence a large number of cultures) are involved in typological research. One poignant example is provided in Dahl (: ). When asked to answer questions relating to the use of the verb kú ‘die’ in Isekiri (Defoid; Niger-Congo: Nigeria), Dahl’s language consultant replied that ‘the king does not die [but] he joins his ancestors’. Nonetheless, it is important to keep questions as culturally neutral as possible. To that end, Dahl (: ) allowed his respondents to substitute more relevant words for problematic ones.

Moreover, it may be important not to ask more questions than is absolutely necessary. The researcher should, of course, ask all questions that need to be asked

but also be mindful of the fact that asking too many questions is not a sensible thing to do. Respondents are known to suffer from so-called ‘questionnaire fatigue’, and we cannot expect linguists or language specialists to be any different from ordinary people when answering a questionnaire (also Dahl : ). Anyone who has completed a questionnaire will know how quickly the initial enthusiasm for completing it runs out.

Finally, it may pay to give some thought to the ordering of questions. Dahl (: ), for instance, grouped together similar questionnaire sentences, or one and the same questionnaire sentence used in different contexts. However, some of his respondents pointed to the possibility that some people might have been led to try ‘to differentiate contiguous sentences whenever possible’ when this was clearly not the researcher’s intention. Moreover, when ordering questions, it may be useful to place easy, simple questions before difficult, complex, or detailed ones so that respondents can gradually familiarize themselves with what is being investigated.

The foregoing suggests that the use of a questionnaire in linguistic typology, as in other disciplines, requires a large amount of careful planning, designing, and screening. Thus, it is a good idea to trial a questionnaire on a small number of respondents—for a small number of languages—in order to ascertain whether it does what it is designed to do. Doing a pilot test can potentially save a lot of time, effort, and resources further down the track because it helps the researcher identify and eliminate any mistakes, flaws, problems, or ambiguities in the questionnaire. This way, the questionnaire can be fine-tuned once or more prior to being distributed to all respondents in final form.

In this context, it is worth mentioning briefly the Leningrad Typology Group (or LTG hereafter), whose heavy use of questionnaires is legendary within linguistic typology circles (e.g. 
Song : –). The LTG was set up in the early s in the Institute of Linguistics at the USSR (now Russian) Academy of Sciences in Leningrad (now St Petersburg). From its inception, the LTG was conceived of as a team made up of a leader, theorists, and specialists, all working together on projects. In the LTG, the leader first decides on projects, which are then discussed thoroughly by the leader and the theorists in terms of theoretical issues, literature review, relevant facts to be covered in language specialists’ reports and the like. This discussion, in turn, forms the theoretical foundation of a questionnaire, which 

serves as the bounds within which the language specialists are expected to describe languages of their specialization.
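As a closing illustration of questionnaire mechanics, an item like the one in Figure 6.1 can be modelled as a simple record so that wording, layout, and ordering stay consistent across all respondents. A minimal sketch in Python; the field names and `render` layout are my own, not Dahl's:

```python
from dataclasses import dataclass

@dataclass
class QuestionnaireItem:
    number: int
    context: str            # linguistic or non-linguistic framing
    sentence: str           # predicate capitalized and in infinitive form
    note: str = "omit material with parentheses"

    def render(self) -> str:
        # Lay the item out roughly in the style of Figure 6.1.
        return (f"No. {self.number}\n"
                f"CONTEXT {self.context}\n"
                f"Sentence to be translated ({self.note}) {self.sentence}\n"
                f"TRANSLATION:")

item = QuestionnaireItem(1, "Standing in front of a house", "The house BE BIG")
print(item.render())
```

Keeping each item as data rather than free text makes it trivial to reorder items (e.g. easy questions first) or to pilot a subset before distributing the final version.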

6.6 Levels of measurement and coding

When structural properties (or variables) are investigated, languages may differ with respect to the values that they are to be assigned. For instance, in terms of the variable of basic word order, some languages are given the value of SOV, some the value of SVO, and so on. What kind of value is to be given to languages depends on the kind of variable or structural property. This is referred to technically as the level of measurement. There are four levels of measurement: (i) categorical (or nominal) variables; (ii) ordinal variables; (iii) interval variables; and (iv) ratio variables.

Most typological properties are categorical variables: each language is put into one particular category. For instance, Bakker (: ) points out that the vast majority of the variables investigated in the WALS (Haspelmath et al. ) are categorical variables. The basic word order variable mentioned above is of this kind: languages are put into one particular word order type, be it SOV, SVO, VOS, VSO, OVS, OSV, or none (i.e. no dominant word order). Another example of this kind of variable is the presence or absence of articles: languages either possess or lack articles.

Some variables are ordinal in that the data can be put into a ranking system. For instance, Maddieson (a) surveys the world’s languages in terms of the size of the set of consonants, so that languages are ranked from small (– consonants), moderately small (– consonants), average (– consonants), moderately large (– consonants), to large ( or more consonants).

Interval variables are similar to ordinal ones, with one difference: the values in an interval variable must have the same distance from each other. Bakker (: ) mentions as an interval variable the size of the set of basic colours, as defined by Kay and Maffi ().
For instance, a language that has six basic colours makes twice as many basic-colour distinctions as a language that has three basic colours. These kinds of variables are probably few and far between in languages. Finally, the last kind, a ratio variable, is akin to an interval variable with one difference: a ratio variable has a natural zero point, i.e. a point at which there is no measurable amount of the variable being measured. This kind of variable is probably very rare among structural properties.
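The four levels license different operations: categorical values support only equality tests, ordinal values can additionally be ranked, and ratio values support full arithmetic with a true zero. A toy sketch in Python with invented data (the rank labels follow Maddieson's classes; all figures are illustrative, not from any survey):

```python
# Categorical (nominal): only equality/inequality is meaningful.
basic_order = {"Torwali": "SOV", "Tsimshian": "VSO", "Tukang Besi": "VOS"}
assert basic_order["Torwali"] != basic_order["Tsimshian"]

# Ordinal: values can be ranked, but the distances between ranks
# are not meaningful. Labels follow Maddieson's inventory-size classes.
SIZE_RANK = ["small", "moderately small", "average", "moderately large", "large"]

def larger_inventory(a: str, b: str) -> bool:
    """True if class a outranks class b on the ordinal scale."""
    return SIZE_RANK.index(a) > SIZE_RANK.index(b)

# Ratio: counts with a natural zero point, e.g. the number of tokens
# of some construction in a translated text (invented figures).
token_counts = {"lang_X": 0, "lang_Y": 14, "lang_Z": 28}
assert token_counts["lang_X"] == 0                            # natural zero
assert token_counts["lang_Z"] == 2 * token_counts["lang_Y"]   # ratios meaningful
```

Note that `larger_inventory` deliberately offers no subtraction of ranks: "large minus small" has no interpretation on an ordinal scale, which is exactly what separates ordinal from interval measurement.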

One possible ratio variable may be something like the frequency of a linguistic expression or a structural property in MPTs. In §., Le petit prince was discussed as an MPT that has previously been used to investigate the frequency of the target language equivalent of avec ‘with’ in the French original. In this kind of study, there will, in principle, be equal distances between two (theoretically possible) adjacent frequencies (e.g.  vs  or  vs  instances of the target language equivalent of avec), with a (theoretically possible) zero point (i.e. no instance of the equivalent of avec in the MPT translated into language X).

Generally speaking, as one moves from ratio to interval to ordinal to categorical variables, the use of statistical tools becomes more and more limited. This is not unexpected, because the four different kinds of variable differ in terms of quantitative manipulation, with categorical variables the least, and ratio variables the most, susceptible to being numerically rendered.

Once they have been collected and analysed, data need to be entered into some kind of coherent system, following the typology that has emerged from the analysis (see Chapter ); this involves a coding procedure, whereby the data are represented in a uniform manner for purposes of cross-linguistic comparison. Typically, some form of data matrix is used so that all sample languages are lined up, in alphabetical order, with their respective values for a given variable. For instance, for categorical variables such as basic word order, the data matrix, with additional genealogical and areal information, will look like Table 6.1.

Table 6.1 Data matrix for basic word order at the clausal level

Language      Basic WO   Family: Genus                   Macroarea
[...]
Torwali       SOV        Indo-European: Indic            Eurasia
Tsimshian     VSO        Penutian: Tsimshianic           North America
Tubu          SOV        Nilo-Saharan: Western Saharan   Africa
Tukang Besi   VOS        Austronesian: Celebic           Oceania
Tuyuka        SOV        Tucanoan: Tucanoan              South America
[...]
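A matrix like Table 6.1 is easy to keep as a list of uniform records, using mnemonic values such as 'SOV' rather than numeric codes and reserving a field for the bibliographical source of each entry. A minimal sketch in Python (the rows are taken from Table 6.1; the source field is left unfilled here rather than invented):

```python
import csv
import io

FIELDS = ["language", "basic_wo", "family_genus", "macroarea", "source"]

matrix = [
    {"language": "Torwali", "basic_wo": "SOV",
     "family_genus": "Indo-European: Indic", "macroarea": "Eurasia", "source": ""},
    {"language": "Tsimshian", "basic_wo": "VSO",
     "family_genus": "Penutian: Tsimshianic", "macroarea": "North America", "source": ""},
    {"language": "Tukang Besi", "basic_wo": "VOS",
     "family_genus": "Austronesian: Celebic", "macroarea": "Oceania", "source": ""},
]

# Mnemonic labels keep the matrix readable and make queries transparent:
sov_languages = [r["language"] for r in matrix if r["basic_wo"] == "SOV"]

# Export as CSV so the matrix can be shared and independently checked.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(matrix)
print(sov_languages)   # → ['Torwali']
```

Because the query filters on the label `"SOV"` itself, there is no numeric code table to get out of sync with the data, which is precisely the advantage of mnemonic coding discussed below.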

The values for basic word order can, of course, be coded numerically, e.g.  for SOV,  for SVO,  for VOS, but mnemonic or meaningful labels such as SOV may be more suitable for typological investigation as 

they can not only prevent coding mistakes but also make data and analysis more readable and accessible (especially to outsiders). Converting meaningful labels such as SOV into numerical labels is just another unnecessary process that can potentially give rise to coding mistakes. Furthermore, another column may be added to the data matrix (in Table 6.1) in order to record the bibliographical source of each data entry for future checking or reference (e.g. Lunsford :  for SOV in Torwali).

Some structural properties may call for a more complicated data matrix, with multiple variables. Bakker (: –) gives one such example, taken from Siewierska and Bakker (), as in Table 6.2.

Table 6.2 Data matrix for subject agreement

LG_NAME   S_AGR_PRS   S_AGR_NUM   S_AGR_GND   S_AGR_INEX
Abipon                sgpl        NO          Unified We
Abkhaz                sgpl        sg          Unified We
Abun      IRR         IRR         IRR         IRR
[...]

The data matrix in Table 6.2 contains the four variables that capture the way the subject is marked on the verb. For instance, in Abkhaz the subject (i.e. S) agrees with the verb for all three PeRSons (i.e. ), in the singular and plural (i.e. sgpl) for NUMber, and in the second and third person singular for GeNDer (i.e. sg), while the first person plural agreement makes no distinction between INclusive and EXclusive (i.e. Unified We). Abun is coded as ‘IRR(elevant)’ for all the variables because this language does not mark the subject on the verb at all. This ‘IRR’ situation contrasts with ‘NO’ for S_AGR_GND in Abipon, which marks the subject on the verb for person and number but not for gender. Note that all the coding labels in Table 6.2 are also mnemonic, and anyone familiar with glossing abbreviations should have little trouble understanding what they stand for.

When data are entered into a data matrix, it may also be a good idea to have someone other than the researcher check whether they are coded correctly, and even whether the values for each sample language (or at least for randomly selected languages) have been identified correctly. This should not be a problem if the definition of each value for a given variable is clearly formulated and laid out for purposes of verification or even replication.

This may indeed be how research is done when linguistic typologists work together on the same project. Two or more researchers can independently analyse and code the same data by using the set of definitions and the coding procedure that they have agreed on. The added advantage of working together this way, apart from checking the quality of coding itself, is that, while checking is being done, some of the original definitions may have to be refined, two variables may be collapsed into a single variable, or further variables may need to be recognized—all for better or more reliable results—because of emerging differences in analysis or coding between the co-researchers.
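The kind of double coding just described can be supported mechanically: two researchers code the same data independently from the agreed definitions, and every mismatch is flagged for discussion (and possibly for refining the definitions). A toy sketch in Python with invented codings:

```python
def disagreements(coder_a, coder_b):
    """Return (language, variable, value_a, value_b) for every mismatch
    between two independent codings of the same sample."""
    diffs = []
    for lang in sorted(set(coder_a) | set(coder_b)):
        a, b = coder_a.get(lang, {}), coder_b.get(lang, {})
        for var in sorted(set(a) | set(b)):
            if a.get(var) != b.get(var):
                diffs.append((lang, var, a.get(var), b.get(var)))
    return diffs

# Invented codings: the two coders disagree on Abipon's gender agreement.
coder_a = {"Abun": {"S_AGR_PRS": "IRR"}, "Abipon": {"S_AGR_GND": "NO"}}
coder_b = {"Abun": {"S_AGR_PRS": "IRR"}, "Abipon": {"S_AGR_GND": "IRR"}}

print(disagreements(coder_a, coder_b))
# → [('Abipon', 'S_AGR_GND', 'NO', 'IRR')]
```

A disagreement like NO vs IRR is exactly the kind that signals a definitional gap (does "no gender agreement" cover languages with no subject agreement at all?) rather than a mere slip of the keyboard.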

6.7 Concluding remarks

More for practical reasons than anything else, linguistic typologists tend to work with grammars or grammatical descriptions. When in doubt, they may need to seek assistance from other linguists, language specialists, or native speakers, access other sources of data (e.g. online typological databases, MPTs), or draw upon questionnaires. But, as noted in §., the use of grammars is not without problems. Thus, Croft (: ) is led to declare that ‘the [linguistic] typologist has to rely on faith in the qualities of the materials [read: grammars or grammatical descriptions] at hand’, and that ‘most of those materials do not inspire faith’.

Inspire faith they may not, but the situation may not be as hopeless as Croft makes it out to be. One of the virtues of working with a large number of languages is that it does to a certain extent offset the problem of faith in the qualities of grammars. If a certain structural property occurs in language after language [read: in grammar after grammar], we can be reasonably confident that this is a real phenomenon to be identified or at least worth investigating as such. To put it differently, grammars may fail to inspire faith individually but they may inspire faith collectively.

Study questions

1. Read Stassen’s definition of comparatives and related discussion in §. and try to create a short questionnaire (e.g. constructing ten to fifteen sentences to be translated into target languages) that can potentially be sent to linguists, language specialists, or native speakers for data collection. The definition of comparison utilized by Stassen (: ) reads:

   a construction counts as a comparative construction (and will therefore be taken into account in the typology), if that construction has the semantic function of assigning a graded (i.e. non-identical) position on a predicative scale to two (possibly complex) objects.

   Discuss any problems that you may envisage in constructing and administering such a questionnaire.

2. Find native speakers of three to five languages other than English (e.g. international students in your other classes) and ask them to complete the questionnaire that you have created for question 1. Review the collected data and see whether they are the kind of data that Stassen’s definition of comparatives intends to capture. Discuss whether the original questionnaire needs improvement, and if so, what kinds of improvement are necessary.

3. Find out about the typology of comparatives presented in Stassen () or in Stassen () and construct an appropriate data matrix. What variables should be included in that data matrix? Should there also be sub-variables? Enter the data collected for question 2 into the data matrix.

4. Consult available grammars of the languages that you have collected data from for question 2; find out whether they contain any discussion of comparative constructions (as defined in question 1); and compare the data from the grammars with your own. Based on these two different sets of data, do you think that you will still be able to say that the languages have the same type of comparative construction? If not, why does the discrepancy arise?

5. Visit the WALS Online and Jazyki mira sites and compare their search and other functions in terms of similarities and differences (see Polyakov et al. ).

Further reading

Bakker, P. (). ‘Appendix: A Short Note on Data Collection and Representation in Linguistic Typology’, in J. J. Song (ed.), The Oxford Handbook of Linguistic Typology. Oxford: Oxford University Press, –.
Cysouw, M. and Wälchli, B. (). ‘Parallel Texts: Using Translational Equivalents in Linguistic Typology’, Sprachtypologie & Universalienforschung : –.
Dahl, Ö. (). Tense and Aspect Systems. Oxford: Basil Blackwell, ch. .
Everaert, M., Musgrave, S., and Dimitriadis, A. (eds.) (). The Use of Databases in Cross-Linguistic Studies. Berlin: Mouton de Gruyter.
Rasinger, S. M. (). Quantitative Research in Linguistics: An Introduction. 2nd edn. London: Bloomsbury.
Saville-Troike, M. (). The Ethnography of Communication. 3rd edn. Oxford: Blackwell, pp. –.
Wray, A. and Bloomer, A. (). Projects in Linguistics and Language Studies. 3rd edn. London: Routledge, ch. .



OUP CORRECTED PROOF – FINAL, 21/11/2017, SPi

7 Typological asymmetry
Economy, iconicity, and frequency

7.1 Introduction
7.2 Typological asymmetry
7.3 Economy and iconicity (in competition)
7.4 Typological asymmetry = frequency asymmetry?: iconicity vs frequency
7.5 Concluding remarks

7.1 Introduction

This chapter and the next explore some of the important concepts in linguistic typology: typological asymmetry, categories, prototypes, implicational hierarchies, and semantic maps. These concepts are important because they help us make (better) sense of what we have discovered about the nature of human language. They are also important in that they motivate us to identify structural properties that may be susceptible to similar explanation, or to re-evaluate structural properties that have previously been explained differently. They may even show us where to look for other such structural properties.

This chapter focuses on typological asymmetry. The concept of typological asymmetry can be illustrated by the distribution of oral and nasal vowels in the world’s (spoken) languages. There are languages with oral vowels only (e.g. English (Germanic; Indo-European: UK), Kiribati

(Oceanic; Austronesian: Kiribati), and Swahili (Bantoid; Niger-Congo: Tanzania)), as well as languages with both oral and nasal vowels (e.g. French (Romance; Indo-European: France), Hindi (Indic; Indo-European: India), and Seneca (Northern Iroquoian; Iroquoian: US)), but no languages are known to have nasal vowels only (Hajek ). (Needless to say, there are also no (spoken) languages that lack both oral and nasal vowels.) In other words, oral vowels may occur with or without nasal ones, but nasal vowels can only occur in conjunction with oral ones. This typological asymmetry is summarized in () (particularly (b) as opposed to (c)):

()                                        Attested?
   a. Oral Vowels & Nasal Vowels:         Yes
   b. Oral Vowels & No Nasal Vowels:      Yes
   c. No Oral Vowels & Nasal Vowels:      No
   d. No Oral Vowels & No Nasal Vowels:   N/A
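A distributional gap like (c) can be checked mechanically: given a sample in which each language is coded for the two features, one tabulates the four logical types and sees which are attested. A toy sketch in Python with an invented mini-sample (real surveys such as Hajek's are far larger):

```python
from collections import Counter

# Invented mini-sample; each language coded for presence of oral/nasal vowels.
sample = {
    "English": {"oral": True, "nasal": False},
    "French":  {"oral": True, "nasal": True},
    "Swahili": {"oral": True, "nasal": False},
    "Hindi":   {"oral": True, "nasal": True},
}

# Each language falls into one of four logical types:
# (True, True)   = a. oral & nasal        (True, False)  = b. oral only
# (False, True)  = c. nasal only          (False, False) = d. neither
types = Counter((v["oral"], v["nasal"]) for v in sample.values())

# The claimed gap: type (c) is unattested.
assert types[(False, True)] == 0
print(types[(True, False)], types[(True, True)])   # → 2 2
```

Of course, an empty cell in a sample only suggests a universal; it is the repeated absence of type (c) across ever larger samples that justifies treating nasal-only vowel systems as genuinely unattested.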

When accounting for this kind of typological asymmetry, linguistic typologists typically invoke the concept of markedness. Thus, the asymmetrical distribution of oral and nasal vowels is explained by saying that nasal vowels are ‘marked’ (i.e. having the mark of nasality), and oral vowels ‘unmarked’ (i.e. lacking the mark of nasality). To put it differently, nasality is a marked feature in vowels. The markedness of nasality, in turn, explains why nasal vowels have a restricted distribution in comparison with their unmarked, oral counterparts. In linguistic typology (as well as in other linguistic theories such as Optimality Theory (e.g. McCarthy ; also §.)), the concept of markedness has long been used to account for a wide range of asymmetries attested in phonology, morphology, syntax, semantics, and the lexicon (see Greenberg’s  [] classic work on markedness).

The concept owes its origins to the theory of language developed by the Prague School of Linguistics in the s. Originally, Prague School linguists (e.g. Nikolai Trubetzkoy (–)) invoked markedness as a binary relation in order to account for various contrasts in the phonological systems of particular languages. For instance, take the voicing contrast between /t/ and /d/. These alveolar plosives share a number of articulatory properties, but they differ in terms of voicing. Thus, the difference between /t/ and /d/ can be understood in terms of the presence or absence of voicing. In the Prague School’s theory, this voicing contrast may be used to make a distinction between unmarked



and marked in languages such as German (Germanic; Indo-European: Germany): /t/ is regarded as unmarked as it lacks the voicing feature, while /d/ is regarded as marked as it contains the voicing feature. The marked status of the presence of voicing is demonstrated by the fact that in German the voicing contrast of the final obstruent—specifically the obstruent in the syllable-final or coda position—is neutralized so that it must always be voiceless. In other words, when voicing is irrelevant in the syllable-final position, it is the unmarked (=voiceless) member of the pair, e.g. /t/, that is chosen to the exclusion of the marked (=voiced) member, e.g. /d/. Thus, in German Rat /ra:t/ ‘advice’ and Rad /ra:d/ ‘wheel’ are both realized phonetically as [ra:t]. (Note that voicing in German obstruents resurfaces when the words in question host a suffix, as in Räte [rɛ:tə] ‘advices’ vs Räder [rɛ:dər] ‘wheels’.)

Subsequently, the concept of markedness was extended to other areas of language, i.e. grammatical, semantic, and lexical categories, through the work of another prominent Prague School linguist, Roman Jakobson (–). For instance, imperfective aspect is regarded as unmarked, because it does not express whether a given event is complete or not, while perfective aspect is taken to be marked, because it concerns ‘the absolute completion’ of an event (Jakobson  []: ). The present and the past can be analysed similarly as unmarked and marked, respectively, because the past indicates the sequence of two events, namely the narrated event preceding the speech event, ‘while the present implies no [such] sequence’ (Jakobson  []: ).

Jakobson also extended the scope of markedness from the level of individual languages to that of cross-linguistic observation, with the effect that unmarked properties are attested more frequently in the world’s languages than their marked counterparts.
This cross-linguistic extension can be illustrated by the markedness relationship between oral and nasal vowels, i.e. the former vowels as unmarked and the latter as marked. Hajek (), for instance, observes that in his sample,  languages have nasal vowels (in addition to oral vowels, of course), while  languages do not have nasal vowels. Thus, oral vowels are far more frequently attested in the world’s languages than nasal vowels are. Moreover, Hajek notes that in languages with oral and nasal vowels, the number of nasal vowels tends to be smaller than that of oral vowels. This disparity in number is attested in approximately % of his sample. In point of fact, there seem to be virtually no exceptions to the earlier claim (Ferguson ; Maddieson : –) that the number of nasal vowels is either the 

same as, or smaller than, that of oral vowels.1 Through the work of Joseph Greenberg in the s (e.g. Greenberg  [] in particular), the concept of markedness began to be systematically adapted to the study of language universals or universal preferences. In other words, instead of being confined to particular language systems, the concept of markedness was now shown to have cross-linguistic relevance; that is to say, the contrast between unmarked and marked manifests itself in all languages. Owing to this new perspective, the contrast between unmarked and marked could now be developed into language universals or universal preferences, expressed in the form of ‘if p, then q’ (see §.). For instance, the cross-linguistic tendency for nasal vowels never to exceed oral vowels in number entails that nasal vowels have fewer height/backness distinctions than oral vowels. This tendency, in turn, leads to the formulation of a language universal to the effect that vowel height/backness distinctions in nasal vowels imply their presence in oral vowels, but not necessarily the converse (Greenberg  []: ; but cf. Hajek ).

Moreover, in Greenberg’s work, markedness can now be thought of as a scale of a feature, not just a binary relation between unmarked and marked. That is to say, markedness can also be a matter of degree. For instance, number distinctions are interpreted as such a scale: the plural is more marked than the singular, and the dual more marked than the plural. Thus, an implicational hierarchy has been proposed to the effect that if a language has a more marked number distinction (e.g. dual), it will also have a less marked number distinction (e.g. plural and singular), but not necessarily the converse (e.g. dual but no plural) (Greenberg b: ; cf. Corbett : –). From this, can it also be predicted that there are more languages with less marked number distinctions (e.g. plural) than languages with more marked number distinctions (e.g. dual)?

Furthermore, Greenberg (b: –) recognizes markedness relations across different categories, converting them into implicational statements. For instance, gender is regarded as more marked than number, to the effect that if a language has the category

1 There seems to be one exception to this claim: Koyra Chiini (Hajek ). In this Nilo-Saharan language, there is one nasal vowel that lacks an oral counterpart, namely the front low nasal vowel /æ̃/. Note, however, that this exception has been brought about by the adoption of the nasal vowel in French loanwords (Hajek ).


of gender, it always has the category of number as well. The same markedness relation is also observed in agreement systems, so that an additional implicational statement can be formulated: if the verb agrees with a nominal subject or object in gender (i.e. more marked), it agrees with a nominal subject or object in number (i.e. less marked) as well. Note that these implicational statements are based on the typological asymmetry between number and gender: there are languages that have the category of number only, languages that have both the category of number and the category of gender, and languages that have neither category, but there are no languages that have the category of gender only, lacking the category of number. This typological asymmetry across the two disparate categories can be summarized:

()                                Attested?
   a. Number & Gender:            Yes
   b. Number & No Gender:         Yes
   c. Gender & No Number:         No
   d. No Number & No Gender:      Yes
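The pattern in this table is an implicational universal (gender implies number), and such a claim can be checked mechanically against any feature table. The sketch below uses invented, purely illustrative language entries (not real typological data) to show the logic:

```python
# Sketch: verifying the implicational universal "if a language has
# gender, it also has number" over a feature table. The language
# entries are invented for illustration, not real typological data.

def violates(features, antecedent="gender", consequent="number"):
    """A language violates 'antecedent implies consequent' iff it
    has the antecedent category but lacks the consequent one."""
    return antecedent in features and consequent not in features

toy_sample = {
    "Lang-A": {"number", "gender"},  # number & gender: attested
    "Lang-B": {"number"},            # number, no gender: attested
    "Lang-C": set(),                 # neither category: attested
}

violations = [name for name, feats in toy_sample.items() if violates(feats)]
print(violations)  # → []  (no language has gender without number)
```

Note that the universal forbids only one cell of the table (gender without number); the other three combinations are all logically compatible with ‘if p, then q’.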

7.2 Typological asymmetry

Typological asymmetry can be characterized in terms of two main parameters: (i) formal coding, and (ii) grammatical behaviour. Formal coding describes how the values of a grammatical category are expressed formally, e.g. some values exhibiting overt coding and others lacking overt coding. Grammatical behaviour concerns the morphological distinctions made for a grammatical category as well as the grammatical contexts in which a category may appear, e.g. unmarked values showing more morphological distinctions or occurring in more grammatical contexts than marked values.

7.2.1 Formal coding

Some instances of typological asymmetry involve differential formal coding: more marked values of a category are formally coded at least as much as less marked values of that category. In other words,


formal coding is asymmetrically distributed—within and across languages—in favour of marked over unmarked values if and when they are coded differentially. For instance, take the grammatical category of number, mentioned in §.. In Turkish, the singular form (i.e. less marked) is expressed without any number coding (e.g. zero in (a)), while the plural form (i.e. more marked) is overtly coded for number (e.g. -lar in (b)).

() Turkish (Turkic; Altaic: Turkey)
   a. çocuk           b. çocuk-lar
      child              child-PL
      ‘(a) child’        ‘children’
   c. kitap           d. kitap-lar
      book               book-PL
      ‘(a) book’         ‘books’

The same situation can be observed in Imbabura Quechua and in Niuean, where the plural form is overtly coded for number by -kuna or tau, respectively, while the singular form appears without any overt number coding as such:

() Imbabura Quechua (Quechuan: Ecuador)
   a. runa            b. runa-kuna
      man                man-PL
      ‘(a) man’          ‘men’
   c. wasi            d. wasi-kuna
      house              house-PL
      ‘(a) house’        ‘houses’

()

Niuean (Oceanic; Austronesian: New Zealand and Niue) a. e faiaoga b. e tau faiaoga ABS teacher ABS PL teacher ‘(the) teacher’ ‘(the) teachers’ c. e fua loku d. e tau fua loku ABS fruit papaya ABS PL fruit papaya ‘(the) papaya’ ‘(the) papayas’

In addition, there are languages that code both the singular and the plural form overtly, as in () and (), and languages that do not code number at all, whether singular or plural (i.e. zero coding), as in () and ().


() Bayso (Eastern Cushitic; Afro-Asiatic: Ethiopia)2
   a. lubán-titi          b. luban-jool
      lion-SG                lion-PL
      ‘(a) lion’             ‘lions’

() Banyun (Northern Atlantic; Niger-Congo: Senegal and Guinea Bissau)
   a. bu-sumɔl            b. i-sumɔl
      SG-snake               PL-snake
      ‘(a) snake’            ‘snakes’

() Pirahã (Muran: Brazil)
   a. xipóihí                 b. baaí
      ‘(a) woman/women’          ‘(a) pig/pigs’

() Chrau (Bahnaric; Austro-Asiatic: Vietnam)
   a. con                     b. chho’
      ‘(a) child/children’       ‘(a) tree/trees’

Where the unmarked and marked values of number are coded differentially—as in Turkish (), Imbabura Quechua (), and Niuean ()—it is the plural that is overtly coded while the singular appears without overt number coding. The converse situation, i.e. non-zero coding for the singular and zero coding for the plural, is said to be unattested.3 The distribution of formal coding of number, i.e. the singular and the plural, in the world’s languages can then be summarized:

2 Bayso also codes the paucal value, e.g. luban-jaa ‘a few lions (lion-PAUCAL)’ (Corbett : ).
3 Some languages may code the singular form overtly for number while not coding the plural form at all (i.e. zero coding). For instance, in Welsh the singular form plu-en ‘feather’ contains the number suffix -en, while the plural form, i.e. plu ‘feathers’, does not host a number morpheme. The overtly coded singular form is referred to as ‘singulative’, while the zero-coded plural form is known as ‘collective’. This unusual distribution of coding in the singulative vs the collective is known as ‘markedness reversal’ in the literature (Tiersma ). The received explanation for this particular example of markedness reversal is that some objects tend to occur in clusters or groups: we are more likely to observe an object such as a feather in a cluster than as an individual entity. This characteristic, in turn, is reflected in the presence of the number suffix in the singular form plu-en (i.e. less likely, or marked), and the absence of the number suffix in the plural form plu (i.e. more likely, or unmarked).




()

Attested? a. Singular–Zero & Plural–Non-zero: Yes (e.g. Turkish) b. Singular–Non-Zero & Plural–Zero: No c. Singular–Non-Zero & Plural–Non-Zero: Yes (e.g. Bayso) d. Singular–Zero & Plural–Zero: Yes (e.g. Pirahã)

Thus, there is a typological asymmetry in terms of the formal coding of the singular and the plural, to the effect that (a) is attested while (b) is not. (The other two permutations, i.e. (c) and (d), are left out of consideration here because there is no differential coding involved in them.) Based on the typological distribution in (), a language universal can be formulated to the effect that if a language codes the singular overtly, it must also code the plural overtly, but the converse—if a language codes the plural overtly, it must also code the singular overtly—is not the case.

This kind of typological asymmetry, that is, in terms of non-zero vs zero coding, is also observed in derivational categories, one example of which is the asymmetric coding relationship between the basic verb and the corresponding causative verb (Greenberg  []: –). The basic verb (i.e. the unmarked member of the basic–causative pair) lacks the meaning of causation, while the derived causative verb (i.e. the marked member) contains the meaning of causation. Logically speaking, the absence of the causative meaning could be coded as overtly as the presence of that meaning, or the basic verb could be overtly coded for the absence of the causative meaning, with the causative verb left uncoded for the presence of the causative meaning. In the majority of the world’s languages, however, the causative verb invariably has an overt expression of causation, while the basic verb lacks any overt coding for the absence of causation, as is the case in Korean, Turkish, and Bilaan (but see §. for further discussion).

() Korean (isolate: Korea)
   a. nok-            b. nok-i-
      melt               melt-CAUS-
      ‘(X) melt’         ‘cause (X) to melt’

() Turkish (Turkic; Altaic: Turkey)
   a. koş-            b. koş-tur-
      run                run-CAUS-
      ‘run’              ‘cause to run’


()

Bilaan (Bilic; Austronesian: Philippines) a. alob b. f-alob wash CAUS-wash ‘wash’ ‘cause to wash’

In other words, there is a typological asymmetry in the sense that one member of the basic–causative pair is coded overtly while the other member is zero-coded.

A further example of this kind of typological asymmetry comes from tense coding. Recall that the present is regarded as unmarked and the past as marked. For instance, in Khasi (Khasian; Austro-Asiatic: India) the past has an auxiliary while the present does not (Greenberg  []: ). Also consider the non-zero past tense coding -ed, as opposed to zero present tense coding, in English (e.g. present walk vs past walked). Again, it is the unmarked value of the tense category that lacks overt coding, with the marked value overtly coded for tense.

Typological asymmetry may also involve relative degrees of coding, i.e. more vs less coding, instead of non-zero vs zero coding. For instance, it is well known that in the world’s languages the positive, comparative, and superlative degrees of adjectives show a gradual increase in the amount of overt coding: the positive is coded the least (typically zero coding), the superlative the most, and the comparative in between, e.g. English big, bigger, biggest or Hungarian (Ugric; Uralic: Hungary) jó ‘good’, jobb ‘better’, leg-jobb ‘best’ (Greenberg  []: ). We can say that the positive is less marked than the comparative, which, in turn, is less marked than the superlative, because the quantity of meaning, as it were, increases from the positive to the comparative to the superlative. The positive meaning does not presuppose the comparative or the superlative meaning, while the comparative or the superlative meaning presupposes the positive meaning. If we say that X is big, we do not necessarily assume that X is bigger than something else or that X is the biggest of all; we are merely saying that X is not small. However, if we say that X is bigger than Y, we assume that X and Y each have a degree of bigness, and that X’s degree of bigness is greater than Y’s.
If we say that X is the biggest among X, Y, and Z, we think that X, Y, and Z each has a degree of bigness, and that X’s degree of bigness surpasses not only Y’s but also Z’s. This degree of markedness, in turn, is manifested in the quantity of coding in the positive, comparative, and superlative forms of adjectives (also see §.). 


7.2.2 Grammatical behaviour

Typological asymmetry is also observed in terms of grammatical behaviour, namely (i) the morphological distinctions to be made for a given grammatical category, and (ii) the grammatical contexts in which a given category can appear. These two characteristics will be illustrated in turn.

For instance, consider the gender distinction across different values of number. As has already been pointed out, the plural is more marked than the singular, and the dual is more marked than the plural. In English, the third person pronouns have morphological distinctions of gender in the singular (i.e. he [masculine], she [feminine], and it [neuter]), but these gender distinctions are not made in the plural: the single plural pronoun form they is used regardless of gender. Thus, the morphological distinctions maintained in the unmarked value, i.e. the singular, are neutralized in the marked value, i.e. the plural. Similarly, in Classical Arabic morphological distinctions in gender must be made for the third person pronouns in the singular and the plural, but not in the dual, for which a single form is used regardless of gender.

() Classical Arabic (Semitic; Afro-Asiatic)
               SG      PL      DUAL
   Masculine:  huwa    hum     humā
   Feminine:   hiya    hunna   humā

Cross-linguistic evidence for this particular typological asymmetry comes from Siewierska (: –): although there are exceptions (for instance, when the gender distinction is not based on sex but on humanness or animacy), of the  languages in Siewierska’s sample that have gender in their independent personal pronouns,  have gender only in the singular, as opposed to  with gender in the non-singular as well as in the singular, but there is only one language with gender in the non-singular but not in the singular. Greenberg ( []) provides more examples in support of this kind of typological asymmetry.
For instance, in German (Germanic; Indo-European: Germany) both the article and the adjectival declension have the same forms for the masculine, feminine, and neuter gender in the plural, while they have different forms in the singular (e.g. der Mann ‘the man’, die Frau ‘the woman’, and das Auto ‘the car’ vs die Männer ‘the men’, die Frauen ‘the women’, and die Autos ‘the cars’). In Hausa (West-Chadic; Afro-Asiatic: Nigeria and Niger), the masculine–feminine distinction is made for nouns and pronouns in the 


singular but not in the plural. Gender is not the only grammatical category where the different values of number make asymmetrical choices, as it were. For instance, in Classical Latin (Italic; Indo-European) the distinction between the dative and ablative cases, generally maintained in the singular, is not made in the plural. In languages such as Danish (Germanic; Indo-European: Denmark), German (Germanic; Indo-European: Germany), and Russian (Slavic; Indo-European: Russia), where the adjective agrees with the noun in number and gender, the gender distinctions made in the singular are not maintained in the plural.

Related to the lack of morphological distinctions in the marked value of a grammatical category is the use of more regular (or less irregular) morphology in the marked value, as opposed to more irregular (or less regular) morphology in the unmarked value (Greenberg  []: ; Bybee ). The lack of morphological distinctions implies morphological uniformity, which may contribute to regular morphology. Greenberg ( []) provides a number of supporting examples. For instance, in German (Germanic; Indo-European: Germany) all dative plural forms have regular -en or -n, depending on the phonological environment in which they occur, but the morphology of the dative singular form is irregular and is determined by gender and declensional class. In Classical Arabic (Semitic; Afro-Asiatic), the basic verb, as opposed to derived verbs such as the intensive and causative, shows variation in the internal vowel of the imperfect: the basic verb has multiple allomorphs, while the derived verbs each have only one. In other words, the derived intensive or causative verb stems have regular morphology while the basic verb stem has irregular morphology.
Moreover, in Arabic, Hebrew, and Aramaic (all Semitic; Afro-Asiatic: Northern Africa and the Middle East), the basic form of the verb differentiates several subtypes by means of a difference in the vowel of the second syllable in the perfect, but this distinguishing feature does not exist in the derived forms.

A similar situation can be observed in Spanish (Romance; Indo-European: Spain) tense/aspect morphology (Bybee : ). The present is unmarked as opposed to the two pasts, i.e. the preterite and the imperfective; moreover, in terms of aspect in the past tense, the preterite is unmarked while the imperfective is marked. Irregular morphology, in terms of vowel and consonant alternations in the stem, is almost exclusively confined to the present and the preterite, while the imperfective is characterized by regular morphology. For instance, a great number of verbs in the present tense have vowel alternations, e.g. cuento ‘tell (PRES.SG)’ vs contamos ‘tell


(PRES.PL)’, and a dozen or so verbs in the present tense have consonant alternations, e.g. tengo ‘take (PRES.SG)’ vs tenemos ‘take (PRES.PL)’. Furthermore, in the preterite more than a dozen verbs have stems that differ very much from the present or infinitive form, e.g. poner ‘to put’ vs puse ‘put (PRETERITE.SG)’. In the imperfective, however, the stem morphology is almost completely regular, with only one exception (i.e. ser ‘to be’, with its imperfective stem er-). The other behavioural characteristic of typological asymmetry is grammatical context: if marked values of a category occur in a number of grammatical contexts, unmarked values of that category occur in at least the same grammatical contexts, but not the other way round (hence typological asymmetry). Put differently, unmarked values may potentially be more widely distributed in grammar than marked values but marked values may never be more widely distributed in grammar than unmarked values. Consider grammatical relations, e.g. subject, direct object. It has been proposed that subject is less marked than direct object, direct object less marked than indirect object, and so on, to the effect that a markedness hierarchy of grammatical relations can be established (e.g. Keenan and Comrie ; for detailed discussion, see Chapter ): () Subject > Direct Object > Indirect Object > Oblique (where ‘>’ means ‘less marked than’) For instance, all languages are able to relativize on at least subject, the least marked grammatical relation. There are languages that can relativize on subject only (e.g. Malagasy (Oceanic; Austronesian: Madagascar)). There are also languages that can relativize on subject and direct object only (e.g. Luganda (Bantoid; Niger-Congo: Uganda)), on subject, direct object, and indirect object only (e.g. Basque (Basque: France and Spain)), and so on down the hierarchy. Conversely, if a language relativizes on indirect object, it also relativizes on subject and direct object. 
More generally speaking, if a language relativizes on a given grammatical relation on the hierarchy, it also relativizes on the grammatical relations to its left, but not necessarily on those to its right. Thus, within individual languages, less marked grammatical relations are more likely to operate in the grammatical context of relative clauses than more marked grammatical relations are. Moreover, cross-linguistically, less marked grammatical relations are likely to operate in the grammatical context of relative clauses in more languages than more marked grammatical relations are. For instance, there are more languages that


can relativize on direct object than languages that can relativize on indirect object, and more languages that can relativize on indirect object than languages that can relativize on oblique.

The same markedness hierarchy of grammatical relations is also generally known to operate in the grammatical context of agreement, typically with the verb (Moravcsik : ; Blake : –, ; Blake : ).4 Languages, especially Indo-European ones, normally have agreement with subject only, but in languages such as Swahili, the verb agrees not only with subject but also with direct object; in languages such as Georgian, the verb agrees with subject, direct object, and indirect object; and a few languages are known to have agreement not only with subject, direct object, and indirect object but also with oblique (e.g. Pintupi-Luritja, where agreement expressions appear as enclitics to the first constituent of the clause, be it a verb or anything else).

() Spanish (Romance; Indo-European: Spain)
   El  hombre ama          a   la  mujer
   the man    love.SG.PRES OBJ the woman
   ‘The man loves the woman.’

()

Swahili (Bantoid; Niger-Congo: Tanzania) Ali a-na-m-penda m-wanamke m-rembo Ali SG.PRES-SG-love M-woman M-beautiful ‘Ali loves a beautiful woman.’

()

Georgian (Kartvelian: Georgia) Rezo-m gačuka samajuri šen Rezo-ERG you.gave.SG.it bracelet-NOM you-DAT ‘Rezo gave you a bracelet.’

()

Pintupi-Luritja (Western Pama-Nyungan; Pama-Nyungan: Australia) ngurra-wana-Ø=tjananya=pula ngarama minyma camp-along-NOM-PL.COM-DU.SBJ stood women pini-ngka many-COM ‘Those two stood in the camp with the women.’

4 There are languages whose agreement systems may not be fully explained by means of grammatical relations alone. For instance, in some languages, agreement is determined by reference to person (e.g. first and second vs third person) (Blake : ). Moreover, there are cases where agreement requires reference not only to grammatical relations but also to case (Corbett : –). For further discussion, see Chapter .




For instance, in () the verb agrees with subject (as indicated by the prefix a-) as well as with direct object (as indicated by the prefix m-); in () the two agreement expressions—one for subject (i.e. =pula), and the other for oblique/comitative (i.e. =tjananya)—are cliticized to the end of the first constituent of the clause, in this case, the locative expression, which hosts its own oblique suffix -wana. To wit, if there is agreement with oblique, there is also agreement with subject, direct object, and indirect object; if there is agreement with indirect object, there is also agreement with subject and direct object; and if there is agreement with direct object, there is also agreement with subject. The converse is not necessarily the case, e.g. agreement with indirect object but not with subject. Thus, less marked grammatical relations, e.g. subject, are more likely to operate in the grammatical context of agreement (typically, with the verb) than more marked grammatical relations, e.g. indirect object. Moreover, less marked grammatical relations occur in at least as many sentence patterns as more marked grammatical relations do. By sentence pattern is meant here the type of sentence based on the transitivity of the verb, namely intransitive (e.g. The man chuckled), transitive (e.g. The man kicked the ball), and ditransitive (e.g. The man gave the book to the woman). Generally speaking, subject occurs in all three sentence patterns, direct object in the transitive and ditransitive patterns only, and indirect object in the ditransitive pattern only. Thus, subject occurs with or without direct object or indirect object, but direct or indirect object occurs with subject; direct object occurs with or without indirect object, but indirect object occurs with direct object. 
To put it differently, indirect object presupposes the presence of subject and direct object, direct object presupposes the presence of subject (but not necessarily the presence of indirect object), but subject does not presuppose the presence of either direct or indirect object (e.g. The man chuckled).

Hawkins (: –) attempts to explain this particular example of typological asymmetry by demonstrating that the lower grammatical relations are located on the hierarchy in (), the more structural information they require to be structurally integrated into the overall sentence structure. For instance, for direct object to be structurally integrated into the sentence structure, structural information pertaining not only to that grammatical relation but also to subject is required; for indirect object to be structurally integrated, structural information relating not only to that grammatical relation but also to subject and direct object is necessary.


The structural integration of subject into the sentence structure, however, only requires information pertaining to that grammatical relation.5 Thus, the markedness hierarchy of grammatical relations in () is manifested in different grammatical contexts, e.g. relativization, agreement, and sentence patterns, and less marked grammatical relations occur in at least as many grammatical contexts as more marked grammatical relations do, but not necessarily the converse. To wit, typological asymmetry can also be observed in the manner in which the different grammatical relations do or do not play a crucial role in and across different grammatical contexts.

7.3 Economy and iconicity (in competition)

Various instances of typological asymmetry can be, and indeed have been, ‘explained’ with reference to the distinction between unmarked and marked: one value of a given category is unmarked and the other value of that category marked. Unmarked values are thought to be normal, natural, familiar, typical, or expected, while marked values are abnormal, unnatural, unfamiliar, atypical, or unexpected. This difference between unmarked and marked values, in turn, is reflected in typological asymmetry, whether in formal coding or in grammatical behaviour (Bybee : ).

Useful though the distinction between unmarked and marked may be, explanations based on the concept of markedness themselves call for further, ‘deeper’ explanations, because the question inevitably arises as to, for instance, what makes unmarked values normal, natural, or expected in the first place. Consider the markedness contrast between oral and nasal vowels, discussed in §.. What makes oral vowels normal or natural when nasal vowels are not? What exactly does it mean to say that oral vowels are normal or natural? Not surprisingly, ‘deeper’ explanations for various contrasts between unmarked and marked in phonology are likely to be found in articulatory and/or acoustic phonetics, i.e. the physics of speech (Croft : ).

5 The oblique relation is rather complicated because it may occur with the subject only (i.e. intransitive, e.g. Mary danced in the park) and also because in many languages the oblique is not structurally different from the indirect object in that they both have the same surface syntax, e.g. to John, as in Mary gave the book to John, where the indirect object is marked by the same oblique preposition to, as in e.g. John walked to the station. For further discussion, see Hawkins (: –).




The contrast between oral and nasal vowels may indeed be explained in terms of articulatory gestures and/or acoustic properties, e.g. the production of nasal vowels involving more articulatory gestures than that of oral vowels (Greenberg  []: ).6 However, there may be motivating factors other than articulatory ease/difficulty behind some of the contrasts between unmarked and marked in phonology. For instance, Maddieson (: ) provides cross-linguistic data revealing that % of the  languages with a voicing contrast in fricatives in his sample of  languages also have a voicing contrast in plosives. This points to a strong tendency for a voicing contrast in fricatives to occur in conjunction with a voicing contrast in plosives—to the effect that if a language has a voicing contrast in fricatives, it is highly likely that it also has a voicing contrast in plosives. While the comparative rarity of voiced fricatives can perhaps be explained by the fact that fricatives are physically more difficult to produce with the vocal cords vibrating (i.e. voiced) than with the vocal cords open (i.e. voiceless), a different, non-aerodynamic type of explanation may be needed for the tendency of a voicing contrast in fricatives to co-occur with a voicing contrast in plosives. Maddieson (: ) points to economy as a possible basis for making sense of the tendency in question: it may be more economical to recombine a number of gestures or features in different consonants than to have a unique set of gestures for every consonant. This way, the number of distinct motor and perceptual patterns that must be mastered by a speaker can be reduced (Lindblom , ).
The idea goes back to the American linguist George Zipf (–), who highlighted the tendency in language to give reduced expression to what is familiar or predictable (Zipf ; Haiman ; cf. Horn ). This tendency is thought to be economically motivated in the sense that what is familiar or predictable does not need to be coded—if coded at all—as much as what is unfamiliar or unpredictable. It will indeed be a waste of energy to give full expression to the familiar

6 Another example of articulatory difficulty may be the obligatory appearance of unmarked /t/ in the syllable-final position in languages such as German. This is said to be due to obstruent voicing being physically inhibited in the syllable-final position (for detailed discussion, see Blevins : –).




or predictable. Conversely, if reduced expression were given to the unfamiliar or unpredictable, there might be a great deal of difficulty, if not a risk of miscommunication, in understanding what is being communicated.

For instance, consider the lack of overt number coding, as demonstrated for languages such as Pirahã () and Chrau (). In these languages, both the singular and the plural forms remain the same, without any overt number coding. Since overt number coding is dispensed with altogether, whether a given noun form is understood as singular or plural has to be determined on the basis of context alone (e.g.   vs  , where the preceding numerals indicate whether the singular or the plural is involved). In other words, the lack of overt number coding is economically motivated in the sense that no overt expression (of number) is obligatory. In Bayso (), in contrast, both the singular and the plural are overtly coded for their number values, whether unmarked or not (e.g. – vs –). Since expressing the number values—or anything, for that matter—overtly is costly, we can say that the number coding in Bayso is not economically motivated at all.

This overt number coding in Bayso and similar languages points to the existence of one other principle or factor that has frequently been invoked, along with economy, when accounting for typological asymmetry, namely iconicity (see Haiman  for a classic study of iconicity). Iconicity captures the fundamental principle in communication that the structure of language reflects the structure of experience or, more specifically, the structure of the world as experienced by the human mind. The fact that the meaning of singularity and the meaning of plurality are coded overtly by the singular morpheme and the plural morpheme, respectively, is a good illustration of iconicity. Whether there is a single instance of X (i.e. singularity) or multiple instances of X (i.e.
plurality) is something that the human mind recognizes. These two meanings, as experienced by the human mind, are matched formally by the two distinct number morphemes in languages such as Bayso. In other words, there is a one-to-one correspondence between meaning and form.

A similar example of iconicity comes from the typological asymmetry in the correspondence between the meanings of the positive, comparative, and superlative forms of adjectives and the gradual increase in their amount of overt coding. As discussed in §., the positive meaning is basic, the comparative has more meaning than the positive, and the superlative has more meaning than the


positive or the comparative. This increasing quantity of meaning is mirrored iconically by the increasing quantity of form: the positive is coded the least (typically zero coding), the superlative the most, and the comparative in between.

Iconicity can also be invoked to explain the asymmetric relationship between the basic and the derived causative verb, discussed in §.: the causative verb invariably has an overt expression of causation, while the basic verb lacks any overt coding for the absence of causation. This typological asymmetry can be seen to be iconically motivated in the sense that the causative meaning, expressed by the derived verb, is more complex than the non-causative meaning, expressed by the basic verb. This additional complexity of meaning is reflected iconically by the presence of the causative morpheme suffixed to the basic verb (e.g. Turkish koş- ‘run’ vs koş-tur- ‘cause to run’ (run-CAUS-)).7

While the foregoing examples suggest that economy and iconicity may play a role in language structure independently of each other, they can also come into competition with each other. For instance, recall those languages that do not code the singular overtly while coding the plural overtly, e.g. Turkish (). Number coding in these languages can be said to be both economically and iconically motivated: the meaning of singularity is inferred from context (i.e. economy), while the meaning of plurality is explicitly expressed by

7 Another classic example of iconicity comes from the way different causative types are used to express different causative meanings. For instance, when the causer physically manipulates the causee, e.g. feeding someone by force, the causative type chosen is the one in which the expression of cause and the expression of effect are physically adjacent to each other, e.g. in a single word (i.e. the so-called lexical or morphological causative type). When no physical manipulation is involved in the causer’s action, e.g. the causer verbally instructing the causee to eat, the causative type used is the one in which the expression of cause and the expression of effect appear as separate words (i.e. the so-called syntactic causative type). This contrast, illustrated by Mixtec in (), is iconically motivated in that the structure of language reflects the structure of experience. The causative in (a) expresses manipulative action on the part of the causer, while the causative in (b) does not. This difference is iconically reflected by the different causative types used, i.e. the morphological causative in (a) and the syntactic causative in (b) (see Haiman : –).

() Mixtec (Mixtecan; Oto-Manguean: Mexico)
a. s-kée
   CAUS-eat
   ‘Feed him.’
b. sáʔà   hà    nà   kee
   cause  NOMN  OPT  eat
   ‘Make him eat.’




means of the plural morpheme (i.e. iconicity). Thus, economy and iconicity come into conflict with each other, and this conflict is resolved in favour of economy in the case of the singular (zero coding), and in favour of iconicity in the case of the plural (non-zero coding). Put differently, economy and iconicity are partially, not fully, satisfied in the number coding of languages such as Turkish.

In fact, this concept of conflict resolution may be extended to the case of overt singular–plural coding in languages such as Bayso, which can now be seen as an instance in which the competition between economy and iconicity is resolved completely in favour of iconicity, and also to the case of zero singular–plural number marking in languages such as Pirahã, which can be regarded as an instance in which the competition is resolved completely in favour of economy. The conflict between iconicity and economy in the case of number coding, and the way it is resolved, whether partially or completely, can be summarized in ():

                                            Iconicity   Economy
a. Singular–Zero & Plural–Non-Zero:         Yes [p]     Yes [p]
b. Singular–Non-Zero & Plural–Non-Zero:     Yes         No
c. Singular–Zero & Plural–Zero:             No          Yes
Note: p = partially
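The resolution pattern just tabulated can be restated as a small predicate over coding patterns. The following sketch is ours, not the author's; the function name and the 'yes'/'partial'/'no' labels are assumptions made for illustration only:

```python
def principles_satisfied(singular_coded, plural_coded):
    """Which of iconicity and economy does a number-coding pattern satisfy?

    Iconicity: extra meaning (plural) should receive extra form.
    Economy: meaning that can be inferred should be zero-coded.
    Returns an (iconicity, economy) pair of 'yes', 'partial', or 'no'.
    """
    if not singular_coded and plural_coded:
        # Turkish-type: singular zero, plural overt.
        return ("partial", "partial")
    if singular_coded and plural_coded:
        # Bayso-type: both overt -- fully iconic, uneconomical.
        return ("yes", "no")
    if not singular_coded and not plural_coded:
        # Piraha-type: both zero -- fully economical, non-iconic.
        return ("no", "yes")
    # Unattested pattern: singular overt, plural zero. Also partially
    # satisfies both principles, 'albeit in a different way'.
    return ("partial", "partial")

turkish = principles_satisfied(singular_coded=False, plural_coded=True)
bayso = principles_satisfied(singular_coded=True, plural_coded=True)
piraha = principles_satisfied(singular_coded=False, plural_coded=False)
unattested = principles_satisfied(singular_coded=True, plural_coded=False)
```

Note that the attested Turkish-type pattern and the unattested singular-overt/plural-zero pattern come out identical, which is precisely the puzzle raised below: economy and iconicity alone cannot explain why only the former is found.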

Based on the typological asymmetry in (), it has been proposed, as mentioned earlier, that if a language codes the singular overtly, it also codes the plural overtly, but not necessarily the converse. Thus, the question arises as to why iconicity and economy are resolved in such a way that we have an attestation of (a), but not of the logically possible permutation of non-zero coding of the singular and zero coding of the plural (which would also be an instance of economy and iconicity being partially satisfied, albeit in a different way). Greenberg ( []) provides an initial insight into this question, as will be explored in the next section.

7.4 Typological asymmetry = frequency asymmetry?: iconicity vs frequency

One of the major conclusions that Greenberg ( []) arrives at when discussing many examples of the contrast between unmarked and marked is that unmarked values occur in languages more frequently than marked values do. So much so that ‘marked simply means


definitionally less frequent and unmarked means more frequent’ (Greenberg  []: ), or ‘unmarked is merely the name that we give to the category which exhibits . . . higher frequency [than marked]’ (Greenberg  []: ).

Indeed, Greenberg ( []) provides a decent amount of statistical data from individual languages to demonstrate that the unmarked value of a category is attested more frequently than the marked value of that category. For instance, he ( []: ) reports, based on Ferguson and Chowdhury (), that in Bengali the ratio of oral to nasal vowels was  to . Similarly, the unmarked vs marked values of number, i.e. the plural more marked than the singular, and the dual more marked than the plural, display the expected relative frequencies, i.e. the singular more frequent than the plural, and the plural more frequent than the dual (where the dual value also exists), as in () (Greenberg  []: ):

()               SINGULAR   PLURAL   DUAL
    Sanskrit ,    .%         .%       .%
    Latin ,       .%         .%
    Russian ,     .%         .%
    French ,      .%         .%

The markedness relationship between the positive, comparative, and superlative forms of adjectives also exhibits the expected pattern of frequencies (Greenberg  []: ):8

()             POSITIVE   COMPARATIVE   SUPERLATIVE
    Latin      .%         .%            .%
    Russian    .%         .%            .%

These data clearly point to a correlation between markedness and frequency. Notably, however, Greenberg ( []) does not dispense with the concept of markedness altogether in favour of frequency (see below as to why).
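Greenberg-style relative frequencies are simple to compute once tokens are tagged for number. Below is a minimal sketch; the tagged mini-sample and the SG/PL tag names are hypothetical, not drawn from the corpora Greenberg used:

```python
from collections import Counter

def number_frequencies(tagged_tokens):
    """Percentage share of each number value among number-tagged tokens."""
    counts = Counter(tag for _word, tag in tagged_tokens)
    total = sum(counts.values())
    return {tag: 100 * n / total for tag, n in counts.items()}

# Hypothetical mini-sample; a real count would use a large text corpus.
sample = [
    ("dog", "SG"), ("cats", "PL"), ("tree", "SG"), ("house", "SG"),
    ("birds", "PL"), ("stone", "SG"), ("river", "SG"), ("hands", "PL"),
    ("road", "SG"), ("book", "SG"),
]

freqs = number_frequencies(sample)
# On this toy sample: {'SG': 70.0, 'PL': 30.0} -- the singular outnumbers
# the plural, as Greenberg's counts lead us to expect.
```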
Haspelmath (), in contrast, argues that the concept of markedness is superfluous and should be abandoned in favour of frequency, or accurately frequency of use, as the basis for the wide range of phenomena that have been identified and investigated under the purview of 8 The difference in frequency between the comparative and the superlative in () is probably statistically non-significant, although this is not something Greenberg ( []) was concerned about.




markedness. Haspelmath () is also of the view that iconicity—unlike economy—does not have a place in explaining typological asymmetry, which can also be explained (better) in terms of frequency—or articulatory difficulty when typological asymmetry concerns phonology. In particular, Haspelmath invokes the Zipfian idea of economy (aka the principle of least effort): ‘[t]he more predictable the sign is, the shorter it is’. Since predictability is entailed by frequency (i.e. X is predictable because it occurs frequently enough), he infers that ‘[t]he more frequent the sign is, the shorter it is’ (cf. Zipf’s (: ) classic dictum: ‘High frequency is the cause of small magnitude’).9

For instance, in Turkish (), Imbabura Quechua (), and Niuean (), it is the plural that is overtly coded while the singular appears without any overt number coding. According to Haspelmath (, ), this asymmetrical coding is to be expected because it is the singular that is used more frequently than the plural (as supported by the statistical data in ()). The more frequently occurring (= more predictable) singular requires less formal coding—in this particular case, zero coding—than the less frequently occurring (= less predictable) plural, which is overtly coded.

Frequency of use can also explain the differential quantity of coding in the positive, comparative, and superlative forms of adjectives. As can be seen from (), the positive form of adjectives is more frequent than the comparative or superlative form, and the comparative, in turn, is more frequent than the superlative. Haspelmath argues that there is no point in calling the positive least marked and the superlative most marked—with the comparative in between—when the difference in formal coding between them is economically motivated by frequency of use. Thus, the concept of markedness is claimed to be otiose, as frequency alone can explain the coding asymmetry in question.
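Zipf's frequency–length relationship is easy to check on any word-frequency list. The sketch below uses a made-up toy list (the counts are purely illustrative, not from a real corpus) and computes a Pearson correlation between word length and log frequency, which the Zipfian account predicts to be negative:

```python
from math import log

def pearson(xs, ys):
    """Pearson correlation coefficient, standard library only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Toy frequency list: frequent signs short, rare signs long (illustrative).
word_counts = {
    "the": 1200, "of": 900, "and": 850, "to": 800, "in": 700,
    "language": 40, "structure": 25, "typology": 12,
    "predictability": 4, "grammaticalization": 2,
}

lengths = [len(w) for w in word_counts]
log_freqs = [log(c) for c in word_counts.values()]
r = pearson(lengths, log_freqs)
# r comes out negative: the more frequent the sign, the shorter it is.
```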
This particular example also raises the question as to whether iconicity, too, is superfluous. Recall that iconicity is invoked to explain the coding asymmetry displayed in the positive, comparative, and superlative forms of adjectives:

9 If we used short forms for infrequently occurring signs, and long forms for frequently occurring signs (i.e. in an anti-Zipfian manner), we would be using our limited time and effort uneconomically. In other words, we would be spending more time and effort on speaking instead of on something else (e.g. finding or producing food, among other things). It is as if we were biologically programmed to use our time and effort economically, because life has a time limit and we have a limited amount of energy. Thus, we do not waste our limited resources in speaking, with the result that the more frequently a sign is used, the shorter it becomes.




the positive meaning is basic, the comparative has more meaning than the positive, and the superlative has more meaning than the positive or the comparative. This increasing quantity of meaning is claimed to be iconically reflected in the formal asymmetry in question. Again, Haspelmath (: –) argues that there is no need to appeal to iconicity here because economy alone explains the formal asymmetry satisfactorily: the more frequent the sign is, the shorter it is. Indeed, the positive form of adjectives, being the most frequent of the three, is zero-coded (e.g. Hungarian jó ‘good’), and the superlative, being the least frequent, has a greater quantity of coding than the positive or the comparative (e.g. Hungarian leg-jobb ‘best’). As predicted, the comparative form, less frequent than the positive but more frequent than the superlative, is in between the positive and the superlative in terms of coding quantity (e.g. Hungarian jobb ‘better’).

More seriously, as Haspelmath (, ) points out, markedness or iconicity makes ‘wrong predictions’, i.e. so-called ‘counter-iconic’ examples (unmarked values non-zero-coded and marked values zero-coded), while frequency of use makes correct predictions. The typological asymmetry between the basic verb and its causative counterpart is a case in point. In the majority of the world’s languages, the causative verb invariably has an overt expression of causation, while the basic verb lacks any overt coding for the absence of causation. The concept of markedness has been used to explain this typological asymmetry by identifying the basic verb as unmarked and the causative verb as marked. Moreover, this typological asymmetry has been thought to be iconically motivated in that the causative verb has a more complex meaning than the basic verb (namely, the former embodies the concept of causation, which, in turn, entails the existence of the causer).
There are, however, languages where the opposite or counter-iconic situation occurs, that is, the causative is zero-coded while the non-causative verb is non-zero-coded, as exemplified in ().

()
            Causative verbs             Non-causative verbs
Japanese:   war-    ‘(X) break (Y)’     war-e    ‘(Y) break’
Swahili:    vunja-  ‘(X) break (Y)’     vunj-ika ‘(Y) break’

In (), it is the causative verb that lacks any overt coding (for the meaning of causation) in comparison with the non-causative (aka anticausative) verb, which is overtly coded (i.e. -e in Japanese (isolate: 


Japan) and -ika in Swahili (Bantoid; Niger-Congo: Tanzania)). This situation is problematic for the concept of markedness: the unmarked verb form has overt coding, while the marked verb form lacks overt coding. Moreover, the situation runs counter to iconicity in that what is thought to have a complex meaning (i.e. the causative) is zero-coded while what is thought to lack such a complex meaning (i.e. the non-causative) is non-zero-coded. To make matters worse, for other causative–non-causative pairs, as predicted by markedness or iconicity, the causative verb has overt coding while the non-causative verb lacks overt coding:

()
            Causative verbs               Non-causative verbs
Japanese:   koor-ase  ‘(X) freeze (Y)’    koor-   ‘(Y) freeze’
Swahili:    gand-isha ‘(X) freeze (Y)’    ganda-  ‘(Y) freeze’

For Haspelmath (), however, the counter-iconic situation, illustrated in (), does not pose a problem, because it can be explained by frequency of use without any difficulty. Moreover, frequency of use can also take care of the iconic situation, illustrated in (). The reason why both the iconic and the non-iconic situation coexist is, according to Haspelmath (: ), that some events, e.g. ‘freezing’, ‘drying’, ‘melting’, etc., ‘do not often require input from an agent to occur, whereas [other events, e.g. ‘breaking’, ‘splitting’, ‘opening’, etc.] tend not to occur spontaneously but must be instigated by an agent’ (also see Haspelmath ; Haspelmath et al. ). For instance, the door normally does not break by itself, while water freezes spontaneously (i.e. if and when the temperature is low enough). In so far as ‘breaking’ is concerned, the presence of an instigating agent is more frequent or likely than the absence of an instigating agent. In contrast, in the case of ‘freezing’, the converse is the case: the absence of an instigating agent is more frequent or likely than the presence of an instigating agent. Thus, Haspelmath argues that events such as freezing, drying, or melting tend to occur more frequently as non-zero-coded causatives (together with zero-coded non-causatives) while events such as breaking, splitting, or opening tend to occur more frequently as non-zero-coded non-causatives (together with zero-coded causatives). To wit, formal coding is based on frequency of use: the more frequent X is, the less coding X requires, or conversely, the less frequent X is, the more coding X requires. This is summarized in ().




()

=

FREQUENCY ASYMMETRY?

   (= )

   (= -)

 coding

more frequent zero

less frequent non-zero

 coding

less frequent non-zero

more frequent zero
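The frequency-based account summarized above amounts to a one-line rule: zero-code the more frequent member of the pair. A hypothetical sketch follows (the usage counts are invented for illustration; this is not Haspelmath's actual procedure):

```python
def predict_coding(causative_uses, noncausative_uses):
    """Economy-by-frequency rule: the more frequent member of a
    causative/non-causative pair is zero-coded; the less frequent
    member receives overt (non-zero) coding."""
    if causative_uses > noncausative_uses:
        return {"causative": "zero", "non-causative": "non-zero"}
    return {"causative": "non-zero", "non-causative": "zero"}

# 'break'-type events are mostly agent-instigated: causative uses dominate.
break_coding = predict_coding(causative_uses=90, noncausative_uses=10)
# -> causative zero-coded (cf. Swahili vunja-), non-causative overt (vunj-ika)

# 'freeze'-type events are mostly spontaneous: non-causative uses dominate.
freeze_coding = predict_coding(causative_uses=10, noncausative_uses=90)
# -> causative overt (gand-isha), non-causative zero-coded (ganda-)
```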

Based on this reasoning, it can be predicted (i) that the non-causative verb meaning ‘(the door) break(s)’ will be non-zero-coded (e.g. Swahili vunj-ika) while the causative verb meaning ‘(someone) break(s the door)’ will be zero-coded (e.g. Swahili vunja-); and (ii) that the causative verb meaning ‘(someone) freeze(s water)’ will be non-zero-coded (e.g. Swahili gand-isha) while the non-causative verb meaning ‘(water) freeze(s)’ will be zero-coded (e.g. Swahili ganda-). Further statistical evidence along these lines is provided by Haspelmath et al. () on the basis of corpus data from seven languages.

Recall from §. that more regular (or less irregular) morphology is attested in the marked value, as opposed to more irregular (or less regular) morphology in the unmarked value. ‘It may seem counterintuitive that unmarked [, not marked, values] have more [morphological] irregularity’ (Bybee : ), when ‘unmarked’ is interpreted to mean normal or natural, and ‘marked’ abnormal or unnatural. What is normal or natural is not expected to be irregular—probably because normal is assumed to be regular—and thus the correlation between unmarked and morphological irregularity (or between marked and morphological regularity) is a conundrum for the concept of markedness. Following Bybee’s () work, however, Haspelmath (, ) explains that this pattern, too, is economically motivated: frequently occurring values can tolerate irregular morphology, because their high frequency makes irregular forms easy to store in, and retrieve from, memory, whereas infrequently occurring values cannot, because their low frequency makes irregular forms difficult to remember in the first place. To wit, high frequency


of use renders ‘the mental representation of unmarked forms . . . strong and accessible, making them unlikely to change’ (Bybee : ). In contrast, marked forms, being infrequently used, are susceptible to change, in particular analogical regularization, which reduces or eliminates morphological irregularity. In English, for example, frequently used verbs, e.g. sleep /sli:p/, retain their irregular past tense forms, e.g. slept /slɛpt/, while the irregular past tense forms of infrequently used verbs, e.g. leapt /lɛpt/, have nearly lost ground to their regularly formed competitors, e.g. leaped /li:pt/, based on the basic stems, e.g. leap /li:p/.

Frequency of use or, more generally, economy thus seems to account for a range of typological asymmetries and related matters. As Haspelmath () strongly argues, then, the concept of markedness is superfluous and should be abandoned in favour of frequency of use or economy. More or less the same can be said of iconicity, in that many claimed ‘core’ cases of iconicity can likewise be explained in terms of frequency of use or economy (Haspelmath ).

Before we accept Haspelmath’s conclusions wholeheartedly, however, there are a few things that require our attention. First, although he provides a good amount of data, Haspelmath’s (, ) claims against markedness or iconicity and in favour of frequency of use need to be supported by considerably more data from considerably more languages. In point of fact, Haspelmath (: ) follows ‘the practice of Haiman () (and much other work) of making claims about universal asymmetries that are not fully backed up by confirming data’. This may seem a reasonable or acceptable approach in the present state of our knowledge, but because what is being tested is none other than frequency of use, much more corpus data are required to support Haspelmath’s claims. Corpus data are required because frequency counts need to be carried out in order to test the claims, and data from a wide range of languages are required because Haspelmath’s claims are being tested for their cross-linguistic validity. Admittedly, such a large amount of corpus data may be hard to come by, because access to corpus data in a wide range, or a large number, of languages—especially understudied or undocumented languages—is highly unlikely to be possible. Nonetheless, in order to test frequency of use, there does not seem to be any other way.

Second, Bybee (: ) raises the question: ‘Are frequency and economy the same thing?’ It seems that Bybee (: –) prefers to treat them separately, because some have interpreted the concept of


economy as ‘invok[ing] an unfortunate teleology that makes change seem goal-directed’, in the sense that economy drives language change from less to more economically motivated, while frequency of use does not invoke such goal-directedness. Haspelmath (, ), in contrast, seems somewhat ambivalent on this question, as he sometimes subsumes frequency of use under economy and at other times keeps them apart. Of course, whether frequency of use can be subsumed under economy may depend on how economy is defined—frequency of use, in contrast, is straightforward and easy to measure. Moreover, articulatory difficulty, which Haspelmath () considers separately from frequency of use, may also be subsumed under economy, depending on one’s definition of economy. For instance, it requires less articulatory effort to produce voiceless than voiced obstruents in syllable-final position. This is clearly related to the concept of economy or the principle of least effort (see n. ). At the same time, the fact that voiceless, not voiced, obstruents occur in syllable-final position may entail that voiceless obstruents occur in more phonetic environments than voiced obstruents do. This, in turn, may suggest that voiceless obstruents are more frequent than their voiced counterparts. It would be useful, then, to establish exactly what the relationship between economy and frequency of use is.

Third, one of Haspelmath’s general claims, based on Bybee’s () work, is that the use of regular morphology in the marked value, as opposed to irregular morphology in the unmarked value, is also economically motivated. Hence there is no need to invoke the concept of markedness. Morphological regularity is achieved by way of regularization of erstwhile irregular forms, typically by analogy (Bybee : ). But this raises the question of why marked values—just as unmarked values—exhibited irregular morphology in the first place, albeit subsequently levelled to regular morphology by analogy. What motivates marked values to have irregular morphology originally? To answer questions like this, we may still need to appeal to something else, e.g. iconicity—morphological distinctions reflecting conceptual distinctions.

Fourth, Haspelmath (: ; : ) laments the fact that ‘most linguists have shown little interest in explaining structure in terms of [frequency of] use’ or that ‘many linguists have lost sight of [the principle of economy] for a few decades[, Zipf’s () important work notwithstanding]’. Why have linguists neglected, if not ignored, frequency of use or economy when explaining structural properties, including


typological asymmetry? Recall that Greenberg ( []), unlike Haspelmath (), stops short of replacing the concept of markedness with frequency of use, although he provides a decent number of frequency counts to support his explanations for various instances of the unmarked–marked distinction: unmarked values being more frequent than marked values. Greenberg ( []: ) has a good reason for not allowing frequency to take the place of markedness, when he says that ‘frequency is itself but a symptom and the consistent relative frequency relations which appear to hold for lexical items and grammatical categories are themselves in need of explanation’ (emphasis added). In other words, Greenberg ( []) clearly does not regard the role that frequency of use plays as a causal one. It is one thing to claim that frequency accounts for such and such typological asymmetries; it is quite another to take frequency itself as what is ultimately responsible for their emergence or existence. For something to be frequent or infrequent, there must be something else at work, at a deeper level.

This is not to deny that frequency of use may facilitate the emergence of typological asymmetry to a certain degree. That is, if X causes Y to be frequent, and frequent enough, the form of Y may become shorter. Moreover, as X causes Y to be used more and more frequently, the already shortened form of Y may become even shorter, as Zipf observed insightfully many decades ago. Thus, frequency itself may be seen to contribute to the shortening of the form of Y, but it is ultimately X that caused Y to be used frequently, and frequently enough, in the first place. In other words, frequency may play a facilitative, not causal, role.

To give a similar (non-linguistic) example, people who commit crimes frequently are likely to be incarcerated, so we can say that the more crimes one commits, the more likely one is to spend time in prison. Does this mean that the frequency of crimes is itself the cause of incarceration? Probably not. We are more inclined to say that committing crimes is the cause of incarceration. Moreover, we may go even further and try to ascertain why one commits crimes in the first place. There may be other, deeper causes of crime, e.g. poverty, poor parenting, bad peer pressure, or even genetic disposition. Whatever the cause or causes of crime may be, the frequency of crime certainly is not the cause of crime. Nevertheless, the more crimes one commits, the easier one may find it to commit a crime, and one is thus likely to commit more crimes. In this case, frequency plays a facilitative role, but it does not play a causal one. This facilitative role, however,


may be (mis)interpreted as a causal one, because the likelihood of incarceration seems to be strongly related to, or associated with, the frequency of crimes. Haspelmath (: ) seems to be aware of something close to the facilitative role of frequency when he remarks that ‘in some cases frequency of use explains conceptual or cognitive ease [i.e. iconicity], [and] in other cases it is the other way round’. What this suggests is that in those cases where ‘frequency of use explains conceptual or cognitive ease’, the role that frequency of use plays may well be a facilitative one, which may look as if it were a causal one.

Haspelmath () is correct in saying that the concept of markedness has been used in too many different senses. For that reason alone, the concept may indeed have lost its value for scientific purposes (Bybee : ). Haspelmath is also correct in recommending that markedness be replaced by ‘other concepts and terms that are less ambiguous, more transparent and provide better explanations for the observed phenomena’ (e.g. conceptual, cognitive, articulatory ease) (: ). However, frequency cannot be one of those concepts and terms, because it is merely a symptom of motivating or causal factors (Greenberg  []: ; Croft : , – on the role of cognitive salience or expectedness, rather than frequency, in motivating formal coding).

7.5 Concluding remarks

Typological asymmetry, whether in formal coding or grammatical behaviour, has typically been explained in terms of the concept of markedness. However, markedness has been used in so many different senses that some linguists argue that it has lost its theoretical value. Typological asymmetry can be accounted for in terms of the interaction between two important forces in language structure: economy and iconicity. While economy and iconicity have been of great use in understanding typological asymmetry, it has recently been questioned whether iconicity, together with markedness itself, is necessary when other, less ambiguous and more transparent concepts can provide better explanations of various instances of typological asymmetry. Frequency of use has been put forth by Haspelmath (, ) as one such concept. While frequency of use can indeed be appealed to in explaining typological asymmetry, the question needs to be raised as to whether it can actually take the place of other concepts, because


frequency is merely a symptom of something deeper or more fundamental, e.g. conceptual or cognitive ease (iconicity?).

Study questions

1. Choose one short story (some are freely available at http://www.classicshorts.com), determine the frequencies of the singular and the plural forms occurring in that text, and compare your results with Greenberg’s (as presented in () in §.). Discuss any problems or issues you may have had in doing frequency counts, and explain how you decided to resolve them.

2. Greenberg ( []: ) provides some frequency counts that show that the most frequent tense is the present, the next most frequent is the past, and the least frequent is the future:

   LANGUAGE   PRESENT   PAST   FUTURE
   Sanskrit   .%        .%     .%
   Latin      .%        .%     .%

   While these data are very much in support of the claimed markedness relationships between the three tenses, frequency counts may also depend on the genre or type of text examined. Do you agree? Why (not)?

3. Haspelmath (: ) argues that ‘frequency in texts [e.g. the singular more frequently used than the plural] has nothing to do with [the] frequency [of a given entity] in the [real] world’. Do you agree with him? Why (not)?

4. Witkowski and Brown (: ) make the following observation:

   In Tenejapa Tzeltal, a Mayan language spoken in the Mexican state of Chiapas, one [markedness reversal] has involved the relative cultural importance of native deer and introduced sheep . . . [A]t the time of the Spanish conquest, deer was designated cih. When sheep were introduced, they were equated with deer and were labelled tunim cih ‘cotton deer’ . . . Today, however, the Tenejapa term for sheep is cih while deer are labelled by the overtly [coded] expression teʔtikil cih ‘wild sheep’.

One may argue, along with Haspelmath (: ), that this instance of markedness reversal (see n. ) is based on frequency of use: at the time of the Spanish conquest, deer were more important and more commonly talked about than sheep, which had only just been introduced into the Tenejapa Tzeltal community, whereas today sheep are more important and more commonly talked about than deer. Thus, the situation described by Witkowski and Brown is to be expected, because more coding is required for the less frequent (i.e. sheep at the time of the Spanish conquest




but deer at the present time) and less coding for the more frequent (i.e. deer at the time of the Spanish conquest but sheep at the present time). However, one may take a different position by arguing that cognitive/semantic complexity (i.e. iconicity) is what underlies formal coding complexity here. That is, at the time of the Spanish conquest, the concept of deer was simple to Tenejapa Tzeltal speakers because deer were familiar animals, while the concept of the newly introduced sheep was complex to them (for instance, the concept of sheep had to be internalized on the basis of the familiar concept of deer, and whatever differences sheep had in comparison to deer had to be conceptualized); this relative cognitive/semantic complexity has now been reversed, to the effect that the concept of sheep is now simple for Tenejapa Tzeltal speakers while the concept of deer is complex for them. Which of these two positions do you agree with, and why?

Further reading

Bybee, J. (). ‘Markedness, Iconicity, Economy, and Frequency’, in J. J. Song (ed.), The Oxford Handbook of Linguistic Typology. Oxford: Oxford University Press, –.
Greenberg, J. H. ( []). Language Universals: With Special Reference to Feature Hierarchies. Berlin: Mouton de Gruyter.
Haiman, J. (). Natural Syntax: Iconicity and Erosion. Cambridge: Cambridge University Press.
Haspelmath, M. (). ‘Against Markedness (and What to Replace It with)’, Journal of Linguistics : –.
Haspelmath, M. (). ‘Frequency vs Iconicity in Explaining Grammatical Asymmetries’, Cognitive Linguistics : –.
Jakobson, R. ( []). ‘Shifters, Verbal Categories, and the Russian Verb’, in Roman Jakobson: Selected Writings, vol. . The Hague: Mouton, –.




8 Categories, prototypes, and semantic maps

8.1 Introduction
8.2 Classical vs prototype approach to categorization
8.3 Prototype category as a network of similarities
8.4 Prototype in grammar
8.5 Semantic maps: ‘geography of grammatical meaning’
8.6 Concluding remarks

8.1 Introduction

In Chapter , we discussed grammatical entities such as number, gender, and tense. These grammatical entities are commonly referred to as categories or, more specifically, grammatical categories, as they each represent a coherent set of grammatical meanings or values. For instance, consider the category of number. The distinctions in number, i.e. singular, dual, and plural, are grouped together into one single category, as they all make reference to the concept of number (i.e. the quantity of X). In other words, the singular, dual, and plural distinctions have one conceptual property in common: number. Grammatical entities that do not share this conceptual property, e.g. tense or gender, will not be placed in the category of number. A similar comment can be made of gender, tense, and other grammatical categories. While this view of categories—X conceptualized either as a member (or as a non-member) of a given category—may seem straightforward and intuitively correct, there is much empirical evidence that categories are conceptualized and their


members are organized in a rather different way, as will be discussed in this chapter.

8.2 Classical vs prototype approach to categorization

Traditionally, scholars have been interested in understanding how humans categorize things in their environment, rather than abstract, grammatical entities such as number. Thus, the focus of categorization research has been placed primarily on lexical categories, which typically involve natural entities (e.g. bird, tree, fruit) or cultural artefacts (e.g. furniture, household receptacles, vehicle). Whether X belongs to category Y is traditionally claimed to depend on whether X exhibits all the defining properties of Y. Such defining properties are regarded as individually necessary: an entity is a member of a given category if and only if it possesses each and every defining property of that category. If it lacks even one of the defining properties of the category, an entity will, by definition, not be regarded as a member of the category. What this entails is that an entity is either a member or a non-member of a category, and cannot be a bit of both. To wit, category membership is strictly an either–or matter. For instance, consider the category of HUMAN. Let’s assume, for the sake of illustration, that the two defining properties of this category are TWO-FOOTED and ANIMAL (i.e. HUMAN defined as a two-footed animal). For X to be a member of the HUMAN category (e.g. farmer, nurse, psychologist), it must exhibit the two defining properties, i.e. TWO-FOOTED and ANIMAL; otherwise, it is not a member of that category (e.g. cat, tree, computer). Related to this necessary condition on category membership is the claim that defining properties should also be jointly sufficient. Thus, if and when an entity possesses each and every defining property of a given category, that will be sufficient for the entity to be a member of that category.
For instance, the presence of TWO-FOOTED and ANIMAL will be sufficient for the entity referred to as farmer to belong to the HUMAN category. Thus, defining properties of a category determine strict category membership and demarcate clear-cut category boundaries. This is essentially the view of the so-called classical approach to human categorization. The classical approach is classical in two senses (e.g. Taylor : ): (i) it originates from the Greek philosopher Aristotle’s (– ) treatise on categories; and (ii) it had exerted


considerable influence, throughout much of the last century, on the way that categorization was understood not just in philosophy but also in psychology, anthropology, and linguistics. There are two additional, related characteristics of the classical approach to categorization. Since categories are defined in terms of necessary and sufficient conditions and have clearly demarcated boundaries, defining properties are claimed to be binary in the sense that they are ‘a matter of all or nothing’ (Taylor : ). Thus, an entity does or does not possess a given defining property. There is no middle ground. Otherwise, membership in a category could not be an either–or matter. Moreover, since every member of a given category must meet the necessary and sufficient conditions on membership, all members of that category have equal status. No members are better (or worse) instances of the category than other members. The classical (or Aristotelian) approach had been more or less the dominant view on the nature of human categorization until the 1970s, when experimental evidence (e.g. Labov ; Rosch a, b, a, b) began to emerge, calling the classical approach into serious question. In particular, the evidence strongly suggested the following:

(i) categories are not unordered sets of members—each member meeting defining properties—but rather they are organized around salient or typical members;
(ii) properties that are not necessary for category membership need to be taken into account in order to determine category membership; and
(iii) categories may not necessarily have clearly demarcated boundaries.

We will examine these in turn by using two of the most cited examples in categorization research (e.g. Labov ; Rosch b): the category of birds and the category of household receptacles.

First, in the classical approach to categorization, the BIRD category may be defined by at least three properties, namely TWO-LEGGED, FEATHERED, and ANIMAL (i.e. birds defined as two-legged, feathered animals). Note that the ability to fly is not a necessary defining property of the BIRD category, because there are birds that cannot fly, e.g. kiwis, ostriches, and penguins; nor is it a sufficient one, because there are animals that can fly but are not birds (e.g. bats, flying squirrels, flies). When asked to provide examples of the BIRD category, however, people give robins or sparrows as better representatives of the category than penguins or kiwis. In other words, the members of the BIRD category do not seem to have equal status: some birds are more bird-like than others. This, in turn, suggests that, contrary to what the classical approach claims, categories are not unordered sets of members but instead have an internal structure, with some members being typical/central instances of the category and other members being marginal/peripheral instances. In other words, category membership is not an either–or matter but a matter of gradience (i.e. degrees of membership).

Second, when people are asked to name typical examples of the BIRD category, their responses are not based on necessary defining properties. Typical examples of the BIRD category, as opposed to peripheral ones, all seem to have the ability to fly. However, the ability to fly, in the context of the classical approach, is not a defining property of the BIRD category (because, as already pointed out, there are birds that cannot fly, e.g. kiwis, ostriches, and penguins). This suggests that properties not necessary for, but typical of, category membership must (also) be taken into consideration when we investigate the way people categorize things in their environment.

Third, when asked to categorize household receptacles of different shapes and sizes, people may categorize one and the same thing as a glass or a vase, for instance, depending on what it contains, e.g. a beverage vs flowers. This situation is different from that of the members of the BIRD category, all of which—as subcategories of the BIRD category—seem to have clear-cut boundaries. For instance, one and the same animal cannot be categorized as an ostrich in one context and as a penguin in another. What this entails is that not all categories may have clearly demarcated boundaries.
Taylor (: ) suggests that many natural entities (e.g. birds) have clear-cut boundaries but cultural artefacts (e.g. household receptacles) may not necessarily do so. The existence of categories without clearly demarcated boundaries also poses a serious theoretical problem for the classical approach to categorization. What has emerged from the experimental evidence discussed above points to an alternative, psychologically valid approach to categorization, as the classical approach clearly fails to capture the way people categorize things. This alternative approach has come to be known as prototype theory—with the most salient member of a category called 


the prototype of that category. The thrust of prototype theory can be summarized as follows (e.g. Taylor ; van der Auwera and Gast ):

(i) category membership is a matter of gradience, with members ranging from typical to peripheral (i.e. categories have an internal structure, with unequal membership);
(ii) category membership is determined on the basis of typical or salient properties, not on the basis of necessary and sufficient properties; and
(iii) categories may not necessarily have clearly demarcated boundaries; that is, they may have flexible boundaries.

In prototype theory, categories are claimed to be organized around the most salient instances, known as prototypes, which ‘serve as [cognitive] reference points for not-so-clear [read: peripheral or marginal] instances’ (Taylor : ; cf. Rosch a). Prototypes are recognized not on the basis of necessary and sufficient properties; rather, typical or salient properties come to the fore in the minds of people when they categorize things in their environment. Moreover, one and the same object may be categorized as a member of one category (e.g. glass) or as a member of another category (e.g. vase), depending on the situation or context. Why, then, do some categories not have clear-cut boundaries? The answer seems to be, as Geeraerts (: ) puts it succinctly:

the categorial system can only work efficiently if it does not change drastically any time new data crop up. But at the same time, it should be flexible enough to adapt itself to changing circumstances. To prevent it from being chaotic, it should have a built-in tendency towards structural stability, but this stability should not become rigidity lest the system stops being able to adapt itself to the ever-changing circumstances of the outside world.
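The contrast between the classical and the prototype view can be sketched in code. The bird feature sets and the choice of ‘typical’ properties below are invented for illustration; nothing in the chapter commits to any particular feature inventory.

```python
# Classical vs prototype categorization, sketched with invented bird features.
# Feature sets are illustrative only, not a serious ornithological analysis.
DEFINING = {"two-legged", "feathered", "animal"}      # classical BIRD definition

BIRDS = {
    "sparrow": {"two-legged", "feathered", "animal", "flies", "small", "sings"},
    "penguin": {"two-legged", "feathered", "animal", "swims"},
    "kiwi":    {"two-legged", "feathered", "animal", "nocturnal"},
}

# Typical (but not defining) properties of the prototype, e.g. the ability to fly.
PROTOTYPE = DEFINING | {"flies", "small", "sings"}

def is_member_classical(features):
    """All-or-nothing: member iff every defining property is present."""
    return DEFINING <= features

def typicality(features):
    """Graded: proportion of the prototype's properties shared (0.0-1.0)."""
    return len(features & PROTOTYPE) / len(PROTOTYPE)

for name, feats in BIRDS.items():
    print(name, is_member_classical(feats), round(typicality(feats), 2))
```

On the classical check all three animals count equally as birds; the typicality score, by contrast, reproduces the experimental intuition that sparrows are better examples of the category than penguins or kiwis.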

8.3 Prototype category as a network of similarities

As discussed in §8.2, categories have an internal structure, with members ranging from typical/central to marginal/peripheral (i.e. degrees of membership), and moreover, categories do not necessarily have clearly demarcated boundaries. One way of making sense of these two characteristics is to invoke the concept of ‘family resemblance’. This


concept came originally from the Austrian-British philosopher Ludwig Wittgenstein (–), who commented on the German word Spiel ‘game’, which, very much like the English word game, is used to refer to a diverse range of games, including board games, card games, ball games, and Olympic games (and computer games, if Wittgenstein were alive today!). As Wittgenstein ( []: –) observes, these different games clearly do not share a set of defining properties that distinguish them from non-games. We are indeed hard pressed to come up with one common defining property that brings the many different games together into one category—except perhaps that people play them, but then people also play musical instruments. For instance, some games are played by individuals (e.g. golf) while others are played by teams (e.g. soccer); some games are played for enjoyment (e.g. patience or solitaire) while others are (also) played competitively (e.g. golf, soccer); some games are learned with little or no training (e.g. noughts and crosses, and Scrabble) while others can only be learned through years of training (e.g. Olympic games); some games require mental activity (e.g. chess, Scrabble) while others require physical activity (e.g. shot put, bras de fer ‘arm wrestling’). Moreover, properties seem to cut across different games. For instance, games, regardless of whether they require mental or physical activity, may be played competitively as well as for enjoyment, e.g. chess and tennis. Also compare soccer and golf. Soccer is played by teams while golf is played by individuals. Nonetheless, both are played competitively (i.e. with score being kept). Thus, Wittgenstein suggests that what we are dealing with here is a family of games. Within a biological family, parents and children resemble each other—in an ‘overlapping and criss-crossing’ manner—e.g. in terms of hair colour, eye colour, complexion, build, gait, and temperament.
Similarly, the different games can be thought of as a family, since they constitute a network of similarities. Wittgenstein’s metaphorical concept of family resemblance helps us understand degrees of category membership in a better light. Members of a given category do not share a set of common defining properties; rather, membership in a category is a matter of gradience, as has been discussed. This continuum-like characteristic of a category can now be interpreted in terms of family resemblance: members of a category have different degrees of similarity to each other, just as members of a biological family do. For instance, penguins are similar to other members of the ‘bird family’ in that they have plumage, beaks, and wings—albeit the last used for swimming instead of flying—and they


also lay eggs. Ostriches may be even more similar to typical birds in that they have an S-like body shape, not to mention the other avian properties, e.g. plumage, beaks, wings, and the ability to lay eggs. In other words, members of the BIRD category have different degrees of similarity to each other. Recall that the reason why sparrows are taken to be better representatives of the BIRD category than penguins is that certain properties, e.g. the ability to fly, are taken to be among the most salient ones for the category in question. Moreover, the concept of family resemblance helps us understand why one and the same thing may belong to more than one category, e.g. one and the same receptacle categorized either as a glass or as a vase. This categorial ambivalence, as it were, points to the absence of clearly demarcated boundaries for certain categories (especially in the case of cultural artefacts). Properties that are salient for one category may be peripheral properties of another category, or conversely, properties that are peripheral for one category may be salient for another. What this entails is that what may be identified as a peripheral instance of a glass may be regarded as a prototypical instance of a vase, or vice versa. We will return to family resemblance in §8.5, where the concept of semantic maps will be discussed.
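Wittgenstein’s point—no single property shared by all games, yet a chain of pairwise overlaps—can be made concrete with a toy computation. The feature sets below are invented and drastically simplified; they merely mimic the kinds of contrasts (individual vs team, mental vs physical, competitive vs for enjoyment) mentioned above.

```python
# Toy feature sets for a few games; invented for illustration only.
GAMES = {
    "chess":     {"competitive", "mental", "two-player"},
    "soccer":    {"competitive", "physical", "team"},
    "solitaire": {"mental", "solo", "for-enjoyment"},
    "golf":      {"competitive", "physical", "solo"},
}

# No single property is shared by every game: there is no classical definition.
common = set.intersection(*GAMES.values())
print(common)  # set()

# Yet each game overlaps with at least one other game: a network of family
# resemblances rather than a set of necessary and sufficient conditions.
for g, feats in GAMES.items():
    kin = sorted(h for h, f in GAMES.items() if h != g and feats & f)
    print(g, "->", kin)
```

The intersection over all four feature sets is empty, yet every game is linked to some other game by at least one shared property—an ‘overlapping and criss-crossing’ network in miniature.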

8.4 Prototype in grammar

While the initial impetus for prototype theory came from (psychological) research into the categorization of natural entities and cultural artefacts, linguists who embraced the prototype approach wasted little time in extending it not only to (more) abstract concepts, e.g. murdering, telling a lie, tallness, but also to grammatical categories, e.g. word classes, tense (Taylor : –; van der Auwera and Gast : –). Probably the best-known cross-linguistic study exploring prototype in grammar is Hopper and Thompson’s () work on transitivity (see Hopper and Thompson ; Næss ; Kittilä ). In this section, we will briefly survey this classic typological work with a view to illustrating how prototype theory can be extended to grammatical categories on a cross-linguistically and empirically robust basis. Traditionally, transitivity is understood ‘as a global property of an entire clause’ (Hopper and Thompson : ). This global property necessarily involves at least two participants, namely an agent and a patient, and the carrying-over or transferring of an activity


from an agent to a patient. For instance, (a) is a transitive clause, as the activity (killing) is transferred from an agent (the intruder) to a patient (the guard). In contrast, the intransitive clause in (b) involves no such transferring of an activity, as there is only one participant to begin with.

()
a. The intruder killed the guard.
b. The intruder cried.

By the traditional definition of transitivity, clauses such as those in () are not regarded as transitive.

()
a. The bus driver shouted at the old man.
b. The teacher resembles the actor.

Though they have two participants each, (a) and (b) involve no transferring of an activity from one participant to the other, because in (a) the old man is merely the person towards whom the bus driver’s shouting was directed (e.g. the old man may not even have heard the bus driver shout), and in (b) resemblance is not an action but a state (i.e. a physical similarity; cf. The teacher bears a resemblance to the actor). In their cross-linguistic study, Hopper and Thompson () make a clean break with the traditional view of transitivity by proposing that transitivity is not a global property but rather a cluster of properties, as listed in Table 8.1.

Table 8.1 Properties of transitivity (Hopper and Thompson : )

                          High                       Low
a. PARTICIPANTS           two or more participants   one participant
b. KINESIS                action                     non-action
c. ASPECT                 telic                      atelic
d. PUNCTUALITY            punctual                   non-punctual
e. VOLITIONALITY          volitional                 non-volitional
f. AFFIRMATION            affirmative                negative
g. MODE                   realis                     irrealis
h. AGENCY                 A high in potency          A low in potency
i. AFFECTEDNESS OF P      P totally affected         P not affected
j. INDIVIDUATION OF P     P highly individuated      P non-individuated

NB: A = Agent; P = Patient; Hopper and Thompson () use O instead of P, following Dixon ().




Note that the property in (j), individuation of P, is itself a cluster of properties, as in ().

()
Individuated             Non-individuated
proper                   common
human, animate           inanimate
concrete                 abstract
singular                 plural
count                    mass
referential, definite    non-referential

In Hopper and Thompson’s prototype analysis, the global property of transitivity is ‘deconstructed’, as it were, and conceptualized as a bundle of ten properties, ‘each [property] focusing on a different facet of [the carrying-over of an activity from an agent to a patient] in a different part of the clause’ (Hopper and Thompson : ). Moreover, the more of the properties in (a)–(j) a clause has in the high column of Table 8.1, the more transitive it is; conversely, the more of these properties a clause has in the low column, the less transitive it is. In other words, transitivity is not an either–or matter: membership in the category of transitivity is a matter of gradience; some clauses are typical instances of transitivity while other clauses are marginal instances (i.e. more or less transitive). Thus, the properties of transitivity in Table 8.1 are not regarded as necessary and sufficient, as would be the case in the classical approach to categorization. What this entails is that different languages may, as it were, pick any one or more of the properties as the most salient for transitivity; depending on the language, clauses lacking some of the properties of transitivity may still be treated as transitive, because they possess the most salient property (or properties) of transitivity. Each of the transitivity properties in Table 8.1 constitutes a scale along which clauses can be ranked or compared. (Also note that each property of transitivity in Table 8.1 is a prototypical category in its own right.) For instance, a referential, human patient, the guard in (a), will be more patient-like than a non-referential, animate patient, deer in (b):

()
a. The intruder killed the guard.
b. Mr Westcoast hunted deer for a living.
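The gradient view can be illustrated with a toy scoring function over the ten parameters of Table 8.1. The encodings of the two clauses are our own illustrative judgements (1 for a ‘high’ value, 0 for a ‘low’ value), not Hopper and Thompson’s annotations.

```python
# The ten Hopper-Thompson transitivity parameters (Table 8.1), each scored
# 1 for the 'high' value and 0 for the 'low' value of a given clause.
PARAMS = [
    "participants", "kinesis", "aspect", "punctuality", "volitionality",
    "affirmation", "mode", "agency", "affectedness_P", "individuation_P",
]

def transitivity(clause):
    """Sum of 'high' values: more highs -> more transitive (gradient, not binary)."""
    return sum(clause.get(p, 0) for p in PARAMS)

# 'The intruder killed the guard.' - high on every parameter.
killed_guard = dict.fromkeys(PARAMS, 1)

# 'Mr Westcoast hunted deer for a living.' - atelic, habitual, with a
# non-referential, unaffected P (illustrative encoding, not the authors').
hunted_deer = dict(killed_guard, aspect=0, punctuality=0,
                   affectedness_P=0, individuation_P=0)

print(transitivity(killed_guard), transitivity(hunted_deer))  # 10 6
```

The two clauses are not ‘transitive’ and ‘intransitive’ tout court; the first simply outranks the second on the composite scale, which is all the prototype analysis requires.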


Put differently, an action can be seen ‘as more effectively transferred to a patient which is individuated than to one which is not’ (Hopper and Thompson : ). Thus, (a) is more transitive than (b). Moreover, Hopper and Thompson make the claim that if two (or more) properties co-occur in the morphosyntax or semantics of a clause, they must always be on the same side of the high–low transitivity scale. For instance, if two properties, e.g. aspect and individuation of P, co-occur in the morphosyntax or semantics of a clause, P of a telic verb (i.e. denoting a completed action) must be coded as referential and/or definite, while P of an atelic verb (i.e. denoting an action partially carried out) must be non-referential and/or indefinite. The converse situation, that is, P of a telic verb is non-referential and/or indefinite, while P of an atelic verb is referential and/or definite, will not be attested in the world’s languages. Lastly, just like other prototypical categories, the grammatical category of transitivity does not have clearly demarcated boundaries. Consider the sentences in () and (), reproduced together here. ()

a. The intruder killed the guard.
b. The intruder cried.
c. The bus driver shouted at the old man.
d. The teacher resembles the actor.

One of the standard tests for transitivity in English is passivization: transitive clauses have passive counterparts while intransitive clauses lack them. This is what would be expected under the classical approach to transitivity: either a passive counterpart (= transitive) or no passive counterpart (= intransitive), with no middle ground. However, the ability to have a passive counterpart does not seem to be an either–or matter, because it depends on the degree of transitivity manifested in a given clause, regardless of whether or not the clause contains an agent and a patient. Clearly, (b) does not have a passive counterpart, as in (b), while (a) does, as in (a).

()
a. The guard was killed by the intruder.
b. *Was cried by the intruder.

But consider the clause in (c), which seemingly involves two participants. While the bus driver is an agent, the old man is not a patient. In fact, the old man is not even necessary, or technically speaking, it is an adjunct expression (i.e. optional). Thus, it is possible to leave out the 


whole phrase at the old man without causing ungrammaticality, as in (a). This contrasts with (a), which does not form a grammatical sentence without P, as in (b).

()
a. The bus driver shouted.
b. *The intruder killed.

Nonetheless, (c) can have a passive counterpart, as in ().

() The old man was shouted at by the bus driver.

Now, consider the sentence in (a), which is similar to (c) in that it also contains an adjunct phrase, at the party, the optionality of which is illustrated in (b).

()
a. The girl sang at the party.
b. The girl sang.

However, (a), unlike (c), does not have a passive counterpart, as in ().

()

*The party was sung at by the girl.

What makes it even more interesting is that the sentence in (d) cannot do without the second participant (the actor), as shown in (a), yet it does not have a passive counterpart, as shown in (b).

()
a. *The teacher resembles.
b. *The actor is resembled by the teacher.

What emerges from the foregoing discussion about the ability to have a passive counterpart is that the grammatical category of transitivity does not seem to have clear-cut boundaries: not all intransitive clauses lack passive counterparts. Moreover, some clauses, such as (d), do not have passive counterparts either, the presence of two participants notwithstanding. The reason why the passive in () is grammatical while the passive in () is not has something to do with the fact that the second participant in (), the old man, can be seen to have been affected by the other participant’s (i.e. the bus driver’s) action, as people do sometimes get (emotionally) affected by someone shouting at them. In other words, the active counterpart of (), i.e. (c), embodies a number of transitivity properties: kinesis (i.e. an action), volitionality (i.e. volitional), and, more significantly, affectedness of the non-agent (i.e. the old man affected by the bus driver’s shouting). In contrast, although it is similar to () in terms


of the other transitivity properties, e.g. kinesis and volitionality, (a) could not be lower on affectedness of P, since an event (i.e. the party) cannot be seen to have undergone any change as a consequence of the agent’s (i.e. the girl’s) activity. To make matters more interesting, some clauses may or may not have passive counterparts (i.e. be transitive or intransitive), depending on how they are interpreted. For instance, the active clause in (a) is ambiguous while its passive counterpart in (b) is not.

()
a. Jessica fought with Angelina.
b. Angelina was fought with by Jessica.

The active sentence in (a) means either (i) ‘Jessica and Angelina fought against each other’ or (ii) ‘Jessica and Angelina fought (as a team) against their common foe’. In contrast, the passive sentence in (b) has only one reading: ‘Jessica and Angelina fought against each other’. Note that it is the first, not the second, reading of (a) that has a sense of the affectedness of P (i.e. Angelina being affected by Jessica’s action, e.g. bruises); in the second reading of (a), the person against whom both Jessica and Angelina fought (i.e. the patient) is not even expressed. Note the two readings are identical in terms of whatever other transitivity properties they may possess, e.g. kinesis, aspect. But it is the presence of the affectedness of P that explains why only the first reading of (a) is retained in the passive in (b). This situation, i.e. one and the same clause having or lacking a passive counterpart, depending on its interpretation (i.e. context), is analogous to one and the same receptacle being categorized as a glass or a vase, depending on the context, as discussed in §., i.e. flexible category boundaries. Hopper and Thompson’s analysis of transitivity leads to the following conclusions: (i) membership in the category of transitivity is a matter of gradience (clauses are more or less transitive, not either transitive or non-transitive (= intransitive)); (ii) membership in the category of transitivity is determined on the basis of typical or salient properties, not on the basis of necessary and sufficient properties; and (iii) the category of transitivity has flexible, instead of clearly demarcated boundaries.




These are precisely the characteristics of a prototypical category, as discussed in §8.2 in relation to categories such as household receptacles. Clearly, transitivity is a prototypical category. In the remainder of this section, we will briefly discuss how prototype effects of transitivity manifest themselves in the world’s languages, particularly in the grammatical domains of verb morphology, case marking, and word order. In the interests of space, we will focus on three of the properties listed in Table 8.1, namely individuation of P, affectedness of P, and aspect, exemplifying how these properties are singled out by individual languages as the most salient properties of transitivity (for further discussion, see Chapter ). First, in many languages P is coded (or technically, case-marked) for its role only when it refers to a highly individuated entity, for instance one that is human/animate, referential, and/or definite. Spanish, for example, codes P with a when P is human (or at least human-like), as in ().

() Spanish (Romance; Indo-European: Spain)
a. Busco mi coche
   seek.SG my car
   ‘I’m looking for my car.’
b. Busco a mi hija
   seek.SG OBJ my daughter
   ‘I’m looking for my daughter.’

Moreover, it is not just humanness that triggers this P coding; P must also be referential.

() Spanish (Romance; Indo-European: Spain)
a. María quiere mirar un bailarín
   Maria want.SG watch a ballet.dancer
   ‘Maria wants to watch a(ny) ballet dancer.’
b. María quiere mirar a un bailarín
   Maria want.SG watch OBJ a ballet.dancer
   ‘Maria wants to watch a (specific) ballet dancer.’

Another language that behaves similarly to Spanish is Hindi, which codes P with the suffix -koo only when it is animate and definite, as opposed to inanimate or indefinite. 
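The Spanish and Hindi conditions just described can be summarized as a small decision rule. The feature names and the two-language table below are our own illustrative simplification of differential object marking, not a claim about either grammar beyond what the examples show.

```python
# Differential object marking, simplified: P is overtly case-marked only when
# it is highly individuated. Conditions follow the text; features are invented.
CONDITIONS = {
    "spanish": ("human", "referential"),   # P coded with 'a'
    "hindi":   ("animate", "definite"),    # P coded with '-koo'
}

def marks_object(language, p_features):
    """True if this (simplified) grammar overtly case-marks the P argument."""
    return all(p_features.get(f, False) for f in CONDITIONS[language])

mi_hija = {"human": True, "animate": True, "referential": True, "definite": True}
mi_coche = {"human": False, "animate": False, "referential": True, "definite": True}

print(marks_object("spanish", mi_hija))    # True  -> 'Busco a mi hija'
print(marks_object("spanish", mi_coche))   # False -> 'Busco mi coche'
```

The point of the sketch is that both languages key the same morphosyntactic decision to individuation of P, while differing in exactly which individuation features they treat as salient.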


() Hindi (Indic; Indo-European: India)
a. Machuee-nee machlii pakRii
   fisherman-ERG fish caught
   ‘The fisherman caught a fish.’
b. Machuee-nee machlii-koo pakRaa
   fisherman-ERG fish-KOO caught
   ‘The fisherman caught the fish.’

It is not just P coding that is systematically correlated with the transitivity property of individuation of P. In Hungarian, for instance, P behaves differently in terms of word order, depending on its referential status. Thus, non-referential P is placed immediately before the verb whereas referential P is positioned after the verb.

() Hungarian (Ugric; Uralic: Hungary)
a. Péter újságot olvas
   Peter paper reads
   ‘Peter is reading a newspaper.’
b. Péter olvas egy újságot
   Peter reads a paper
   ‘Peter is reading a (specific) newspaper.’

Moreover, when P is both referential and definite, its presence is indicated on the verb by the so-called objective conjugation (i.e. olvassa in () vs olvas in ()). In other words, the referential/definite status of P has a bearing on verb morphology as well.

() Hungarian (Ugric; Uralic: Hungary)
Péter olvassa az újságot
Peter reads(OBJ) the newspaper
‘Peter is reading the newspaper.’

There are a number of languages in which the property of individuation of P is manifested similarly in verb morphology. In Chukchi, non-referential P is incorporated into the verb stem (i.e. P becoming part of the verb), whereby the vowels of the incorporated P must participate in word-internal vowel harmony rules in conjunction with the host verb (e.g. kopra in (b), as opposed to kupre in (a)). Moreover, with P incorporated, the verb must take the intransitive suffix -gʔat, as in (b), in lieu of the transitive suffix (i.e. -ən in (a)).


()

Chukchi (Northern Chukotko-Kamchatkan; Kamchatkan: Russia) a. tumg-e na-ntəwat-ən kupre-n friends-ERG PL.SG-set-TR net-ABS ‘The friends set the net.’ b. tumg-ət kopra-ntəwat-gʔat friends-NOM net-set-INTR ‘The friends set nets.’

Chukotko-

Note that the agent tumg ‘friends’ is also case-marked differently, depending on whether P is incorporated into the verb or not. Thus, when P stands on its own, as in (a), the agent is coded by the ergative case suffix, as is characteristic of transitive clauses, while in (b) the same agent is coded by the ‘nominative’ case, as is typical of intransitive clauses. Somewhat similarly, in Kosraean (also known as Kusaiean), indefinite P forms a morphological unit with the verb, very much like incorporated P in Chukchi. Also note that different verb stems, albeit phonologically similar, have to be used, depending on whether P is definite or indefinite (i.e. ɔl in (a) vs owo in (b)).

() Kosraean (Oceanic; Austronesian: Micronesia)
a. nga ɔl-læ nuknuk ɛ
   I wash-COMPL clothes the
   ‘I finished washing the clothes.’
b. nga owo nuknuk læ
   I wash clothes COMPL
   ‘I finished washing clothes.’

Note that in (b) the completive aspect marker læ appears after the sequence of the verb and indefinite P, while in (a) the same aspect marker is suffixed to the verb, which, in turn, appears before definite P. This relative position of the aspect marker clearly points to the verb and indefinite P constituting a morphological unit in (b). Second, in some languages the property of affectedness of P is evident in verb morphology. For instance, in Trukese, the distinction between totally affected and partially affected P is reflected in the choice of verb stems, that is, transitive verb stems for totally affected P (e.g. wúnúmi in (a)), and intransitive verb stems for partially affected P (e.g. wún in (b)). 


() Trukese (Oceanic; Austronesian: Micronesia)
a. wúpwe wúnúmi ewe kkónik
   I.will drink the water
   'I will drink up the water.'
b. wúpwe wún ewe kkónik
   I.will drink the water
   'I will drink some of the water.'

This kind of differential treatment of totally and partially affected P is prevalent, albeit in terms of P case marking, in eastern and northeastern European languages such as Estonian (Finnic; Uralic: Estonia), Finnish (Finnic; Uralic: Finland), Hungarian (Ugric; Uralic: Hungary), Latvian (Baltic; Indo-European: Latvia), Lithuanian (Baltic; Indo-European: Lithuania), Polish (Slavic; Indo-European: Poland), and Russian (Slavic; Indo-European: Russia). In Hungarian, totally affected P is coded in the accusative case, while partially affected P is coded in the partitive case.

() Hungarian (Ugric; Uralic: Hungary)
a. olvasta a könyvet
   read.s/he.it the book-ACC
   'S/He read the book.'
b. olvasott a könyvöl
   read.s/he the book-PRTV
   'S/He read some of the book.'

A similar situation is attested in Russian, as shown in (); totally affected P and partially affected P bear the accusative and genitive case, respectively.

() Russian (Slavic; Indo-European: Russia)
a. peredajte mne xleb
   pass me bread-ACC
   'Pass me the (whole) bread.'
b. peredajte mne xleba
   pass me bread-GEN
   'Pass me some (of the) bread.'

Finally, in many languages aspect also shows a strong correlation with the transitivity of the verb. For instance, in Finnish whether a clause is interpreted as perfective (i.e. action viewed as completed) or


imperfective (i.e. action viewed as uncompleted) depends on the coding of P, i.e. accusative vs partitive. The accusative case stands for totally affected P, and the partitive case for partially affected P. In other words, a clause with totally affected P is more transitive than a clause with partially affected P. This degree of transitivity, in turn, is reinforced by the way a clause with totally affected P is interpreted as perfective and a clause with partially affected P as imperfective, as can be seen in: ()

Finnish (Finnic; Uralic: Finland)
a. Liikemies kirjoitti kirjeen valiokunnalle
   businessman wrote letter(ACC) committee-to
   'The businessman wrote a letter to the committee.'
b. Liikemies kirjoitti kirjettä valiokunnalle
   businessman wrote letter(PRTV) committee-to
   'The businessman was writing a letter to the committee.'

Note that in () the verb itself does not have any indication, morphological or otherwise, as to whether it should be interpreted as perfective or as imperfective. Indeed, the same verb form is used in both (a) and (b). Thus, the choice between the perfective and imperfective interpretation is determined precisely by the coding of P, accusative vs partitive. Similar to Finnish is Kalkatungu, which exhibits the correlation between aspect and the coding of P. ()

Kalkatungu (Northern Pama-Nyungan; Pama-Nyungan: Australia)
a. kupaŋuru-ṭu caa kalpin-Ø ḻai-ṉa
   old.man-ERG here young.man-ABS hit-PST
   'The old man hit the young man.'
b. kupaŋuru-Ø caa kalpin-ku ḻai-miṉa
   old.man-ABS here young.man-DAT hit-IPFV
   'The old man is hitting the young man.'

In (a), the action is perceived to be successfully completed (hence totally affected P), while in (b) the action is still being carried out (hence partially affected P). In (a), the agent is coded in the ergative case, and P coded in the absolutive case, as is the case in transitive clauses. In contrast, (b) is an intransitive clause, where the agent is coded in the absolutive case, and P in the dative case. This correlation between aspect and P coding makes sense in that a completed action is likely to involve totally affected P while an uncompleted action entails 


partially affected P (i.e. partially affected because it is still in the process of being affected by the agent's action).

8.5 Semantic maps: 'geography of grammatical meaning'

In §., the concept of family resemblance—recall Wittgenstein's analogy with games—was invoked in order to better understand the degree of membership in a prototypical category (and also the lack of clear-cut category boundaries). Thus, members of a category are interpreted as constituting a network of similarities; members do not meet necessary conditions but may instead coexist in a chain of 'overlapping and criss-crossing' similarity relations. In this kind of similarity network, members may share properties with neighbouring members but do not necessarily have anything in common with non-neighbouring members; members located at one end of the chain may have nothing in common with those located at the other end, but they are still connected indirectly by other members between them within the chain. This concept of family resemblance has recently provided an insight into how to represent the network of multiple functions or meanings of grammatical expressions, be they morphemes, words, or constructions.¹ For instance, a single grammatical expression, e.g. the preposition to in English, may perform multiple functions, each of which constitutes a smaller conceptual category in its own right, e.g. direction, recipient, purpose. These smaller conceptual categories, in turn, may have varying degrees of similarity or family resemblance to each other, within the larger domain of the preposition to. Based on the concept of family resemblance, a useful method of capturing the similarity network of multiple functions has been developed under the name of semantic maps. A semantic map—which also goes by different names, e.g. a mental, cognitive, or implicational map, or a conceptual space—is a geometrical object in which multiple functions of a given grammatical expression are plotted on a map, displaying their conceptual similarities in a clear manner (Haspelmath : ). One simple yet good example of what semantic maps look like is provided in Figure 8.1 (based, together with the ensuing illustration, on Haspelmath's (: –) discussion of the so-called 'dative' functions).

¹ It should be noted that the semantic map method can also be used to capture multiple senses or meanings of lexical items across languages (Haspelmath : –).




Figure 8.1 A semantic map of 'dative' functions in English and French
[The map links the functions predicative possessor, external possessor, direction, recipient, beneficiary, purpose, experiencer, and judicantis; a solid-line box encloses the functions coded by English to, and a dashed-line box those coded by French à.]

In Figure ., the rectangular (unbroken line) box encloses the ‘dative’ functions coded by the English preposition to, while the T-shaped (dashed-line) box groups together the ‘dative’ functions coded by the French preposition à.2 The various ‘dative’ functions are linked by means of straight connecting lines.3 The ‘dative’ functions of English to are exemplified in (), and the ‘dative’ functions of French à in ().

² Contrary to Haspelmath (: ), the French preposition à seems to code the external possessor function also, as illustrated in (), as opposed to ().

() French (Romance; Indo-European: France)
J'ai coupé les cheveux à Marie
I cut the hair to Marie
'I cut Marie's hair.'

() French (Romance; Indo-European: France)
J'ai coupé les cheveux de Marie
I cut the hair of Marie
'I cut Marie's hair.'

³ The beneficiary function is coded by a different preposition in English, as in:

() Kate baked a cake for/*to Keith.

The external possessor function is exemplified by the French example in () in n. , or:

() Spanish (Romance; Indo-European: Spain)
Le robaron el auto a Guillermo
3.IO stole-3PL the car to William
'They stole William's car.'

Lastly, an example of the judicantis (or so-called judger's dative) function comes from German, where the 'judger' expression appears in the dative case:

() German (Germanic; Indo-European: Germany)
Das ist mir zu warm
that is 1SG.DAT too warm
'That's too warm for me.'




() English (Germanic; Indo-European: United Kingdom)
a. Jane walked to the park. [direction]
b. Jane gave the book to her friend. [recipient]
c. The idea seems bizarre to me. [experiencer]
d. Jane left home early to catch the morning train. [purpose]

() French (Romance; Indo-European: France)
a. Mes amis iront à Paris [direction]
   my friends will.go to Paris
   'My friends will go to Paris.'
b. Marie donnera le livre à la femme [recipient]
   Marie will.give the book to the woman
   'Marie will give the book to the woman.'
c. L'homme a paru à Marie malade [experiencer]
   the.man AUX looked to Marie ill
   'The man looked ill to Marie.'
d. Ce chien est à moi [predicative possessor]
   this dog is to me
   'This dog is mine.'

On the semantic map, similarity among the functions is indicated by spatial adjacency: spatial proximity represents conceptual closeness of functions. For instance, in Figure 8.1 direction is more closely related conceptually to recipient than to beneficiary, although direction and beneficiary are indirectly related to each other (through the intermediary of recipient). Thus, direction is positioned more closely to recipient on the semantic map than to beneficiary. Moreover, similarity among the functions is graphically indicated by connecting lines. Where no direct connecting line is drawn between two functions (e.g. purpose and recipient), there is no direct conceptual relatedness between them. Also note that left–right or top–bottom orientation of the functions on the semantic map has no significance.

Semantic maps are constructed strictly on the basis of cross-linguistic comparison. This is done in two stages. First, every function on a semantic map is recognized and justified only if there are at least two languages that differ with respect to the function. What this entails is that if a function, albeit a logical possibility, is not supported by such cross-linguistic evidence, there is no justification for recognizing that function. For instance, English and French both code direction and


recipient with one and the same preposition (i.e. to in English and à in French). Thus, in order to recognize direction and recipient as separate functions, we need at least one language that distinguishes between the two functions by using different grammatical elements to code them. Indeed, German provides the justification for keeping direction and recipient separate, as it employs zu or nach for direction, and the dative case for recipient. Moreover, cross-linguistic comparison is crucially important for purposes of ascertaining whether functions are related to each other within one and the same conceptual category. If a given grammatical element in one language carries out function x, function y, and function z, this may well be but a coincidence, but if the three functions are performed by a single grammatical element in language after language, we are dealing with conceptual relatedness among them (i.e. the criterion of measure of recurrence in §.).

Second, functions on a semantic map must be arranged in such a way that they cover a contiguous segment of the semantic map. Once again, this is carried out on the basis of cross-linguistic evidence. For purposes of simple illustration, let's focus on three functions featured in Figure 8.1: purpose, direction, and recipient. These functions can be arranged in any of the following three ways:

()
a. purpose——direction——recipient
b. direction——purpose——recipient
c. direction——recipient——purpose

The idea of arranging the functions in a contiguous manner on the semantic map is to indicate that they must be coded likewise, in a contiguous manner, by one and the same grammatical element. For instance, consider (a), which predicts that no languages code purpose and recipient without also coding direction by means of one and the same grammatical element. While the three arrangements in () are all able to capture the fact that the English preposition to codes all three functions, (b) must be ruled out when the French data, as in (), are taken into account. In particular, the French preposition à does not code purpose, as in (), although it codes direction and recipient (a, b).

() French (Romance; Indo-European: France)
*J'ai quitté la fête tôt à arriver à la maison en bon temps
I left the party early to arrive to the home in good time
'I left the party early to get home in time.'


The arrangement in (c) will also have to be eliminated because there are languages that use one and the same grammatical element to code direction and purpose without coding recipient as well. In German, for instance, the preposition zu codes both direction and purpose but not recipient; for the expression of recipient, as already pointed out, German uses the dative case (i.e. dem Mann in (c)), as in: () German (Germanic; Indo-European: Germany) a. Anna ging zum Spielen in den Garten [purpose] Anna went to play in the garden ‘Anna went into the garden to play.’ b. Ich gehe zu Anna [direction] I go to Anna ‘I am going to Anna’s place.’ c. Die Frau gab dem Mann das Buch [recipient] the.NOM woman gave the.DAT man the.ACC book ‘The woman gave the book to the man.’ The foregoing data point to the arrangement in (a) as the one to be incorporated into the semantic map in Figure .. Needless to say, the semantic map in Figure .—and other semantic maps so far proposed for that matter, as in e.g. perfect aspect (Anderson ), reflexives and middles (Kemmer ), intransitive predication (Stassen ), pronouns (Haspelmath ; Cysouw ), modality (van der Auwera and Plungian ), depictive adjectivals (van der Auwera and Malchukov )—must be tested, and revised or falsified on the basis of far more cross-linguistic evidence. There are three important points to be made about semantic maps. First, the fact that functions must cover a contiguous area on a semantic map provides the basis of generating implicational statements. For instance, consider again (a), which represents one contiguous portion (i.e. sub-map) of the semantic map in Figure .. The contiguity of purpose, direction, and recipient on the semantic map points to a possible implicational statement that if a language uses a grammatical element to express purpose and recipient, that grammatical element must also code the intervening function, namely direction (Haspelmath : –). 
In other words, on the basis of the portion of the semantic map in question, we can formulate a testable hypothesis that no language uses one and the same grammatical element to code 


purpose and recipient without coding direction as well. The rest of the semantic map can also be interpreted productively in a similar manner. Thus, 'each semantic map embodies a series of implicational [statements]' (Haspelmath : ). This characteristic of a semantic map is generally subsumed under the Function Contiguity Hypothesis (Blansitt : ) or, specifically, the Semantic Map Connectivity Hypothesis (Croft : ): in the case of semantic maps such as the one in Figure 8.1, functions are coded identically only if the identically coded functions cover a contiguous area on a semantic map.

Second, the Function Contiguity Hypothesis also allows us to draw testable inferences about the direction of language change. For instance, given the semantic map in Figure 8.1, if a direction expression is extended to predicative possessor, we can deduce that it must already have been extended to the expression of recipient, because, according to the Function Contiguity Hypothesis, it is, in principle, not possible to 'skip' over the intervening function, i.e. recipient. In other words, a grammatical expression can only be extended in terms of functions in an incremental or step-by-step manner, e.g. from direction to recipient to predicative possessor, not from direction directly to predicative possessor, bypassing recipient. Based on this kind of reasoning, van der Auwera and Plungian (: ) propose a number of possible directions of change based on their semantic map of modality, e.g. a possible change from participant-internal to participant-external modality vs an impossible change from epistemic possibility to participant-external modality.⁴ Typically, arrows are added to functions to indicate the direction of change explicitly, as in Figure 8.2 (based on Haspelmath : ).

⁴ Participant-internal modality refers to a possibility or necessity internal to a participant engaged in a given state of affairs, as exemplified by can or need in ().

() a. John can get by with sleeping three hours a night.
   b. Tim needs to sleep eight hours a night.

Participant-external modality, in contrast, refers to circumstances that are external to a participant engaged in a given state of affairs, and render that state of affairs possible or necessary, as illustrated by can or have to in ().

() a. To get to the museum, you can take bus .
   b. To get to the museum, you have to take bus .

Epistemic possibility refers to a judgement of the speaker as to whether a proposition is uncertain or probable, as illustrated by may in ().

() John may have arrived.

In (), the speaker regards John's arrival as uncertain. This uncertainty relates to the possibility of John's arrival because John's arrival is judged possible as opposed to other judgements, e.g. John is an unreliable person. For further detailed discussion, refer to van der Auwera and Plungian's () work on the semantic map of modality.

Figure 8.2 A semantic map of 'dative' functions with directions of change
[The same functions as in Figure 8.1—predicative possessor, external possessor, direction, recipient, beneficiary, purpose, experiencer, judicantis—with arrows added to the connecting lines to indicate directions of change.]

Needless to say, such inferred directions of change must be verified against historical data, although, admittedly, not many languages have a sufficient amount of historical-linguistic data for verification (van der Auwera : –). For instance, three of the connecting lines in Figure 8.2 are without arrowheads, which suggests that the direction of change involving them remains to be determined.

Third, there is one major weakness of the semantic map method that needs to be highlighted, albeit briefly: the issue of distance between connected functions. Semantic maps, at least in their standard form (e.g. Haspelmath ), do not differentiate semantic links between different functions in terms of (quantifiable) similarity. That is, each connecting line on the semantic map in Figure 8.1 or Figure 8.2 represents the existence of conceptual relatedness between two functions without showing exactly how similar they are to each other (i.e. degree of conceptual relatedness). All that each connecting line indicates is the existence of conceptual relatedness between the two functions involved. Take direction and purpose on the one hand, and recipient and beneficiary on the other, in Figure 8.1 or Figure 8.2. Though the connecting line between direction and recipient is much longer than that between direction and purpose, the difference in distance between these two pairs of connected functions means nothing. The connecting line between direction and recipient in Figure 8.1 or Figure 8.2 is rendered longer than that between direction and purpose so that the whole semantic map looks visually neat. In other words, the length of each connecting line (i.e. the distance between two connected functions) is non-significant.⁵ However, we

cannot assume that the semantic links between the different functions are necessarily of the same degree of similarity.

⁵ Of course, this is not to say that the semantic map completely fails to capture degree of conceptual relatedness between functions. For instance, in Figure 8.1 direction and beneficiary are less conceptually related to each other than direction and purpose, and this difference in conceptual relatedness is mirrored by the greater distance between direction and beneficiary and the shorter distance between direction and purpose. This is because recipient comes in between direction and beneficiary while there is no intervening function between direction and purpose. What the semantic map fails to do, however, is to differentiate quantitatively the distance between one pair of directly connected functions and the next, because the length of each connecting line is non-significant.

Some linguistic typologists have indeed made proposals to eliminate this weakness of the semantic map method while retaining its original analytical value (e.g. Cysouw , , ; Croft and Poole ; and various articles assembled in Theoretical Linguistics . () and Linguistic Discovery . ()). Provided that we know how to measure similarity between connected functions in individual languages, e.g. based on the amount of shared formal coding or grammatical behaviour—admittedly, measuring similarity itself is also something to be worked out (see Cysouw : –; cf. §.)—the issue of distance between connected functions can be resolved in two possible ways. First, the frequency of occurrence or attestation—that is, how many of the sampled languages exhibit the semantic link between two given functions, e.g. between direction and recipient in Figure 8.1—can be incorporated into the construction of a semantic map. The more languages exhibit the semantic link, the more conceptually similar the two functions are to each other (Cysouw : –; : ). The second way of measuring the distance between two directly connected functions is to quantify the degree of conceptual relatedness in individual languages, for instance, '' being 'similar' and '' being 'dissimilar', and then to calculate the aggregate similarity value for all languages in a sample (for further preliminary discussion, see Cysouw : –). Once we have quantified the distances between each pair of connected functions this way, we may, of course, also want to represent them in one visual form or another. One simple way to do this is to adopt the idea that the more similar two functions are to each other, the closer they should be plotted in a visual representation.

For instance, Croft and Poole () and Cysouw (, ) propose that so-called Multidimensional Scaling (MDS) displays—an example of which is given in Figure 8.3 (Croft and Poole : , based on Haspelmath )—may be suitable for this purpose, because in MDS distances



Figure 8.3 An MDS display of indefinite pronouns
[A two-dimensional MDS plot locating the indefinite-pronoun functions free.ch, spec.know, compar, spec.unkn, irr.nonsp, dir.neg, condit, indir.neg, and question.]

between functions are 'indicative' of their degree of (dis)similarity—very much as in the so-called Euclidean space, where distance between points (= functions) represents degree of (dis)similarity. For instance, in Figure 8.3, which is an MDS display of the various functions of indefinite pronouns, the distance between irr.nonsp (= irrealis non-specific) and spec.unkn (= specific unknown) is much shorter than that between irr.nonsp and question.⁶ This difference in distance is indicative of the difference in conceptual similarity between the two pairs of functions: the conceptual similarity between irr.nonsp and spec.unkn is much stronger than that between irr.nonsp and question. This also explains why no connecting lines appear between different functions in an MDS display (e.g. Figure 8.3), as opposed to on a semantic map (e.g. Figure 8.1). In the MDS model, the use of connecting lines is simply otiose because distance between functions correlates non-trivially, in relative terms, with degree of conceptual (dis)similarity.

⁶ These indefinite pronoun functions are illustrated by the following English examples, with the indefinite pronouns in question. For a full discussion of the various functions of indefinite pronouns, see Haspelmath (: –).

() Please say something. [irr.nonsp]
() I heard something, but I couldn't tell what it was. [spec.unkn]
() Did anyone know about this? [question]

Further advantages of using the MDS model in preference to the semantic map model are claimed to be that the former, unlike the latter, is mathematically well defined and computationally easy to implement with large datasets and with a large number of functions (Croft and Poole : , ). Cysouw (: ), however, sounds a note of warning that MDS should not be considered an improvement over the semantic map model, because MDS involves 'a strong reduction of available information'. For instance, van der Auwera () points specifically to this kind of drawback when he argues that what may emerge as a direct semantic link between two functions in the MDS model may have been mediated by a third function that has since become obsolete or marginal—that is, the 'direct' semantic link in an MDS display may turn out to be merely indirect, that is, a historical fact. Moreover, different kinds of semantic relatedness, e.g. specialization or generalization of meaning, metonymy, or metaphor, can be nicely captured in the semantic map model, but they may not even be captured in the MDS model. To wit, any representation that attempts to represent conceptual or semantic relatedness must be both diachronically and semantically informed (van der Auwera : ).
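To make the MDS idea concrete, here is a minimal, self-contained sketch of classical (Torgerson) MDS—one standard way of computing such a display—run on an invented dissimilarity matrix for four 'dative' functions. The numbers are purely illustrative (in practice they might be derived from cross-linguistic coding data, as discussed above); the point is only that the embedding places conceptually similar functions close together:

```python
import numpy as np

# Invented (illustrative) dissimilarities among four 'dative' functions;
# in practice such values could come from cross-linguistic coding data.
functions = ["direction", "recipient", "beneficiary", "purpose"]
D = np.array([
    [0.000, 0.300, 0.707, 0.400],   # direction
    [0.300, 0.000, 0.412, 0.500],   # recipient
    [0.707, 0.412, 0.000, 0.762],   # beneficiary
    [0.400, 0.500, 0.762, 0.000],   # purpose
])

def classical_mds(D, dims=2):
    """Torgerson's classical MDS: find coordinates whose Euclidean
    distances approximate the dissimilarities in D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centred Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)      # ascending eigenvalues
    top = np.argsort(eigvals)[::-1][:dims]    # keep the largest `dims`
    scale = np.sqrt(np.clip(eigvals[top], 0.0, None))
    return eigvecs[:, top] * scale            # one coordinate row per function

coords = classical_mds(D)
for name, (x, y) in zip(functions, coords):
    print(f"{name:12s} {x:7.3f} {y:7.3f}")
```

In the resulting display, direction ends up closer to recipient than to beneficiary, mirroring the conceptual relations discussed above; as in Figure 8.3, the absolute axis values and the orientation of the plot carry no meaning—only relative distances do.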

8.6 Concluding remarks

The classical approach to categorization fails to work because it does not accord with the psychological reality of human categorization. The alternative, psychologically valid approach to categorization, namely prototype theory, relies on the concept of prototype: members of a category are organized around the most salient or typical instances of that category, i.e. the prototype. Prototype theory has also been extended fruitfully to (more) abstract categories, including grammatical categories (e.g. transitivity); grammatical categories exhibit prototype effects in the same way natural entities or cultural artefacts do. Moreover, the concept of family resemblance has been discussed with a view to understanding degree of category membership as well as not-so-clear category boundaries. Also included in that discussion is the semantic map, a graphical representation of similarity relations among multiple


functions of grammatical expressions (or ‘the geometry of grammatical meaning’, as Haspelmath  puts it).

Study questions

1. Here is a list of household items: picture; wastebasket; cupboard; bookcase; stool; chair; rug; bureau; television; sofa; mirror; desk; vase; piano; magazine rack; divan; dresser; clock; heat pump; curtains; lamp; cushion; bed; videogame console (e.g. PlayStation, Xbox, Wii); table.
(i) Using a -point scale ranging from  (= a very good example), to  (= a moderately good example), to  (= a very bad example or a non-example), rank the household items listed above in terms of membership in the FURNITURE category;

(ii) ask two or three people to do the same task and compare their results with your own; and
(iii) discuss whether there are any items that you or your consultants would categorize as something other than furniture. If so, what would you or they rather categorize them as?

2. Choose two languages other than English, French, German, and Spanish (e.g. languages you or your classmates speak or have studied) and find out how the 'dative' functions featured in Figure 8.1 are coded in the chosen languages. Once you have completed this task, draw boundaries around those functions that are coded by one and the same grammatical expression. It may be the case that more than one grammatical expression is involved, in which case there will be more than one grouping of functions on the semantic map, and some functions may also be included within the boundaries of more than one grammatical expression (i.e. overlapping between the grammatical expressions involved).

3. Consider the following English sentence in ().

()

The students like the new teacher.

While it involves two participants, the sentence in () is very low on many transitivity properties such as kinesis, aspect, punctuality, affectedness of P, and agency. For instance, the new teacher is not a patient (but a theme), and liking is not an action. Nevertheless, () behaves grammatically like transitive clauses. For instance, it has a passive counterpart, as in (). ()

The new teacher is liked by the students.

Find out how sentences like () are expressed in the languages examined for Question 2, that is, whether they behave much like transitive sentences.




If not, how do they behave differently from typical transitive sentences, e.g. The intruder killed the guard, in terms of case marking, verb morphology, word order, and other grammatical properties?

4. In Japanese, plural number coding depends on two factors: NP type and referent type. In this language, plural number coding works as follows (Downing : ):
(i) pronouns must be coded overtly for plural number, regardless of whether they refer to human, animate (= non-human, animate), or inanimate entities;
(ii) human proper nouns must be coded overtly for plural number, animate proper nouns are rarely coded overtly for plural number, and it is impossible for inanimate proper nouns to be coded overtly for plural number; and
(iii) it is possible for human common nouns to be coded overtly for plural number, while animate common nouns are rarely coded overtly for plural number, and it is impossible for inanimate common nouns to be coded overtly for plural number.
Based on the foregoing information, construct a semantic map of plural number coding in Japanese. Within the overall boundaries of number coding, a normal line must be used to enclose obligatory plural number coding, a double-dashed line to enclose possible plural number coding, and a single-dashed line to enclose rare plural number coding.

Further reading

Haspelmath, M. (). 'The Geometry of Grammatical Meaning: Semantic Maps and Cross-linguistic Comparison', in M. Tomasello (ed.), The New Psychology of Language: Cognitive and Functional Approaches to Language Structure. Mahwah, NJ: Lawrence Erlbaum, –.
Labov, W. (). 'The Boundaries of Words and Their Meanings', in C.-J. N. Bailey and R. W. Shuy (eds.), New Ways of Analysing Variation in English. Washington: Georgetown University Press, –.
Rosch, E. (). 'On the Internal Structure of Perceptual and Semantic Categories', in T. E. Moore (ed.), Cognitive Development and the Acquisition of Language. New York: Academic Press, –.
Rosch, E. (). ‘Cognitive Representations of Semantic Categories’, Journal of Experimental Psychology: General : –. Taylor, J. (). Linguistic Categorization: Prototypes in Linguistic Theory. Oxford: Oxford University Press. van der Auwera, J. and Gast, V. (). ‘Categories and Prototypes’, in J. J. Song (ed.), The Oxford Handbook of Linguistic Typology. Oxford: Oxford University Press, –.



OUP CORRECTED PROOF – FINAL, 27/11/2017, SPi

Part II Empirical dimensions


9 Phonological typology

9.1 Introduction



9.2 Segmental typology



9.3 Syllabic typology



9.4 Prosodic typology



9.5 Concluding remarks



9.1 Introduction

The American phonologist Larry Hyman's () article, included in the tenth anniversary issue of the field's flagship journal Linguistic Typology, is entitled 'Where's Phonology in Typology?'. The title implies that there is such a dearth of phonological research within linguistic typology that the question needs to be raised in public. Indeed, phonology has not received as much attention from linguistic typologists as morphology and syntax have, even though not only pioneers of linguistic typology, e.g. Trubetzkoy () and Jakobson ( []), but also the father of modern linguistic typology, Greenberg (b), were deeply concerned with typological issues in phonology (see Hyman : ). This situation may come as an even greater surprise when we consider the relatively wide availability of phonological data on the world's languages. While it may, as discussed in §., be an easy task to find grammatical descriptions lacking data on certain grammatical constructions, e.g. comparative or causative constructions, it is extremely difficult, if not impossible, to find grammatical descriptions that fail to provide data on what sounds (e.g. consonants, vowels) make up the phonological system in

the languages concerned. It may not be inaccurate to say that nearly all grammatical descriptions provide basic phonological data suitable for cross-linguistic comparison. Thus, we would expect a reasonable, if not comparable, amount of phonological research to have been carried out in linguistic typology, but the reality is very different indeed.

There may be a number of reasons for this unexpected state of affairs. First of all, more linguists seem to work on morphology and syntax (or morphosyntax) than on phonology—to begin with, universities and research institutes tend to employ more syntacticians than phonologists. (To answer why this is so, we would probably need to entertain non-linguistic explanations, e.g. sociological ones.) Not surprisingly, only a small number of linguists specialize in phonological typology, as opposed to the large number specializing in morphosyntactic typology.

Second, there is little difference between doing phonological theory and doing phonological typology, because, as Hyman (: ) puts it, 'one can't do insightful [phonological] typology without addressing the same analytical issues that confront phonological theory'. For instance, two languages may have identical sounds but very different phonological systems, which is an 'intrinsically typological' issue (Hyman : ). Thus, research in phonological typology may not necessarily be presented or treated as typological but rather as theoretical.

Third, linguistics typically investigates the connection between form and meaning, because language is ultimately a means of communication. Linguistic form is used to express meaning or to carry out function. Morphology and syntax encode meaning, but phonology does not involve the mapping between form and meaning. Rather, phonology organizes speech sounds, which are physical or acoustic entities.
Given that one of the primary objectives of linguistic typology is to discover structural variation among the world's languages (see Chapter ), it may have been a matter of priority to ascertain how one and the same meaning—or function, for that matter—is expressed by different languages in the world. Since meaning does not come into the picture in phonological investigations (except when determining the phoneme inventory of a language), there has perhaps been less focus or emphasis on doing phonological typology. The foregoing comments do not mean that there has hardly been any phonological research done within linguistic typology. Nothing could be further from the truth. There is a (small) group of dedicated linguists whose primary concern is phonological typology, as will be surveyed in the present chapter (for a book-length treatment, see Gordon ).

Similarly to other areas of research in linguistic typology, phonological typology aims to discover what is possible, as opposed to impossible, or, more realistically, what is probable, as opposed to improbable, in the phonological systems of the world's languages. Thus, Maddieson (: ) outlines the primary goals of phonological typology as discovering: (i) the inventory of sounds; (ii) the sequencing of sounds (within syllable structure); and (iii) their occurrence in different structural positions (onset vs coda).

First, what are the sounds that are available in the world's languages? Related to this question is the question of which sounds are more common than others. Also of relevance in this context are the questions of which sounds are found in all (known) languages (i.e. which sounds every language must have), and which sounds are infrequently or even rarely attested in the world's languages.

Second, it is of much typological interest to ascertain which sounds can be combined sequentially, that is to say, which sounds can or cannot occur before or after which other sounds. Once again, a more modest question may be: which sounds are (un)likely to appear before or after which other sounds? This is an important phonological issue, traditionally investigated under the rubric of sequential constraints in phonological theory.

Third, phonological systems can also be studied in terms of what structural positions sounds may occur in. Consonants and vowels combine into larger phonological units, i.e. syllables. In particular, how many sounds can occur in onset, as opposed to coda, position is a phonological issue of typological significance. Finally, as opposed to segmental phonology (i.e. dealing with segments or sounds), there is also non-segmental or prosodic typology, dealing with issues such as tone and stress.

Needless to say, discovering various patterns of relative frequency and co-occurrence is only half the story.
These patterns are in need of explanation: why are they the way they are (Maddieson : )? In this chapter, three different areas of phonological typology will be surveyed: (i) segmental typology; (ii) syllabic typology; and (iii) prosodic typology. Also discussed, in the true tradition of linguistic typology, will be possible correlations within as well as across the different areas of phonological typology, e.g. whether there is a correlation between the size of the consonant inventory and that of the vowel inventory, whether there is a correlation between the size of the consonant or vowel inventory and the complexity of syllable structure, and the like. Where available, possible explanations will be provided that deal with observed patterns of relative frequency and co-occurrence. 

9.2 Segmental typology

In this section, we will look at the world's languages in terms of consonants and vowels, and their relative frequencies of occurrence. The following online databases will be useful for purposes of discussing phonological typology (see §.): (i) PHOIBLE Online (phoible.org); (ii) UPSID (UCLA Phonological Segment Inventory Database) (linguistics.ucla.edu/faciliti/sales/software.htm or web.phonetik.uni-frankfurt.de/upsid.html); (iii) The World Atlas of Language Structures Online or WALS Online (wals.info); and (iv) StressTyp (st.ullet.net).

9.2.1 Consonants

When languages are evaluated in terms of consonant inventory size, there are languages with small inventories as well as languages with large inventories, not to mention languages that fall between these two groups. The difference between the smallest consonant inventory and the largest is staggering. Rotokas (West Bougainville: Papua New Guinea) has only six consonants, whereas !Xóõ (Southern Khoisan; Khoisan: Botswana) has as many as  consonants.1 Maddieson (a) surveys  languages in terms of consonant inventory size. The typical consonant inventory size in the sample is in the low twenties, which Maddieson (a) takes to be the average size for the consonant inventory ( consonants). Smaller than the average inventories are either small (– consonants) or moderately small (– consonants). Consonant inventories larger than the average are either moderately large (– consonants) or large ( or more consonants). The languages in Maddieson's (a) sample break down, according to his size-based criterion, as in Table ..

1 Depending on one's analysis (see Hyman , ), !Xóõ is also said to have  consonants (Maddieson : ).

Table 9.1 Consonant inventory sizes and their representations

  Size                Frequency
  Small                (.%)
  Moderately small     (.%)
  Average              (.%)
  Moderately large     (.%)
  Large                (.%)
  Total:               (%)

One interesting observation that has emerged from Maddieson's survey is that larger inventories tend to contain consonants that are considered to be more complex (i.e. sounds requiring more effort or energy in production) and are absent from smaller inventories. Maddieson's (: ) set of more complex consonants includes: clicks, glottalized consonants, lateral fricatives and affricates, uvular and pharyngeal consonants, and dental/alveolar non-sibilant fricatives. In Maddieson's () survey of  languages, just over one-quarter (%) of the languages with smaller than average consonant inventories have even one member of the set, just over half (%) of the languages with average inventories have at least one member of the set, and over two-thirds (%) of the languages with larger than average inventories possess one or more of the more complex sounds. Incidentally, this tendency for more complex consonants to occur in larger inventories is known as the Size Principle (Lindblom and Maddieson ).

When it comes to individual consonants, most languages have both voiced and voiceless plosives at bilabial, coronal, and velar places of articulation (Maddieson : ). Indeed, Hyman (: ; but cf. Hyman : ) goes so far as to propose a language universal to the effect that every phonological system has plosives—it is worth noting that Rotokas, with the smallest known consonant inventory, has only six plosives—namely /p, t, k, b, d, g/, with notational simplification (Maddieson a). Maddieson (: ) makes further interesting observations about the frequencies of other consonants.
Thus, it is also common to find voiced nasals at the same places of articulation, with the palatal nasal /ɲ/ often attested as well. Many languages are also found to have a palato-alveolar affricate /ʧ/. When it comes to fricatives, most typically, only voiceless fricatives are attested, with the most common fricative being a coronal sibilant, i.e. some kind of /s/. Many languages also have a labio-dental fricative /f/ and a palato-alveolar sibilant fricative /ʃ/. Languages tend to have two liquids, a voiced coronal lateral approximant /l/ and a rhotic /r/, the latter most frequently being an alveolar trill. Voiced palatal and labial-velar approximants or glides—i.e. /j/ and /w/, respectively—occur in the great majority of Maddieson’s sampled languages, along with the glottal plosive /ʔ/ and the voiceless approximant /h/. 

According to the PHOIBLE database, the aforementioned consonants are the most common and can be ranked in order of frequency of occurrence, as in Table .. Note that Maddieson (: ) also identifies the consonants in Table ., with the exception of /ɲ/, as belonging to the prototypical consonant inventory in the world's languages. The total number of phonological systems included in the PHOIBLE database amounts to , (found in , distinct languages), and the frequency of occurrence of each consonant in the PHOIBLE database is presented in percentage terms in Table ..2

Table 9.2 Twenty most common consonants in the PHOIBLE database

  Segment   Frequency (%)
  m
  k
  j
  p
  w
  n
  s
  t
  b
  l
  h
  g
  ŋ
  d
  f
  ɲ
  ʧ
  ʔ
  ʃ
  r

2 Some languages may have more than one variety represented, contributing more than one phonological system to the database. This explains why there are far more sound inventories than languages in the PHOIBLE database.
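Cross-linguistic segment frequencies of the kind reported in Table 9.2 are simply the share of inventories that contain a given segment. A minimal sketch of the computation (the three toy inventories below are invented for illustration; PHOIBLE's real inventories are distributed as data files):

```python
from collections import Counter

# Three invented toy inventories; real PHOIBLE inventories are much larger.
inventories = [
    {"m", "k", "p", "a", "i", "u"},
    {"m", "n", "t", "k", "a", "i"},
    {"p", "t", "k", "s", "a", "o"},
]

# Count, for each segment, how many inventories contain it.
counts = Counter(seg for inv in inventories for seg in inv)

# Frequency = share of inventories containing the segment.
freq = {seg: n / len(inventories) for seg, n in counts.items()}

print(f"k: {freq['k']:.0%}, m: {freq['m']:.0%}")  # → k: 100%, m: 67%
```

Ranking `freq` from highest to lowest share reproduces a frequency table of the Table 9.2 kind for any collection of inventories.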



Maddieson (a) also examines the areal distribution of the different consonant inventory sizes. For instance, while languages with average size consonant inventories are found in most areas of the world, languages either with smaller than average consonant inventories or with larger than average consonant inventories display areal disparities in their distribution. Thus, languages with smaller than average consonant inventories are predominantly found in the Pacific region (including New Guinea), in South America, and in the eastern part of North America. Languages with larger than average consonant inventories are well represented in three main areas: (i) Africa, particularly south of the equator; (ii) the heart of the Eurasian landmass; and (iii) the north-west of North America. There are a large number of consonants that are very infrequently or rarely attested in the world’s languages. The UPSID database and the PHIOBLE database both list such infrequent or rare sounds and their lists are very long indeed. For instance, nearly .% of the  sounds (both consonants and vowels) catalogued in the UPSID database occur in only one language, while almost % of the , sounds (both consonants and vowels) catalogued in the PHOIBLE database occur in only one phonological system. There are too many rarely occurring consonants to enumerate them all, and three will suffice: /bn/ (voiced bilabial plosive with a nasal release, as attested in Eastern Arrernte (Central Pama-Nyungan; Pama-Nyungan: Australia)), /ðɣ/ (velarized voiced dental fricative, as attested in Libyan Arabic (Semitic; Afro-Asiatic: Libya)) and /ŋˤ/ (pharyngealized velar nasal, as attested in Ju|0 hoan (Northern Khoisan; Khoisan: Angola, Namibia, and Botswana)). Maddieson (d) discusses the absence of three classes of consonants attested in the majority of the world’s languages, i.e. bilabials, fricatives, and nasals. 
For instance,  out of Maddieson’s (d) sample of  languages (.%) have all the three classes of consonants, while only  languages (.%) lack one or more of these classes of consonants. Only five languages (almost .%) do not have bilabial consonants; all of these languages happen to be found in North America, i.e. Tlingit (Athapaskan; Na-Dene: US), Chipewyan (Athapaskan; Na-Dene: Canada), Oneida (Northern Iroquoian; Iroquoian: US), Wichita (Caddoan: US), and Eyak (Na-Dene: US). The absence of fricatives is far more frequent than the absence of bilabials:  languages (or .%) in Maddieson’s (d) sample lack fricatives. The absence of fricatives is generally regarded as characteristic of the languages 

OUP CORRECTED PROOF – FINAL, 27/11/2017, SPi

PHONOLOGICAL TYPOLOGY

of Australia, although languages lacking in fricatives are also found outside Australia, e.g. Kiribati (Oceanic; Austronesian: Kiribati), Lango (Nilotic; Nilo-Saharan: Uganda), A-Pucikwar (Great Andamanese: India), and Aleut (Aleut; Eskimo-Aleut: US). There are twelve languages (.%) in Maddieson’s (d) sample that do not have any nasal consonants, e.g. Rotokas, Pirahã (Mura: Brazil), and Quileute (Chimakuan: US). Maddieson (d), however, takes note of the fact that virtually all languages have ‘[t]he ability to direct the flow of air through the nose’. In other words, when we say that languages such as Rotokas do not have nasal consonants, what we mean is that nasal consonants are not used as contrastive sounds (i.e. phonemes). Maddieson (e) also examines the presence of uncommon consonants such as clicks, e.g. /ʇ, ʖ, ʗ/, labial-velars, e.g. /k͡p, ɡ͡b/, pharyngeals, i.e. /ħ, ʕ/ and dental/alveolar non-sibilant fricatives, i.e. /θ, ð/. The status of these consonants as uncommonly attested sounds is clearly supported by the fact that  (.%) out of Maddieson’s (e)  sampled languages do not have any of these uncommon consonants. Clicks occur in ten languages (.%), labial-velar plosives in  languages (.%), pharyngeals in  languages (.%) and dental/alveolar non-sibilant fricatives in  languages (.%). There is only one language that has clicks, pharyngeals, and dental/alveolar non-sibilant fricatives, i.e. Dahalo (Southern-Kushitic; Afro-Asiatic: Kenya), and two languages that have pharyngeals and dental/alveolar non-sibilant fricatives, i.e. Nenets (Samoyedic; Uralic: Russia) and Kabardian (North-West Caucasian: Russia). All the Khoisan languages of southern Africa have clicks, and some languages in East Africa also have clicks. Labial-velar plosives are found in only two areas of the world, namely West and Central Africa, and the eastern end of New Guinea. 
Pharyngeals occur in a number of Afro-Asiatic languages, with a moderate concentration of such languages in northern and eastern Africa and neighbouring Arabia. Another group of languages with pharyngeals is located in the Caucasus. The areal distribution of dental/ alveolar non-sibilant fricatives is ‘practically world-wide’, although only .% of Maddieson’s (e) sampled languages have them. Earlier, we mentioned the Size Principle, whereby it is predicted that more complex sounds will be more frequently found in languages with larger consonant inventories. This prediction is borne out by Maddieson’s (e) data, as reproduced in Table .. 

Table 9.3 Languages with uncommon consonants by consonant inventory size

  Inventory size      Per cent with any of the uncommon consonants
  small               .
  moderately small    .
  average             .
  moderately large    .
  large               .

From Table . a clear pattern emerges: the proportion of languages with one or more uncommon consonants increases as the consonant inventory size increases.

9.2.2 Vowels

In this section, we will focus on three physical dimensions pertaining to the basic quality or 'timbre' of a vowel sound: height, frontness/backness, and lip position (i.e. rounded vs unrounded). In Maddieson's (b) survey of  languages, the average number of vowels is just under six. The smallest known vowel inventory is two, and the largest is fourteen. There are four languages with only two vowel qualities, two examples of which are Yimas (Lower Sepik; Lower Sepik-Ramu: Papua New Guinea) and Abkhaz (North-West Caucasian: Georgia). There is only one language with fourteen vowel qualities, namely German (Germanic; Indo-European: Germany). Only two languages have thirteen vowel qualities, i.e. British English (Germanic; Indo-European: UK) and Bété (Kru; Niger-Congo: Côte d'Ivoire). Maddieson (b) reports that considerably more languages have an inventory of five vowels than any other number, i.e.  (.%) of the sampled languages. The next most frequent vowel inventory size is six, and these two inventory sizes account for just over half of the  sampled languages. Table . provides the frequencies of the two sizes, i.e. five and six vowels, classified together as average, and two more vowel inventory sizes, i.e. small and large, in Maddieson's (b) sample.

Table 9.4 Vowel quality inventories and their representations

  Size                  Frequency
  small (– vowels)       (.%)
  average (– vowels)     (.%)
  large (– vowels)       (.%)
  Total:                 (%)

Maddieson (: ) also notes that other properties of vowels such as nasalization, pharyngealization, and phonation types other than normal voicing (e.g. creaky, breathy voice) tend to occur in languages with larger than average vowel inventories. Maddieson (b) observes that the most common or the prototypical set of vowel qualities comprises the following five: /i, e, a, o, u/. The frequency of occurrence of each of these five vowel qualities, according to the PHOIBLE database, is represented in per cent terms in Table .. As is the case with the twenty most common consonants, the total number of the sound inventories surveyed in PHOIBLE is ,. Table 9.5 The five most common vowel qualities Vowel

Frequency (%)

i



a



u



o



e



As is the case with consonants (see §..), there are also a large number of vowels that are very infrequently or rarely attested in the world's languages. There are too many such vowels to list them all here, and three will suffice: /ʊ̯/ (non-syllabic near-high, near-back vowel, as attested in Khasi (Khasian; Austro-Asiatic: India)), /ə̘ː/ (long schwa with advanced tongue root, as attested in Khmer (Austro-Asiatic: Cambodia)), and /ø̃/ (nasalized mid-high, front, rounded vowel, as attested in Chuvash (Turkic; Altaic: Russia)).


Maddieson (: ) points out that the three dimensions of the vowel quality can themselves be ranked with respect to each other to the effect that height takes precedence over frontness/backness, which, in turn, take precedence over rounding. In fact, ‘[n]o language is known which does not have some distinctions of height’ (Maddieson : ; see also Hyman : ,  for the same point). Indeed, some languages in the North-West Caucasian, Arandic, and Chadic families are said to have height as the only underlying contrast. The relative precedence of height in vowel quality over frontness/backness or over rounding is also evident in that vowel harmony involving backness and/or rounding is much less common than vowel harmony involving what is fundamentally height (Maddieson : –). Moreover, no languages are known to make use of rounding without also having some kind of variation in frontness/backness (Maddieson : ); based on the UPSID database, Schwartz et al. (: ) arrive at a similar conclusion that no languages have a rounded front vowel of a given height without its unrounded counterpart. Maddieson (b) also discusses the areal distribution of the different vowel inventory sizes. Given the fact that just over half of the sampled languages have average vowel inventories, it comes as no surprise that languages with average vowel inventory sizes are the most widely distributed in the world. Languages with small vowel inventories are frequently found among languages of the Americas. In Australia as well, small vowel inventories seem to be the norm. However, small vowel inventories rarely occur in the remaining areas of the world, i.e. Africa, the entire Eurasian mainland, New Guinea, and the Pacific islands. In Africa, between the equator and the Sahara, the predominant vowel inventory size is the large one. 
The concentration of languages with large vowel inventories is also detected for interior South East Asia, southern China, much of Europe and, albeit on a smaller scale, interior New Guinea as well. Hajek () surveys  languages in terms of the presence or absence of nasal vowels. There are more languages without nasal vowels ( or .%) than languages with nasal vowels ( or .%). As already pointed out in §., Hajek also notes that in languages with both oral and nasal vowels, the number of nasal vowels tends to be smaller than that of oral vowels. This disparity in number is attested in approximately % of his sample. In point of fact, there seem to be virtually no exceptions to the earlier universal claim (Ferguson ; 


Maddieson : –) that the number of nasal vowels is either the same as, or smaller than, that of oral vowels. 9.2.3 Consonant–vowel ratios One apparently popular view among linguists is that complexity in one part of the grammar of a language is likely to be compensated for by simplification in another (Maddieson : ).3 When applied to the consonant and vowel inventory sizes, what this view may entail is that a small or large consonant inventory may co-occur with a large or small vowel inventory, respectively. Maddieson (c; , ) sets out to test this entailed inverse correlation. To that end, Maddieson calculates a consonant-to-vowel quality ratio (C/VQ) for every language by dividing the number of consonants by the number of vowels. The resulting ratios range from a little over  to . For instance, Andoke (isolate: Columbia) has ten consonants and nine vowels (C/VQ = .), while Abkhaz has  consonants and only two vowels (C/VQ = ). Only ten languages have C/VQ ratios of twelve or higher. The more typical C/VQ ratios are closer to the lower end of the range, the mean being . and the median being .. Maddieson (c) notes that languages with C/VQ ratios of about three or four are particularly commonly found in West Africa, and quite commonly attested in New Guinea, island Asia, South East Asia, Central America, the eastern side of South America, and northern Eurasia. These areas are also where languages with C/VQ ratios lower than . are commonly found. Languages with C/VQ ratios of . or higher are predominantly attested in Australia, most of eastern and southern Africa, and the western side of the Americas. There are also smaller clusters of languages with C/VQ ratios above . in the Caucasus, southern Africa, and, most considerably, the north-west of North America. When the co-occurrence of consonant and vowel inventory sizes in  languages is examined, we have the data in Table ., as reproduced from Maddieson (c).

3

This view may be related ultimately to the strongly held humanistic position in linguistics that there are no primitive languages because all languages are at a similar level of complexity. Languages may be complex in some areas (e.g. morphology) but simple in other areas (e.g. syntax) to the effect that the differences in complexity even out.
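The C/VQ measure itself is plain division, so it is easy to reproduce for any language whose inventory sizes are known. A minimal sketch using the Andoke figures quoted above (ten consonants, nine vowel qualities):

```python
def cv_quality_ratio(consonants: int, vowel_qualities: int) -> float:
    """Consonant-to-vowel-quality ratio (C/VQ): consonants divided by vowel qualities."""
    if vowel_qualities < 1:
        raise ValueError("every language has at least one vowel quality")
    return consonants / vowel_qualities

# Andoke: ten consonants and nine vowel qualities (figures from the text).
print(round(cv_quality_ratio(10, 9), 2))  # → 1.11
```

A language like Abkhaz, pairing a large consonant inventory with only two vowel qualities, yields a far higher ratio by the same division.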



Table 9.6 Co-occurrence of vowel quality and consonant inventory sizes

                Consonant inventory size
  Vowel size    small   moderately small   average   moderately large   large
  small
  average
  large

The most telling aspect of Table . is that there are thirteen languages with small consonant and vowel inventories, and also thirteen languages with large consonant and vowel inventories. Clearly, there is no inverse correlation between the consonant and vowel inventory sizes. Moreover, there are thirteen languages that combine large consonant inventories with small vowel inventories, and twenty languages that combine large vowel inventories with small consonant inventories. To wit, there does not seem to be any evidence in support of the widely held view that complexity in one part of a language is compensated for by simplification in another, in so far as the distribution of consonants and vowels is concerned. In fact, if there is any correlation between consonants and vowels, we must mention the general tendency for the proportion of consonants in the entire sound inventory to increase as the overall number of sounds increases. Maddieson (: ) suggests that this may simply be the consequence of consonants having more potential dimensions of contrast than vowels do.

9.3 Syllabic typology

Sounds combine into larger structural units, one type of which will be surveyed in this section from a cross-linguistic perspective: syllable structure. The most common syllable structure is the sequence of one consonant and one vowel in that order—conventionally written as C(onsonant)V(owel)—as this syllable structure is said to be found in all languages;4 a small number of languages permit only this syllable structure, e.g. Hawaiian (Oceanic; Austronesian: Hawaii) and Mba (Ubangi; Niger-Congo: Democratic Republic of Congo) (Maddieson f). More frequently found, however, are languages that also allow the basic CV structure not to contain an initial consonant; in languages such as Fijian (Oceanic; Austronesian: Fiji), Igbo (Igboid; Niger-Congo: Nigeria), and Yareba (Yareban: Papua New Guinea), the basic syllable structure is (C)V, where C is enclosed within parentheses to indicate its optionality, although there are also some languages in which all syllables must begin with a consonant, e.g. Mon-Khmer languages (Maddieson : ). A few languages, e.g. Central Arrernte (Central Pama-Nyungan; Pama-Nyungan: Australia), have been claimed to have VC instead of CV as their basic syllable structure (e.g. Breen and Pensalfini ), but this claim is open to debate (Maddieson f). In Maddieson's (f) syllabic typology, languages that operate with (C)V are classified as having simple syllable structure.

A more complex syllable structure adds one consonant either to the onset position or to the coda position, that is, CCV or CVC. Maddieson (f) qualifies the occurrence of two consonants in CCV by saying that there may be a strict condition on what kinds of consonants are allowed in the onset position. In a large number of languages, the second consonant in the CCV structure tends to come from either the class of liquids or the class of glides (or approximants) (see Parker b on the choice between liquids and glides in the CCV structure in  languages from the perspective of sonority; see below). In Maddieson's (f) syllabic typology, languages that allow two consonants to appear in the onset position, with the condition that the second consonant be either a liquid or a glide, and/or allow one consonant to appear in the coda position are regarded as having moderately complex syllable structure. Languages which do not impose the aforementioned condition on the two consonants in the CCV structure, or which allow three or more consonants in the onset position and/or two or more consonants in the coda position, are said to have complex syllable structure. Maddieson's (f) sample for his syllabic typology has  languages, which are distributed according to the level of complexity in syllable structure, as in Table ..

4 Hyman (: ; originally Hyman ; but cf. Hyman ) claims that Gokana (Cross River; Niger-Congo: Nigeria) has no evidence of syllables in its segmental or prosodic phonology or morphology. Hyman (: ) mentions Bella Coola (Salishan: North America) as another language that has been claimed to lack syllables.
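Maddieson's three-way classification can be roughly approximated by matching schematic syllable templates. The function below is an illustrative sketch of ours, not Maddieson's procedure: it classifies strings such as 'CCVC' and deliberately ignores his further condition that the second onset consonant be a liquid or glide.

```python
import re

def syllable_complexity(template: str) -> str:
    """Classify a schematic syllable template (a string of 'C's and one 'V')."""
    # Simple: (C)V, i.e. at most one onset consonant and no coda.
    if re.fullmatch(r"C?V", template):
        return "simple"
    # Moderately complex: up to two onset consonants and at most one coda consonant.
    if re.fullmatch(r"C{0,2}VC?", template):
        return "moderately complex"
    # Everything else: three or more onset consonants and/or two or more coda consonants.
    return "complex"

print(syllable_complexity("CV"))      # → simple
print(syllable_complexity("CCVC"))    # → moderately complex
print(syllable_complexity("CCCVCC"))  # → complex
```

A language's overall classification is then determined by the most complex syllable template it permits.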



Table 9.7 Complexity of syllable structure

  Degree of complexity                     Frequency
  Simple syllable structure                 (.%)
  Moderately complex syllable structure     (.%)
  Complex syllable structure                (.%)
  Total:                                    (%)

Well over half of Maddieson's sampled languages have moderately complex syllable structure. While this type of syllable structure is widespread, its presence is particularly prominent in Africa, the more easterly part of Asia, and much of Australia. It is interesting to note that languages with simple syllable structure tend to be located near the equator, in Africa, New Guinea, and South America. Complex syllable structure tends to be found predominantly in languages spoken in the northern two-thirds of the northern hemisphere, namely northern Eurasia and northern North America. There is also a small concentration of languages with complex syllable structure in northern Australia.

While most languages require the nucleus of a syllable to be a vowel, a small number of languages allow consonants to be the nucleus of a syllable (Maddieson : ). For instance, Yoruba has syllabic nasals, and Czech permits liquids and, albeit rarely, also nasals to be syllabic. The most extreme example of non-vowels used as the nucleus of a syllable probably comes from Tashlhiyt (Berber; Afro-Asiatic: Morocco), in which even voiceless fricatives and plosives can be syllabic, e.g. /k.kst.ʧ.ʃtt/ 'remove it and eat it' (where the dot marks a syllable boundary).

Maddieson (, , f) also makes an attempt to ascertain whether there are any correlations between syllable complexity and other phonological properties, e.g. consonant inventory size, vowel quality types (i.e. distinct vowel types in terms of height, frontness/backness, and rounding), and vowel inventory size. First, Maddieson () fails to detect any systematic correlation between syllable complexity and either vowel quality inventory or total vowel inventory size. These results are based on data from  languages for syllable complexity and vowel quality values, and on data from  languages for syllable complexity and total vowel inventory (Maddieson : –).
However, Maddieson (, , f) notes that there is a significant, if 

OUP CORRECTED PROOF – FINAL, 27/11/2017, SPi

PHONOLOGICAL TYPOLOGY

not strong, correlation between syllable complexity and consonant inventory size (i.e. the correlation coefficient is . and the statistical significance level is p ).

() SOV > SVO > VSO > VOS > OVS > OSV (where ‘>’ means ‘more frequent than’)

Unlike previous researchers, however, Tomlin () applies the chi-square statistic to determine whether the differences between the frequencies of the six word orders are statistically significant. Thus, he recognizes no statistical significance in the difference between SOV and SVO, i.e. the difference is attributable to chance. The difference between VOS and OVS, in contrast, is significant at the . level, but since his sample contains only a very small number of these languages (i.e. twelve VOS languages and five OVS languages), and since there is uncertainty about the status of the reported OVS languages (see Polinskaya ), Tomlin dismisses the statistical significance in question. This gives rise to a rather different frequency hierarchy:

() SOV = SVO > VSO > VOS = OVS > OSV

Tomlin () puts forth three functional principles to account for the relative frequencies of the six basic word orders: the Theme First Principle (TFP), the Animated First Principle (AFP), and the Verb–Object Bonding

OUP CORRECTED PROOF – FINAL, 21/11/2017, SPi

BASIC WORD ORDER

Principle (VOB). The TFP is designed to capture the tendency for relatively more thematic information to precede less thematic information in clauses. Based on Keenan (), Tomlin () assumes that more thematic information correlates with S in basic sentences. The AFP subsumes two basic hierarchies, i.e. animacy and semantic roles. Thus, in simple basic transitive clauses, more animated NPs precede less animated NPs. Where animacy does not decide (e.g. both NPs are animate, or both inanimate), however, the semantic roles hierarchy—ranking Agent, Instrumental, Benefactive/Dative, and Patient in that descending order—takes precedence over the animacy hierarchy. Moreover, the most animated NP in a basic transitive clause is identified with the subject of that clause. Lastly, the VOB is built on the observation that there is a stronger degree of syntactic and semantic cohesiveness between O and V than between S and V (see Behaghel’s First Law). In support of the three functional principles, Tomlin () adduces a wide range of data, including cross-linguistic as well as psycholinguistic and textual evidence.

Tomlin () goes on to argue that the more of the three principles are realized in a given word order, the more frequent that word order is among the world’s languages. For instance, in SOV and SVO, the two most frequent word orders, all three principles are satisfied, i.e. S before O (in compliance with the TFP and AFP) and the contiguity of V and O (in obedience to the VOB); in OSV, in contrast, none of the principles is obeyed, i.e. O before S (in violation of the TFP and AFP) and the interruption of V and O by S (in opposition to the VOB). As the reader can easily verify, there is a perfect correlation between Tomlin’s functional explanation and the revised frequency hierarchy in ().
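Tomlin’s scoring logic can be sketched in a few lines. The encoding below is ours, not Tomlin’s: the TFP and AFP are each treated as satisfied when S precedes O, and the VOB as satisfied when V and O are adjacent.

```python
def score(order):
    """Number of Tomlin's principles (TFP, AFP, VOB) satisfied by a word order."""
    s, o, v = order.index('S'), order.index('O'), order.index('V')
    tfp = 1 if s < o else 0            # more thematic information (S) first
    afp = 1 if s < o else 0            # more animated NP (S) first
    vob = 1 if abs(v - o) == 1 else 0  # V and O contiguous
    return tfp + afp + vob

orders = ['SOV', 'SVO', 'VSO', 'VOS', 'OVS', 'OSV']
for w in orders:
    print(w, score(w))
# SOV 3, SVO 3, VSO 2, VOS 1, OVS 1, OSV 0 — matching the revised hierarchy
```

The scores reproduce the ranking SOV = SVO > VSO > VOS = OVS > OSV exactly.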
While Tomlin’s principles seem to explain well how the six word orders are distributed across the world’s languages, certain important issues demand attention. First, the statistically insignificant difference between SOV and SVO claimed by Tomlin is, as a matter of fact, not well supported by subsequent research based on better language samples. In particular, Dryer (: –) demonstrates a clear linguistic preference for SOV over SVO in the world’s languages, as in () (Afr(ica), Eura(sia), A(ustralia)-N(ew)G(uinea), N(orth)Am(erica), S(outh)Am(erica)).

()
        Afr   Eura   A-NG   NAm   SAm   Total
SOV     —     —      —      —     —     —
SVO     —     —      —      —     —     —


10.3 Early word order research

Note that the numbers in () represent not exemplifying languages but genera (see §. for discussion of Dryer’s sampling method). In (), SOV outnumbers SVO by five macroareas to nil, confirming that there is an unequivocal linguistic preference for SOV over SVO in the world. (Statistically speaking, if there were no linguistic preference, there would be one chance in thirty-two (i.e. ½ × ½ × ½ × ½ × ½ = 1/32) that all five macroareas would display a preference for SOV.) Moreover, in each of the five macroareas the number of genera containing SOV languages is greater than the number of genera containing SVO languages, although, admittedly, the difference in Africa is very small. Dryer () reasons that the lack of a statistical difference between SOV and SVO, as claimed by Tomlin (), is caused largely by the overrepresentation of SVO languages from Niger-Congo and from Austronesian. Needless to say, the linguistic preference for SOV over SVO plays havoc with the equal distribution of SOV and SVO claimed in (), and, consequently, with Tomlin’s explanation thereof.

Second, Tomlin () seems to treat the three principles as equivalent. For instance, he calculates the total score for SOV and VSO at 3 (i.e. realization of all three principles) and 2 (i.e. realization of the TFP and AFP but not the VOB), respectively. Thus, it does not matter whether V precedes or follows S. The relative position of S with respect to V has no bearing on the value assigned to the realization of the TFP or the AFP; it is only the relative position of S with respect to O that counts in Tomlin’s calculation. Though in this way one can account for the relative frequencies of the six word orders in comparison with one another, one cannot explain the substantial difference that exists between the frequencies of the S-initial word orders, i.e. SOV and SVO, and those of the rest, as famously identified in Greenberg’s (b) work (see () above).
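The parenthetical sign-test reasoning can be checked directly; the probability of one-half per macroarea is the chance-level assumption stated in the text.

```python
# Under the null hypothesis of no preference, each of the five macroareas
# independently favours SOV over SVO with probability 1/2, so the chance
# that all five favour SOV is (1/2) to the fifth power.
p_all_five_sov = 0.5 ** 5
print(p_all_five_sov)  # 0.03125, i.e. one chance in thirty-two
```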
What is being suggested here is that the realization of the TFP and the AFP must carry more weight in SOV and SVO—that is, when S precedes not only O but also V—than in VSO, wherein S precedes only O (Song ). Otherwise one cannot address the most palpable difference between the two S-initial word orders and the four non-S-initial word orders. In Table ., the aggregate of the percentages of the two S-initial word orders is indeed almost . times larger than the aggregate of the percentages of the four non-S-initial word orders, a substantial difference of .%.

Third, the TFP and the AFP are inherently semantically and/or pragmatically based. The first pertains largely to ‘the selective focus of


attention on information during discourse production and comprehension’ (Tomlin : ), whereas the second arises from animacy and semantic roles (Tomlin : –). These two principles, in turn, are reflected grammatically by more thematic and/or more animated NPs preceding less thematic and/or less animated NPs, that is, S before O. In contrast, the VOB, as formulated by Tomlin (: ), is largely a grammatical principle, representing the formal bondedness of V and O in opposition to S. This particular principle, in turn, may well be motivated by one or more independent semantic and/or pragmatic principles, whatever they may be. Indeed, Tomlin (: –) refers to Keenan (), who explores the close relationship between the verb and the object from a much wider perspective. Keenan () identifies a number of semantic–pragmatic principles that may actually underlie the VOB, e.g. existence dependency, sense dependency, and specific selectional restrictions. For instance, the referent of the object—also that of the intransitive subject, for that matter, but not that of the transitive subject—often comes into existence as a direct result of the activity expressed by the predicate (or verb); in The woman built a shed in the garden, the shed, unlike the woman, does not exist independently of the activity of building. Hence the existence dependency relation between the verb and the object.5 The VOB itself may be only a grammatical reflection of multiple semantic–pragmatic principles. To wit, there is a conceptual disparity between the TFP and the AFP on the one hand and the VOB on the other. For the sake of argument, suppose the VOB is eventually replaced by two semantic–pragmatic principles, say, X and Y. On this interpretation, the ‘scoreboard’ of the six word orders will have to be changed.
5 Sense dependency can be illustrated by the tendency for the sense of a predicate to vary with the semantic nature of the referent of the object NP. For example, the exact meaning of the verb cut in the following examples depends on the meaning of the object NP (Keenan : ).
(i) John cut his arm/his foot.
(ii) John cut his nails/his hair/the lawn.
(iii) John cut the cake/the roast.
(iv) John cut a path (through the field)/a tunnel (through the mountain).
(v) John cuts his whisky with water/his marijuana with tea.
(vi) The company cut production quotas/prices.
(vii) The rock band, Mahth, cut their first single in .
Predicates may also impose specific selectional restrictions on the object NP (and on the intransitive subject) but only weak and general restrictions such as humanness, animacy, or concreteness on the transitive subject NP.

For instance, the word orders that




satisfy the VOB should each be given a value of two, not one. The four principles, i.e. the TFP, AFP, X, and Y, would now predict that there will be no (statistical) distributional difference between VSO, VOS, and OVS in the world’s languages. This prediction, however, is not borne out empirically. We can turn this argument around by claiming that the TFP and the AFP are subsumed by a single grammatical principle, e.g. Keenan’s () Subjects Front Principle; this gives rise to the same kind of problem.
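The hypothetical revised scoreboard can be computed the same way. X and Y here are the placeholder principles entertained above, assumed (for the sake of the argument) to be satisfied exactly when V and O are adjacent; nothing in this encoding is Tomlin’s own.

```python
def score4(order):
    """Score under the hypothetical four principles TFP, AFP, X, and Y."""
    s, o, v = order.index('S'), order.index('O'), order.index('V')
    tfp_afp = 2 if s < o else 0          # TFP and AFP: S precedes O
    x_y = 2 if abs(v - o) == 1 else 0    # X and Y: V and O contiguous
    return tfp_afp + x_y

for w in ['VSO', 'VOS', 'OVS']:
    print(w, score4(w))
# VSO 2, VOS 2, OVS 2 — the revised scoreboard wrongly predicts a three-way tie
```

As the text notes, this predicted tie among VSO, VOS, and OVS is not borne out empirically.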

10.4 The OV–VO typology and the Branching Direction Theory

Dryer () is undoubtedly the most comprehensive empirical study of word order that has ever appeared (see also his updated work in Dryer and Haspelmath ). Its comprehensiveness lies not only in the range of word order patterns and correlations examined but also in the size of its language sample. In this landmark work, the correlation between the order of twenty-four pairs of elements and the order of V and O is tested on the basis of a database of  languages, with most of the data derived from a -language subset of that database (Dryer : ). Dryer () determines the distribution of a given property in the six—or five in Dryer ()—large linguistic areas by counting genera, not individual languages, with a view to sifting linguistic preferences from the effects of genetic relatedness and areal diffusion (see §. for detailed discussion of Dryer’s sampling method).6

10.4.1 Back to the OV–VO typology

Dryer’s (, ) work on basic word order is best characterized as a return to the OV–VO typology, which was promoted in Lehmann (, b, c) and Vennemann () but subsequently rejected by Hawkins () in favour of the adposition as the reliable type

6 Unlike Hawkins (), Dryer () is not concerned with finding exceptionless language universals but rather with statistical universals or linguistic preferences. In a way, this is dictated by his decision to count genera rather than individual languages. Exceptionless universals must by definition be absolute, admitting not a single counterexample. But by counting genera, not languages, Dryer () has no way of telling whether a given universal is exceptionless, since genera are genetic groups of languages.




predictor of word order properties. Dryer (: –) takes the order of V and O to be the basic predictor of other word order properties, hence the OV–VO typology. While Greenberg’s (b) tripartite typology of VSO, SVO, and SOV can be interpreted as V-based—i.e. VSO = V-initial, SVO = V-medial, and SOV = V-final—it is clear from his discussion that SVO plays no significant role in predicting other word order properties. When all word order co-occurrences in Greenberg’s (b) sample are examined with respect to verb position (Hawkins : ), SVO can only be seen as a kind of mixed type between SOV and VSO, albeit inclining slightly towards the latter (Hawkins : ). For example, consider the distribution of adpositions, which is perhaps most indicative of the alleged mixed status of SVO in Greenberg (b: ).

Table 10.4 Distribution of PrN/NPo in the three most common word orders

        VSO   SVO   SOV
PrN     —     —     —
NPo     —     —     —
In Table ., SVO has both PrN and NPo exponents, whereas it is decisively PrN or NPo for VSO and SOV, respectively. This typologically ambivalent position of SVO leads Hawkins (: ; also –) to remark that ‘nothing correlates with SVO in a unique and principled way’. Comrie (: ) echoes Hawkins’s conclusion by averring that ‘knowing that a language is SVO, we can predict virtually nothing else’. Dryer (), however, demonstrates that this widely believed ambivalence as a basic word order type of SVO is overstated, with the validity of the OV–VO typology underestimated. He argues that both Lehmann and Vennemann were essentially correct in advancing the OV–VO typology. His evidence does indeed show that in general, the word order properties of SVO languages differ little from those of the other two VO-type languages, i.e. VSO and VOS. Thus, he (: ) comes to the conclusion in favour of the OV–VO typology: with respect to a large number of word order properties, there is ‘a basic split between VO and OV languages’. Dryer’s evidence in support of the OV–VO split comes basically from two sources: (i) it is as difficult to 


come up with exceptionless universals about V-initial languages as about SVO languages; and (ii) where there is a tendency for V-initial languages to possess a property, SVO languages also possess that property.7 Since there are a large number of word order properties that are characteristic of both V-initial and SVO languages, the basic distinction between OV and VO languages is fully justifiable, providing ample support for the OV–VO typology itself.

10.4.2 Inadequacy of Head-Dependent Theory

Dryer () has two primary aims: (i) to identify the pairs of elements whose order correlates with that of V and O; and (ii) to explain why such correlations exist. In the process, he also attempts to demonstrate the inadequacy of Head-Dependent Theory (or HDT), which captures the very—if not the most—popular view that there is a linguistic preference for dependents to be placed consistently on one side of heads, that is, either before or after heads. Vennemann () and Hawkins (), as has already been discussed, draw heavily upon HDT, their differences in technical detail notwithstanding. Dryer () argues that although HDT adequately accounts for six pairs of elements, including the order of N and G and that of V and manner adverb, there are certain pairs of elements that, contrary to what HDT predicts, do not correlate at all with the order of V and O, and that, for some other pairs of elements, what HDT predicts is completely the opposite of what is attested in his data. He also points out that different theories often disagree on which element is to be taken as the head or the dependent. In what follows, an example of each of these three problematic cases will be provided.

In common with Dryer (: ), it is assumed here that if, in a given pair of elements X and Y, X tends to precede Y (statistically) significantly more frequently in VO languages than in OV languages, then ⟨X, Y⟩ is a correlation pair, with X being a verb patterner and Y an object patterner with respect to that correlation pair.

7 Dryer (: –) does not make a distinction between the two types of V-initial languages, i.e. VSO and VOS, because ‘there is no evidence that VSO languages behave differently from other V-initial languages, either VOS languages or V-initial languages which are neither clearly VSO nor clearly VOS’ (Dryer : ).




First, Dryer () demonstrates convincingly that the order of N and A does not correlate with that of V and O—contrary to the widely held view (e.g. Lehmann , b, c; Vennemann ) that OV and VO languages tend to be AN and NA, respectively. This non-correlation is confirmed again by Dryer (: ) on the basis of his large sample, as reproduced in Table .. The average proportion of genera for AN is higher among VO languages than among OV languages. In fact, the putative correlation between OV and AN turns out to be largely the result of dominance of that correlation in Eurasia. This finding completely contradicts what HDT predicts about the order of A and N, and that of V and O. Table 10.5 Distribution of AN/NA in OV and VO languages Afr

Eura

SEAsia&Oc

A-NG

NAm

SAm

Total

OV&AN



⃞











OV&NA

⃞



⃞

⃞

⃞

⃞



VO&AN



⃞



⃞

⃞





VO&NA

⃞



⃞





⃞



The data exemplifying the four co-occurrences in Table . are presented in ()–().

() Korean (isolate: Korea) [SOV&AN]
a. kiho-ka hoanca-lul omki-ess-ta
   Keeho-NOM patient-ACC move-PST-IND
   ‘Keeho moved the patient.’
b. cengcikhan salam-tul
   honest person-PL
   ‘honest people’

() Slave (Athapaskan; Na-Dene: Canada) [SOV&NA]
a. t’eere lį ráreyįht’u
   girl dog .hit
   ‘The girl hit the dog.’
b. tlį nechá
   dog big
   ‘big dog’

() Rukai (Austronesian: Taiwan) [VOS&AN]
a. wauŋul sa acilay kay maruḏaŋ
   drank INDEF.ACC water this.NOM old.man
   ‘This old man drank water.’
b. kayvay maḏaw daan
   this big house
   ‘this big house’

() Fijian (Oceanic; Austronesian: Fiji) [VSO/VOS&NA]
a. e rai-ca a gone a qase
   SG see-TR ART child ART old.person
   ‘The old person saw the child.’ or ‘The child saw the old person.’
b. a ‘olii loa
   ART dog black
   ‘black dog’

Moreover, under HDT—and indeed under the standard view too—articles are taken to be a type of modifier (i.e. dependent) of N, which, in turn, is assumed to be a head. The order of N and Art(icle) is thus predicted to correlate with that of V and O. This, however, is not the case, as can clearly be seen from Table . (Dryer : ). While the ArtN order is more common among OV languages in only two macroareas (Eurasia and South America), it is far more common among VO languages in as many as five macroareas—this, incidentally, suggests that the NArt order may be an areal feature of Africa.

Table 10.6 Distribution of NArt/ArtN in OV and VO languages

                   Afr   Eura   SEAsia&Oc   A-NG   NAm   SAm   Total
OV&NArt            —     —      —           —      —     —     —
OV&ArtN            —     —      —           —      —     —     —
Proportion NArt    .     .      .           .      .     .     Avg. = .
VO&NArt            —     —      —           —      —     —     —
VO&ArtN            —     —      —           —      —     —     —
Proportion NArt    .     .      .           .      .     .     Avg. = .
The predominance of the ArtN order among VO languages (as already exemplified by Fijian in ()) is further substantiated by the average proportion-of-genera figure for NArt being higher among OV languages, as


exemplified by (), than among VO languages in all six macroareas although this is trivial in Africa (i.e. . vs .). ()

Kobon (Madang; Trans-New Guinea: Papua New Guinea) [SOV&NArt] ñi ap wañib i ud ar-nɨm Dusin laŋ boy ART string.bag this take go-should.SG Dusin above ‘A boy should take this string bag up to Dusin.’

In the correlation pair of N and Art, therefore, Art is a verb patterner, whereas the N with which it combines is an object patterner (see also Dryer ). This is the converse of what is predicted by HDT.8

The correlation pair of auxiliary verb and content (or main) verb illustrates that HDT may make correct or incorrect predictions depending on which element of the pair is taken to be the head or the dependent. In traditional and early Generative Grammar (e.g. Chomsky ; Lightfoot ), the content verb is the head, with the auxiliary verb being the dependent. In the alternative view (Vennemann ; Pullum and Wilson ; Gazdar et al. ; Schachter ), in contrast, the auxiliary verb is the head, with the content verb being the dependent. The statistical data provided by Dryer (: ), as in Table ., indicate unquestionably that the auxiliary is a verb patterner (or head), and the content verb an object patterner (or dependent).

Table 10.7 Distribution of VAux/AuxV in OV and VO languages

           Afr   Eura   SEAsia&Oc   A-NG   NAm   SAm   Total
OV&VAux    —     —      —           —      —     —     —
OV&AuxV    —     —      —           —      —     —     —
VO&VAux    —     —      —           —      —     —     —
VO&AuxV    —     —      —           —      —     —     —
For instance, Slave, an (S)OV language, places V just before Aux, and Turkana, a V(S)O language, has these two elements in the opposite order, i.e. AuxV.

8 Under the so-called DP Hypothesis in Generative Grammar (e.g. Abney ), the determiner is taken to be the head of the noun phrase (see Dryer : ).



() Slave (Athapaskan; Na-Dene: Canada) [SOV&VAux]
bets’é wohse wolé
.to SG.shout.OPT be.OPT
‘I will shout to him/her.’

() Turkana (Nilotic; Nilo-Saharan: Kenya and Uganda) [VSO&AuxV]
kì-pon-ì atɔ-mat-à
PL-go-ASP PL-drink-PL
‘We shall drink.’

Note that HDT makes the correct prediction about this correlation pair only under the assumption that the auxiliary verb is the head, an assumption which is, to say the least, highly theory-internal.

10.4.3 Branching Direction Theory (BDT)

There is a sufficient amount of evidence, as discussed in §.., to reject HDT as an explanation of the word order correlations. What can be an alternative explanation for the various correlations (and non-correlations) documented in Dryer’s work? Dryer () first observes that although they are both adjunct dependents of N, Rel is an object patterner whereas A is not. He suggests that the contrast between these two nominal modifiers—as well as between intensifiers and standards of comparison (both dependents of the adjective), and between negative particles and adpositional phrases (both dependents of the verb)—is attributable to the fact that relative clauses (standards of comparison, and adpositional phrases) are phrasal, whereas adjectives (intensifiers, and negative particles) are non-phrasal. This leads him to propose what he calls Branching Direction Theory (BDT), which is based on the consistent ordering of branching (or phrasal) and non-branching (or non-phrasal) categories.

() Branching Direction Theory (BDT)
Verb patterners are non-phrasal (non-branching, lexical) categories and object patterners are phrasal (branching) categories. That is, a pair of elements X and Y will employ the order XY significantly more often among VO languages than among OV languages if and only if X is a non-phrasal category and Y is a phrasal category.
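BDT’s prediction rule can be sketched as a small function. The category lists and the function name below are ours, chosen to match the pairs discussed in this section; they are illustrative, not Dryer’s own formalization.

```python
# Illustrative category classification (our assumption, following the text).
NON_PHRASAL = {'V', 'Art', 'Aux', 'A', 'N'}
PHRASAL = {'NP', 'VP', "N'", 'Rel', 'PP'}

def bdt_predict(lang_type, x, y):
    """Predicted order of (x, y) under BDT, or None if BDT makes no prediction.

    lang_type is 'VO' or 'OV'; x is the candidate verb patterner and
    y the candidate object patterner.
    """
    if x in NON_PHRASAL and y in PHRASAL:
        return (x, y) if lang_type == 'VO' else (y, x)
    return None  # e.g. A and N: both non-phrasal, so no correlation predicted

print(bdt_predict('VO', 'Art', "N'"))  # ('Art', "N'") — ArtN in VO languages
print(bdt_predict('OV', 'Aux', 'VP'))  # ('VP', 'Aux') — VAux in OV languages
print(bdt_predict('VO', 'A', 'N'))     # None — A/N order does not correlate
```

The third call reproduces BDT’s non-prediction for the A/N pair, which is exactly where HDT went wrong.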
What BDT predicts is that languages tend either towards right-branching, in which phrasal categories follow non-phrasal categories, or towards left-branching, in which phrasal categories precede non-phrasal


categories. Thus, the fundamental distinction between VO and OV languages boils down to their opposite branching directions: right-branching and left-branching, respectively. As a brief illustration of how this works, compare English, a VO language, and Korean, an OV language. Note that, for the sake of convenience, the constituent structures are shown as labelled bracketings, with each bracketed phrase representing a branching or phrasal constituent. For example, the NP the girl in (a) ‘branches’ into two constituents, Art the and N (or, more accurately, N´, as will presently be shown) girl.

() English
a. [VP [V kissed] [NP the girl]]
b. [NP [N friends] [G of Mary]]
c. [NP [N girls] [Rel who are singing in the room]]


()

Korean a.

VP

NP

V

ku cha-lul the car-ACC

sa-ss-ta buy-PST-IND

NP

b. G

N

kiho-uy Keeho-GEN c.

cikcang workplace NP

Rel

cam-ul sleep-ACC

N

ca-nun sleep-REL

ai child

In English, all the phrasal categories, i.e. NP, G, and Rel (represented by triangles), follow the non-phrasal categories, i.e. V, N, and N (represented by straight vertical lines) in (a), (b), and (c), respectively. In Korean, in contrast, the ordering of the phrasal and non-phrasal categories is the converse, with the phrasal categories preceding the non-phrasal categories, as in (). 
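The consistency of branching direction in the two languages can be checked mechanically. The encoding below is ours: each sister pair from examples () is reduced to the linear positions of its non-phrasal and phrasal members.

```python
def direction(pairs):
    """Classify branching direction from (non_phrasal_pos, phrasal_pos) pairs."""
    if all(np_pos < p_pos for np_pos, p_pos in pairs):
        return 'right-branching'  # non-phrasal before phrasal, as in English
    if all(p_pos < np_pos for np_pos, p_pos in pairs):
        return 'left-branching'   # phrasal before non-phrasal, as in Korean
    return 'mixed'

english = [(0, 1), (0, 1), (0, 1)]  # V-NP, N-G, N-Rel
korean = [(1, 0), (1, 0), (1, 0)]   # NP-V, G-N, Rel-N
print(direction(english), direction(korean))  # right-branching left-branching
```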


Returning to the correlation pairs problematic for HDT, the pair of Art and N and that of Aux and V can now be accounted for by BDT. In VO languages there is a preference for ArtN, whereas in OV languages there is a tendency towards NArt (see Table .); Art is a verb patterner, and the N with which it combines an object patterner. To put it in terms of BDT, the ArtN order is predominant in VO languages because in these languages non-phrasal categories (e.g. V, Art) precede phrasal categories (e.g. O, N), whereas in OV languages N precedes Art because the preferred branching direction of these languages places phrasal categories before non-phrasal categories. A similar comment can be made with respect to the correlation pair of Aux and V. Being ‘a [non-phrasal] verb that is subcategorized to take a VP complement [that is, phrasal]’ (Dryer : ), Aux is predicted to precede V (or VP) in VO languages, and the opposite ordering is predicted for OV languages. Indeed, these predictions are borne out by the data in Table ..

The reason why the order of A and N is not a correlation pair with respect to the order of V and O is, according to Dryer (: –), that A and the N with which it combines are both non-phrasal categories. For pairs like this, BDT makes no predictions. The non-phrasal status of A, however, may strike one as questionable, especially because A can be modified by intensifiers or degree words such as very and extremely, clearly forming adjective phrases (i.e. phrasal and branching). Moreover, if N, when used in conjunction with A, is non-phrasal (e.g. very tall girls), why is it that N is regarded as phrasal when it combines with Art (e.g. the girls)? Dryer explains that although they may be phrasal, modifying adjectives are not fully recursive in that they rarely involve other major phrasal categories such as NPs, PPs, or subordinate clauses.
The use of intensifiers is also very limited or restricted, with only a handful of them taking part in the formation of adjective phrases such as very tall. In other words, recursiveness must also be taken into account in distinguishing verb patterners from object patterners: verb patterners are either non-phrasal categories or phrasal categories that are not fully recursive (e.g. A), whereas object patterners are fully recursive phrasal categories. Moreover, Art does not simply combine with N to form an NP; rather, it joins with a constituent larger than N, namely N´ (read: N-bar), potentially consisting of A and N. Put differently, Art forms an NP with the fully recursive phrasal


or branching category of N´ (cf. tall girls, tall girls [with long hair] (with a PP), tall girls [who are talking at the table] (with a subordinate clause)). Thus, BDT assumes the constituent structure of NP in (), not that in ().9

() [NP [Art the] [N′ [A tall] [N girls]]]

() [NP [Art the] [A tall] [N girls]]
As can clearly be seen in (), Art, which is a non-phrasal category, combines with the fully recursive phrasal or branching category N´, whereas A, which is not a fully recursive phrasal category, combines only with the non-phrasal or non-branching category N (i.e. girls). This explains why Art and N form a correlation pair, whereas A and N do not.

9 The structure in () is what is assumed in pre-generative and earlier generative approaches, whereas that in () is the one adopted in later generative approaches.

10.4.4 Further thoughts on BDT

There are some issues that remain unresolved for BDT. First of all, BDT makes limited predictions about the ordering of categories




which are all either phrasal or non-phrasal. The ordering of multiple phrasal categories within the same syntactic domain is a case in point. Given V (non-phrasal), NP (phrasal), and PrN/NPo (phrasal), BDT predicts the position of NP or PrN/NPo relative to V, but that is as far as it can go. It cannot make further predictions about the ordering of NP and PrN/NPo because they are both phrasal categories. Which option will VO languages opt for, [V-NP-PrN] or [V-PrN-NP]? And which will OV languages prefer, [NPo-NP-V] or [NP-NPo-V]? A similar question can be raised about the ordering of multiple nominal modifiers (i.e. Dem, Num, Art, A, G, and Rel) or verbal modifiers (i.e. Adv(erbial) P(hrases) of Time, Place, and Manner).

Nor does BDT seem to be in a good position to account for language-internal variation. For instance, both [V-AdvP(Place)-AdvP(Time)] and [V-AdvP(Time)-AdvP(Place)] orders may be attested in the same language. In cases such as this, what motivates the alternation between the two options? BDT does not seem able to say much about such language-internal alternation either.

Moreover, the bifurcation into left-branching and right-branching does not make it easy to explain in a principled manner word order correlations in which VO languages display a clear preference for X whereas OV languages are ambivalent between X and its competitor Y, i.e. a typological asymmetry (see Chapter ). A case in point is the correlation between VO/OV and NRel/RelN (e.g. ()). OV languages are ‘content’ with either RelN or NRel, as opposed to VO languages, which have a very clear predilection for NRel. Dryer (: ) is at pains to argue for the correlation between OV and RelN when he says ‘we do still have a correlation in the sense that RelN order is more common among OV languages than it is among VO languages, and conversely for NRel order’.
Nonetheless the question remains: why do OV languages ‘tolerate’ NRel order when VO languages do not ‘tolerate’ RelN order at all? These issues will be deferred to §., where processing-based theories of word order are discussed.

Lastly, it needs to be mentioned that Dryer () himself has since abandoned BDT, as he now prefers flat constituent structure to hierarchical constituent structure. Without hierarchical constituent structure, BDT is rendered nearly useless (for further discussion, see Song : –).


10.5 Word order variation and processing efficiency

Hawkins () pursues a single, simple processing principle, which he refers to as the Principle of Early Immediate Constituents (PEIC), in order to account for cross-linguistic variation in word order. In this processing-based theory, word order correlations (and word orders in general) are claimed to reflect the way languages respond to the demands of rapid and efficient processing in real time. All other things being equal, a single global explanation that accounts for multiple phenomena should ideally be preferred to a multiplicity of explanations that deal with the same set of phenomena. In this respect, the PEIC is claimed to be superior to a set of principles such as those proposed in Hawkins's () own earlier work.

10.5.1 Basic assumptions of EIC Theory

The PEIC is built upon the basic assumption that words or constituents occur in the orders that they do so that their internal syntactic structures (i.e. immediate constituents or ICs) can be recognized (and presumably also produced) in language performance as rapidly and efficiently as possible. This means that different permutations of constituents may give rise to different levels of structural complexity, which, in turn, have a direct bearing on the rapidity with which recognition of ICs is carried out in real time. Basic word order is then looked upon as the conventionalization of the most optimal order in performance, that is, the ordering that maximizes efficiency as well as speed in processing.

There are a few additional assumptions that EIC Theory makes in explaining basic word orders and their correlations. These assumptions, including the PEIC itself, can easily be explicated by examining a set of English sentences which contain the verb particle up.

()

a. Jessica rang up the boy.
b. Jessica rang the boy up.
c. Jessica rang the boy in the class up.
d. Jessica rang the boy whom she met in the class up.


BASIC WORD ORDER

Leaving the (invariable) subject NP aside, the Verb Phrase (VP) in () is a constituent which, in turn, is made up of three ICs, i.e. V, (object) NP, and verb particle (Part). It is well known from the psycholinguistic literature (e.g. Hunter and Prideaux ) that the further the particle is moved from the verb, the less acceptable the sentence becomes, and the less frequent its occurrence. Hawkins explains this observation by arguing that the sentences in () present different degrees of structural complexity. For instance, (d) is more difficult to process than (a) because the former has a higher degree of structural complexity than the latter. To demonstrate this, he invokes the concept of a Constituent Recognition Domain (CRD). This structural domain is in effect invoked to quantify the minimum number of terminal and non-terminal nodes that must be taken into account for purposes of constructing the syntactic structure of a given constituent. Also crucial in this context is Hawkins’s notion of mother-node-constructing categories (MNCCs). MNCCs are those categories that uniquely determine the mother node of a constituent. For instance, V is the MNCC of VP; once V rang in () is parsed, it immediately identifies VP as the mother node of V. The MNCC of NP, on the other hand, is either Det or N, with either of these two included in the CRD of NP. For example, (object) NP in () can be uniquely determined by Det alone since processing is carried out from left to right—in N-initial (or N-Det) languages, N will function as the MNCC of NP. 
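The idea that certain categories construct their mother node as soon as they are parsed can be sketched in a few lines of code. The table below is a toy MNCC mapping for an English-like (Det-initial) language; the category labels follow the chapter, but the mapping itself is only an illustrative assumption, not Hawkins's formal system.

```python
# Toy MNCC table: which category uniquely constructs which mother node.
# In an N-initial (N-Det) language, N rather than Det would construct NP.
MNCC = {"V": "VP", "Det": "NP", "N": "NP", "P": "PP"}

def mothers_online(categories):
    """Scan categories left to right and record each mother node at the
    point where its MNCC is first recognized."""
    built = []
    for cat in categories:
        mother = MNCC.get(cat)
        if mother is not None and mother not in built:
            built.append(mother)
    return built

# 'rang the boy': VP is constructed at V, and NP already at Det,
# before N has even been parsed.
print(mothers_online(["V", "Det", "N"]))  # ['VP', 'NP']
```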
The concept of MNCCs is very important for Hawkins’s processing theory of word order because the PEIC assumes that ‘[a] mother node must be constructed when a syntactic category is recognized that uniquely determines the higher node [so that o]ther ICs [can] be attached as rapidly as possible to this higher node, thereby making parsing decisions about sisterhood and groupings proceed as fast and efficiently as possible’ (Hawkins : ). Thus, not all terminal and non-terminal nodes in a constituent need to be parsed in order to arrive at the construction of the overall syntactic structure of that constituent. Once all ICs are recognized, the constructing of the overall syntactic structure comes to completion, as it were, regardless of how many words may still remain to be processed in the last IC. With these background assumptions in mind, consider (a) and (c) again. These sentences have the following constituent structures in (a) and (b) respectively. 


()

a.

S

NP

VP

N

V

Part

Jessica

rang

up

b.

NP Det

N

the

boy

S

NP N Jessica

VP V

NP

Part

rang Det N

up

PP NP

P the boy in Det the

N class

In (a), the VP CRD extends from the first IC (i.e. V), whose terminal rang constructs the mother node VP, to the MNCC (i.e. Det) of the last IC, NP; in (b), the CRD stretches from the first IC (i.e. V) all the way to up, dominated by the last IC (i.e. Part).

In order to measure syntactic complexity, Hawkins () proposes a very simple metric based on the IC-to-non-IC ratio, which is derived by dividing the number of ICs in a CRD by the total number of non-ICs in that CRD. This ratio is converted into a percentage. The ratio for a whole sentence is then calculated by averaging out the percentages of all CRDs contained in that sentence. In (), the subject NP can be left out of consideration for the sake of convenience, since it is invariable throughout. As a shorthand for the IC-to-non-IC ratio, the number of ICs in a CRD is often divided by the total number of words in that CRD; this is referred to as an IC-to-word ratio.10 The higher the IC-to-non-IC (or -word) ratio for a CRD, the more rapid and efficient the processing of a mother node and its ICs is claimed to be. Put differently, processing efficiency is optimized if and when all ICs are recognized on the basis of a minimum number of non-ICs (or words).

In (a), there are three ICs in the VP CRD, namely V, NP, and Part, and four non-ICs in that CRD, namely rang, up, Det, and the. This gives rise to / or %. In (b), the ratio goes down to / or .% (fourteen non-ICs for the same three ICs). The difference in these EIC ratios is then taken to be responsible for the relative difference between (a) and (c) in terms of processing ease or efficiency. The IC-to-word ratios for (a) and (b) depict the same situation: / or % for (a), and / or .% for (b). Note that it is not the absolute percentage but the relative ranking of IC-to-non-IC (or -word) ratios that matters for the PEIC.

Note that the sentence in (b) has a slightly worse IC-to-non-IC (or -word) ratio than (a) because the position of the object NP the boy increases the size of the VP CRD: an IC-to-non-IC ratio of / or %, or an IC-to-word ratio of / or %. The IC-to-non-IC (or -word) ratio for (d) is the worst, because the intervening object NP contains a relative clause, thereby contributing even further to the distance between the first IC and the last IC of VP. As the reader can work out, the IC-to-non-IC (or -word) ratio for (c) falls between those for (b) and (d), because the size of the intervening object NP in (c) is between those in (b) and (d).
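Hawkins's IC-to-word shorthand is easy to compute mechanically. The sketch below is a minimal illustration rather than Hawkins's own formulation: each VP is represented as an ordered list of ICs, each IC with its words and the position of its MNCC, and the CRD is taken to run from the MNCC of the first IC to the MNCC of the last.

```python
def ic_to_word_ratio(ics):
    """ics: ordered (label, words, mncc_pos) triples for the ICs of one
    phrase, where mncc_pos is the index, within words, of that IC's MNCC.
    The CRD spans from the MNCC of the first IC to the MNCC of the last."""
    crd = len(ics[0][1]) - ics[0][2]            # first IC, from its MNCC on
    crd += sum(len(w) for _, w, _ in ics[1:-1])  # intervening ICs in full
    crd += ics[-1][2] + 1                        # last IC, up to its MNCC
    return len(ics) / crd

# VPs of sentences (a)-(c) above; Det is the MNCC of each English NP.
vp_a = [("V", ["rang"], 0), ("Part", ["up"], 0),
        ("NP", ["the", "boy"], 0)]
vp_b = [("V", ["rang"], 0), ("NP", ["the", "boy"], 0),
        ("Part", ["up"], 0)]
vp_c = [("V", ["rang"], 0),
        ("NP", ["the", "boy", "in", "the", "class"], 0),
        ("Part", ["up"], 0)]

print(ic_to_word_ratio(vp_a))             # 1.0  (3 ICs / 3 words: rang, up, the)
print(ic_to_word_ratio(vp_b))             # 0.75 (3 ICs / 4 words)
print(round(ic_to_word_ratio(vp_c), 3))   # 0.429 (3 ICs / 7 words)
```

The ranking of the three ratios, not their absolute values, is what the PEIC trades on.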
However, the processing efficiency of (c), for instance, can be improved dramatically if the particle up is shifted immediately to the right of the verb or to the left of the object NP (i.e. so-called Heavy NP Shift), as in: ()

Jessica rang up the boy in the class.

10 One practical advantage of using the IC-to-word ratio is that one does not have to worry about low-level internal constituent structure, which may vary from one theoretical framework to another (Hawkins : –).




The reason for this positive change in processing efficiency is mainly the fact that the movement of the particle reduces the size of the VP CRD to a considerable extent. This is again illustrated in terms of the constituent structure in (). ()

[S [NP [N Jessica]] [VP [V rang] [Part up] [NP [Det the] [N boy] [PP [P in] [NP [Det the] [N class]]]]]]

As can be seen in (), the CRD starts off from V and ends on the, thereby resulting in an improved IC-to-non-IC ratio of / or %, or an IC-to-word ratio of / or %. These ratios are, in fact, identical to those for (a), which has the most optimal ratio. From this kind of observation, Hawkins (: ) draws the following conclusion, under the name of the PEIC: () Principle of Early Immediate Constituents (PEIC) The human parser prefers linear orders that maximize the ICto-non-IC ratios of constituent recognition domains (CRDs). It also follows that basic word order reflects the most optimal EIC ratio in a conventionalized manner (Hawkins : –, –). Those linear orders that give rise to more rapid and more efficient structural recognition in processing are (more likely to be) grammaticalized as basic word orders across languages. Moreover, word order property X may co-occur with word order property Y, not Z, because of the processing or performance motivation for optimizing EIC ratios across these word order properties. Thus, the PEIC is claimed to 

OUP CORRECTED PROOF – FINAL, 21/11/2017, SPi

BASIC WORD ORDER

underlie not only basic word order patterns but also their correlations. For instance, as mentioned on more than one occasion (e.g. §. and §.), there is a very strong correlation between OV (e.g. V-final) and postpositions, and that between VO (e.g. SVO and V-initial) and prepositions. Dryer (: ) provides evidence in support of this correlation, as in Table .. Table 10.8 Distribution of NPo/PrN in V-final, SVO, and V-initial languages Afr

Eura

V-final&NPo

⃞

⃞

V-final&PrN





SVO&NPo



SVO&PrN

SEAsia&Oc

A-NG

NAm

SAm

Total

⃞

⃞

⃞

⃞

























⃞

⃞

⃞

⃞

⃞





V-initial&NPo















V-initial&PrN

⃞

⃞

⃞

⃞

⃞

⃞



The data exemplifying the dominant correlations in question are presented in ()–(). ()

() Lezgian (Lezgic; Nakh-Daghestanian: Azerbaijan and Russia) [V-final&NPo]
a. Alfija-di maq̴ala kx̂e-na
   Alfija-ERG article write-AOR
   'Alfija wrote an article.'
b. duxtur-rin patariw
   doctor-GEN.PL to
   'to doctors'

()

Thai (Kam-Tai; Tai-Kadai: Thailand) [SVO&PrN]
a. khon níi kàt mǎa tua nán
   man this bite dog CLF that
   'This man bit that dog.'
b. kàp chaawbâan
   with villagers
   'with villagers'


() Welsh (Celtic; Indo-European: UK) [V-initial&PrN]
a. Lladdodd draig ddyn
   killed dragon man
   'A dragon killed a man.'
b. yn y cōr
   in the choir
   'in the choir'

Dryer (: –) amplifies these correlations by observing that the position of the adpositional phrase (AdpP) coincides with that of O: AdpP before V in OV languages (i.e. / of his genera), and after V in VO languages (i.e. / of his genera), as in Table .. In fact, he (: ) reports that V and AdpP show the strongest correlation of any pair considered in his work.

Table 10.9 Distribution of OV/VO order and adpositional phrase (AdpP)
[Rows: OV&AdpPV, OV&VAdpP, VO&AdpPV, VO&VAdpP; columns: Afr, Eura, SEAsia&Oc, A-NG, NAm, SAm, Total. The genus counts were lost in extraction; the dominant combinations, marked in the original, are OV&AdpPV and VO&VAdpP.]

These two separate observations lead to the inference that OV may co-occur with preverbal NPo, and VO with postverbal PrN. This, however, cannot be tested directly on the basis of Dryer's () data, which do not specify individual language quantities for the two adposition orders. Nonetheless, the ternary correlation between the position of V, the position of the AdpP, and the distribution of adpositions is at least confirmed by the data from Hawkins's () expanded sample and by the additional information provided by Dryer to Hawkins (: ), as in Table ..

Table 10.10 Verb position, adpositional phrase position, and adposition types
[Rows: [V [P NP]], [[NP P] V], [V [NP P]], [[P NP] V]; columns: Dryer in Hawkins () and Hawkins (), each giving counts with percentages, plus a Total. The figures were lost in extraction; the first two orders account for the overwhelming majority in both samples.]

Hawkins () explains this correlation by arguing that in OV languages the preverbal postpositional phrase (NPo) gives rise to the most optimal EIC ratio for VP, whereas in VO languages the postverbal prepositional phrase (PrN) does so. To understand this, reconsider in terms of EIC the four orderings of verb and adpositional phrase in Table ., slightly differently formulated in (): (a) involves a postverbal prepositional phrase, (b) a preverbal postpositional phrase, (c) a postverbal postpositional phrase, and (d) a preverbal prepositional phrase.

()
a. VP[V PrN[Pr NP]]
b. VP[NPo[NP Po] V]
c. VP[V NPo[NP Po]]
d. VP[PrN[Pr NP] V]

Suppose one word is assigned to V and P each, which is a reasonable assumption. For the sake of simplicity, one word is also assigned to the whole NP; this single word is immediately dominated by a nonterminal node N. The size of the NP does not have any direct bearing on the point to be made here. The IC-to-word ratios for (a) and (b) are then the most optimal, that is, / or %. In (a) and (b), the ICs are V and AdpP (i.e. PrN or NPo), and these two are recognized on the basis of two adjacent words dominated by V and Pr or Po—Pr or Po being the MNCC for AdpP. In (c), the MNCC of AdpP can be parsed only after the associated NP has been parsed, and in (d), NP also delays processing of V. Thus, the size of the VP CRD is increased, whereby processing efficiency is reduced. For both (c) and (d), the IC-to-word ratio is / or .%, well below the optimal ratio for the other two orderings. Note that any increase of the NP in weight (e.g. a relative clause), complexity, or length will further decrease the already non-optimal EIC ratio of (c) and (d). The observation that Dryer has made about the pair of verb and adpositional phrase can thus be explained by EIC Theory. 
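With one word each for V, P, and NP, the four orderings in () can be checked mechanically. The sketch below is an illustrative calculation under the chapter's assumptions (the adposition is the MNCC of AdpP, and the CRD runs from the MNCC of the first IC to the MNCC of the last); it is not Hawkins's own implementation.

```python
def ic_to_word_ratio(ics):
    """ics: ordered (words, mncc_pos) pairs for the ICs of a phrase."""
    crd = len(ics[0][0]) - ics[0][1]         # first IC, from its MNCC on
    crd += sum(len(w) for w, _ in ics[1:-1])  # intervening ICs in full
    crd += ics[-1][1] + 1                     # last IC, up to its MNCC
    return len(ics) / crd

V = (["V"], 0)              # the verb constructs VP
PrN = (["Pr", "NP"], 0)     # a preposition constructs AdpP at its left edge
NPo = (["NP", "Po"], 1)     # a postposition constructs AdpP at its right edge

orders = {"a. [V [Pr NP]]": [V, PrN],   # VO & prepositions
          "b. [[NP Po] V]": [NPo, V],   # OV & postpositions
          "c. [V [NP Po]]": [V, NPo],   # VO & postpositions
          "d. [[Pr NP] V]": [PrN, V]}   # OV & prepositions
for name, ics in orders.items():
    print(name, round(ic_to_word_ratio(ics), 3))
# (a) and (b) come out at 1.0; (c) and (d) at 0.667
```

Lengthening the NP leaves the ratios for (a) and (b) untouched but drives those for (c) and (d) further down, which is the point made in the text.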


10.5.2 Left–right asymmetry in word order

Dryer (: –) observes an intriguing typological asymmetry in the ordering of the head noun and the relative clause in the world's languages, as exhibited in Table . (see Chapter  on typological asymmetry): SVO, VSO, and VOS (or, collectively, VO) correlate almost perfectly with NRel (i.e. VO implies NRel), while SOV and OSV (i.e. Verb-final) do not have any strong preference for either RelN or NRel.

Table 10.11 Distribution of word order types and RC types
[Rows: V-final&RelN, V-final&NRel, SVO&RelN, SVO&NRel, V-initial&RelN, V-initial&NRel; columns: Afr, Eur, SEAsia&Oc, Aus-NG, NAm, SAm, Total. The genus counts were lost in extraction; SVO&NRel and V-initial&NRel are dominant throughout, while V-final languages split between RelN and NRel.]

The data exemplifying the dominant correlations are presented in ()–():

() Persian (Iranian; Indo-European: Iran) [SOV&NRel]
a. hame-ye mo'allem-â ye shâgerd-i-ro mo'arefi kard-and
   all-EZ teacher-PL one student-INDEF-DO introduce did-PL
   'All teachers introduced a student.'
b. zan-i-râ [ke John be u sibe zamini dâd]
   woman-the-DO [REL John to her potato gave]
   'the woman to whom John gave the potato'

() Japanese (isolate: Japan) [SOV&RelN]
a. Kyoko-ga neko-o oikaketa
   Kyoko-NOM cat-ACC chased
   'Kyoko chased the cat.'
b. [Kyoko-ga oikaketa] neko
   Kyoko-NOM chased cat
   'the cat that Kyoko chased'


()

Tetelcingo Nahuatl (Aztecan; Uto-Aztecan: Mexico) [SVO&NRel]
a. sen-te tlɔkatl (Ø-)kɪ-pɪya-ya sen-te puro
   one-NUM man he-it-have-IPFV one-NUM burro
   'A man had a donkey.'
b. inu ɔcintli [tlɪ k-omwika-k]
   that water REL it-bring-PFV
   'that water that he brought'

()

Fijian (Oceanic; Austronesian: Fiji) [V-initial&NRel]
a. e rai-ca a gone a qase
   3SG see-TR ART child ART old.person
   'The old person saw the child.' or 'The child saw the old person.'
b. a pua'a ['eirau 'auta]
   ART pig [EXCL.DU bring]
   'the pig which we (two) brought'

In order to account for this typological asymmetry, or left–right asymmetry as he calls it, Hawkins (: –, –) argues that the human parser may have the option of expediting 'immediate matrix disambiguation' (henceforth IMD), in the process of which EIC efficiency may be sacrificed in the interests of early decision-making on the main-clause/subordinate-clause distinction, i.e. disambiguation in preference to EIC efficiency. To see how this works, consider the following four logically possible permutations (S´ = relative clause in the present case).

()
a. VP[V NP[N S´]]
b. VP[NP[S´ N] V]
c. VP[V NP[S´ N]]
d. VP[NP[N S´] V]

In terms of EIC ratios, (a) and (b) are the most optimal, the former representing NRel order in VO languages, and the latter RelN order in OV languages. The other two permutations, in contrast, produce poor EIC ratios primarily because of the position of the restricting clause (or S´) in between the two other constituents, V and N. The permutation in (c) is indeed virtually unattested in VO or V-initial languages—the only exception in Dryer’s (, ) data being Chinese (and also Bai and Amis in Dryer e; also see §..). But the equally 


non-optimal permutation in (d) (i.e. NRel) is known to be commonly found in OV or V-final languages. Thus, the existence of NRel order in OV or V-final languages is problematic for EIC Theory, at least in the main. To deal with this left–right asymmetry, Hawkins (: ) appeals to IMD: V-final languages, unlike V-initial languages, have ‘regular opportunities for misanalysing main and subordinate clause arguments as arguments of the same verb’. For example, consider the following sentence with a relative clause from Japanese, a V-final or SOV language. () Japanese (isolate: Japan) shika-o] pat-PST zoo-ga NP[S[kirin-o taoshi-ta] elephant-SBJ giraffe-OBJ knock.down-PST deer-OBJ nade-ta ‘The elephant patted the deer that knocked down the giraffe.’ When () is processed online, the main clause argument zoo-ga is associated with the verb of the subordinate clause taoshi-ta so that it will initially be interpreted as ‘the elephant knocked down the giraffe’. When the rest of the sentence is also subsequently processed, the initial wrong interpretation has to be revised to the correct one, i.e. garden path effect or backtracking. Thus, in V-final languages there is a processing need for immediate disambiguation between the main and subordinate clause, that is, IMD (Hawkins : –). In some OV or V-final languages, this IMD option actually is taken up, whereby the EIC-optimal permutation in (b) can optionally be rearranged structurally to (d)—from RelN to NRel. To put it differently, immediate recognition as a subordinate clause of the restricting clause can be achieved by bringing the head noun forward to the onset of the relative clause: having processed the arguments of the main clause, the human parser expects to deal with, for instance, the verb of the main clause but when faced with another batch of arguments s/he will take them to be part of the subordinate clause. 
This is then claimed to be the reason why NRel order is also commonly attested in OV or V-final languages, despite the fact that the rearrangement gives rise to a reduction in EIC efficiency (but cf. Song : – for discussion of the non-IMD-induced shift from RelN to NRel in SOV languages). In contrast, the EIC-optimal permutation in (a), which is adopted predominantly by VO or V-initial languages, does not suffer from the issue of IMD, since the head noun is associated immediately with the right verb. At this juncture, it must be noted that IMD is merely an addendum


to Hawkins's EIC Theory, and the status of this addendum is not made clear. As will be shown in §., however, IMD has evolved into a fully-fledged principle, on an equal footing with the revised version of the PEIC (Hawkins , ).

10.6 Structural complexity and efficiency

In his  book, Hawkins makes an attempt to improve on his earlier EIC Theory in at least two important respects. First, Hawkins () departs from his earlier single-principle-based theory of word order by proposing as many as three principles. In Hawkins (), thus, various kinds of dependency (e.g. syntactic, semantic, lexical), not just the PEIC, are also recognized as equally important determinants of word/constituent ordering. Second, Hawkins's () revised theory is much more general and far-reaching both in empirical coverage and in its proposed principles. Thus, Hawkins () deals not only with word order patterns and correlations but also with other non-word-order phenomena such as relativization, filler-gap dependency (e.g. wh-question/movement), head vs dependent marking, and antecedent–anaphor co-indexation. To wit, Hawkins () has his sights set on grammar as a whole, not just on word order. Not surprisingly, his new principles are formulated in a manner general (or abstract) enough to handle not only word order but also other diverse grammatical phenomena.

These differences notwithstanding, Hawkins () does not depart from the conceptual position of his  work: basic word order is the conventionalization or grammaticalization of the ordering that maximizes efficiency and speed in processing. In Hawkins (), of course, this position no longer applies only to word order but also to many other areas of grammar. So much so that '[g]rammars are "frozen" or "fixed" performance preferences' (Hawkins : ). This idea is embodied in his Performance–Grammar Correspondence Hypothesis (Hawkins : ):

()

Performance–Grammar Correspondence Hypothesis
Grammars have conventionalized syntactic structures in proportion to their degree of preference in performance, as evidenced by patterns of selection in corpora and by ease of processing in psycholinguistic experiments.


To wit, the human parser processes linguistic expressions of varying degrees of complexity, be it structural, semantic, or otherwise, in an efficient and rapid manner, as depicted by various processing-based principles.

10.6.1 Processing principles and processing domains

In Hawkins (), there are a number of principles that capture how the human processor achieves efficiency in processing language. These efficiency principles are all performance-based in the sense that they are formulated to explain grammatical phenomena such as word order in terms of performance, in particular processing. While leaving open the question of further principles and sub-principles, Hawkins (: ch. ) proposes three main principles: Minimize Domains (MiD), Maximize On-line Processing (MaOP), and Minimize Forms (MiF).11 Note that the third principle, MiF, will not be taken up here, because it has little to do with word order patterns and their correlations.

11 For instance, Hawkins () proposes one additional principle, Argument Precedence, in his attempt to account for the ordering of VO/OV and X (i.e. AdpP):

(i) Argument Precedence (AP)
Arguments precede X

Thus, AP is said to interact with MiD, 'pushing [non-argument NPs] to the right of [argument NPs], even in VO languages'.

MiD is descended from Hawkins's () PEIC (Hawkins : –; see §..), although it is now defined in much more general or less domain-specific terms, as can be seen from ().

() Minimize Domains (MiD)
The human processor prefers to minimize the connected sequences of linguistic forms and their conventionally associated syntactic and semantic properties in which relations of combination and/or dependency are processed. The degree of this preference is proportional to the number of relations whose domains can be minimized in competing sequences or structures, and to the extent of the minimization difference in each domain.

Note that the domain of minimization is not just constituent-based, as in the PEIC (i.e. IC-to-non-IC ratios), but is described in inclusive terms (i.e. 'domains' where various relations of combination and/or dependency are processed). These relations include not only constituent-structure-based relations, as prominently featured in Hawkins's () EIC Theory, but also other syntactic or lexical co-occurrence properties (e.g. so-called strict subcategorization), and lexical, syntactic, or semantic dependencies. With MiD defined in such an open-ended manner, the sole domain used in calculating EIC ratios in Hawkins (), i.e. the Constituent Recognition Domain (CRD), is no longer sufficient. Thus, Hawkins () invokes one more type of domain, i.e. the Lexical Domain (LD). Note that there could potentially be more domains, as relations of combination and dependency do not include just syntactic and semantic relations. As other kinds of relation (e.g. pragmatic relations) are identified, more domains may need to be introduced (Hawkins : –). The new kind of domain is defined as:

()

Lexical Domain (LD)
The LD for assignment of a lexically listed property P to a lexical item L consists of the smallest possible string of terminal elements (plus their associated syntactic and semantic properties) on the basis of which the processor can assign P to L.

Lexically listed properties include information on the syntactic category of L (be it a noun, a verb, or a preposition), the lexical co-occurrence frame(s) of L (i.e. strict subcategorization, e.g. transitive verbs requiring object NPs), so-called selectional restrictions (e.g. the verb smoke requires a human subject and an object such as tobacco or marijuana), syntactic and semantic properties assigned by L to its complements (e.g. a case relation and a semantic role assigned by a transitive verb to a direct object NP), and ambiguous or polysemous meanings assignable to L (e.g. cut his finger/cut a path (through the field)/cut his whisky with water/cut twenty CDs). Hawkins (: ) also renames the CRD as the Phrasal Combination Domain (PCD), defining it as follows: ()

Phrasal Combination Domain (PCD)
The PCD for a mother node M and its I(mmediate) C(onstituent)s consists of the smallest string of terminal elements (plus all M-dominated non-terminals over the terminals) on the basis of which the processor can construct M and its ICs.


The principle of MiD is designed to explain the way PCDs and LDs are minimized in order to achieve processing efficiency. The way MiD works on a PCD is akin to the way the PEIC works on a CRD, as illustrated in §... For instance, given two possible structures, MiD prefers the one that has the smaller PCD. The explanation given for the PEIC over the CRD carries over to the PCD, and does not need to be repeated here. Similarly, given the principle of MiD, the human processor prefers the smallest possible LD. For instance, an LD in which two lexically associated expressions (e.g. depended and on in the examples to follow) are adjacent to each other is smaller than an LD in which they are not (e.g. The man depended on his only child in his old age vs ?? The man depended in his old age on his only child).

The other relevant performance-based principle is MaOP, which is defined as follows (Hawkins : ):

() Maximize On-line Processing (MaOP)
The human processor prefers to maximize the set of properties that are assignable to each item X as X is processed, thereby increasing O(n-line) P(roperties) to U(ltimate) P(roperties) ratios. The maximization difference between competing orders and structures will be a function of the number of properties that are unassigned or misassigned to X in a structure/sequence S, compared with the number in an alternative.

As the reader will recall from §.., Hawkins () invoked IMD in order to explain the asymmetry found in the distribution of RelN vs NRel in VO and OV languages. IMD is now elevated to MaOP, a general principle on a par with MiD (and MiF), not just an addendum to the PEIC localized for individual languages in Hawkins (). Moreover, Hawkins's () MaOP is rendered more general than his earlier IMD: while IMD concerns mainly the misassignment of properties (e.g. the garden-path effect), MaOP covers not only misassigned properties but also unassigned properties, that is, properties whose assignment is delayed for one reason or another. As in the case of MiD, unassigned or misassigned properties include words and phrases, their phrase-structure relations, and any syntactic or semantic relations of combination and dependency; misassigned properties also include properties that need to be introduced when misassigned structures, or relations of combination and dependency, are replaced for correct interpretation.
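The LD comparison for the depended on example can be made concrete with a simple word count. The function below is a rough illustration assumed here for expository purposes, not Hawkins's own metric: it measures an LD crudely as the word span from a lexical item to its associated dependent.

```python
def ld_size(sentence, item, associate):
    """Size of the Lexical Domain linking `item` to `associate`,
    measured crudely as the inclusive word span between them."""
    words = sentence.lower().split()
    i, j = words.index(item), words.index(associate)
    return abs(j - i) + 1

adjacent = "The man depended on his only child in his old age"
separated = "The man depended in his old age on his only child"
print(ld_size(adjacent, "depended", "on"))   # 2 (the minimal LD)
print(ld_size(separated, "depended", "on"))  # 6
```

On this count, the degraded order triples the domain in which the lexical dependency between depended and on must be processed, which is MiD's rationale for dispreferring it.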


10.6.2 Word order and word order correlations in Hawkins's extended theory

The two performance-based principles, MiD and MaOP, are used in Hawkins () to account for various word order patterns and their correlations. These word order patterns and correlations need not be explained in detail all over again, although they are now handled by means of the two principles (i.e. MiD and MaOP) instead of one (i.e. the PEIC). Suffice it to give an illustration of three examples: (i) NPo vs PrN in OV and VO languages; (ii) the asymmetry in RelN/NRel ordering; and (iii) the symmetry between OV and VO order. These selected examples will, in turn, be discussed by means of (the interaction between) MiD and MaOP.

NPo vs PrN in OV and VO languages

In §.., it was discussed that VO languages prefer to have prepositional phrases (PrN), whereas OV languages prefer postpositional phrases (NPo). Conversely, VO languages tend not to co-occur with NPo, and OV languages tend to avoid having PrN. The data in support of these correlations are provided in Table .. These correlations are now given a slightly different interpretation in Hawkins (), albeit more or less in the spirit of Hawkins’s () EIC Theory. In [V [Pr NP]] ordering, the PCD is minimized because of the adjacency of V and Pr, whereas in the alternative ordering of [V [NP Po]], the adjacency of V and Po is not created because of the intervening NP. Thus, MiD prefers [V [Pr NP]] to [V [NP Po]] in VO languages, thereby explaining the correlation between VO and PrN. Also, there are other relations of combination and dependency, semantic or otherwise, between V and Pr. In the widely attested [V [Pr NP]] ordering, the LD is also minimized because of the adjacency of V and Pr. To wit, the adjacency of V and Pr ensures the minimization of both PCDs and LDs. Note that there are no unassigned or misassigned properties in this ordering. In the infrequently attested [V [NP Po]] ordering, in contrast, the PCD and LD are not minimized because of the non-adjacency of V and Po. Moreover, the needs of MaOP are not well served either, because whatever relations of combination and dependency may exist between V and Po have to be delayed in processing until the appearance of Po, that is, after the intervening NP. The same reasoning works 

OUP CORRECTED PROOF – FINAL, 21/11/2017, SPi

10.6 Structural complexity and efficiency

in the case of the distribution of NPo vs PrN in OV languages. In the widely attested [[NP Po] V] ordering, the PCD and LD are both minimized in satisfaction of MiD and also MaOP, with no unassigned or misassigned relations delaying comprehension. In the infrequently attested [[Pr NP] V] ordering, the PCD and LD are not minimized as much as possible, thereby failing to comply with MiD (i.e. the intervening NP enlarging both the PCD and the LD). Again, the needs of MaOP are not met, because of the delay in the assignment of relations and properties caused by the non-adjacency of V and Pr.

The asymmetry in RelN/NRel ordering

The distribution of NRel vs RelN in VO and OV languages has been identified in Hawkins (: ch. ; also see Hawkins : –) as a typological asymmetry in the sense that while NRel and RelN are both attested in OV languages, NRel seems to be the only viable option in VO languages. The relevant asymmetry is presented in summary form:

() a. VO & NRel/*RelN
   b. OV & NRel/RelN

In terms of Hawkins's () extended theory, the co-occurrences of VO&NRel and OV&RelN (i.e. NRel or RelN being assumed to be O here) involve minimal PCDs and LDs, i.e. [V-[N-Rel]] or [[Rel-N]-V]. In both sequences, V and N abut on each other, facilitating MiD over PCDs and LDs. In the case of PCDs, the two ICs of VP, V and object NP, are recognized efficiently and early. In the case of LDs, relations of combination and dependency (e.g. the case and semantic role of the relativized head NP within the matrix clause) are also assigned without any delay, satisfying MaOP. The (near) non-occurrence of VO&RelN can thus be imputed to the non-minimization of PCDs and LDs, caused by none other than the non-adjacency of V and N (i.e. [V-[Rel-N]]). Thus, VO&RelN fails to satisfy not only MiD (i.e. non-adjacency of V and N, enlarging the PCD) but also MaOP, because the assignment of relevant relations of combination and dependency between V and N has to be delayed until after the intervening (complex) relative clause has been processed fully. The same reasoning should apply also to OV&NRel (i.e. [O[N-Rel]O-V]), with non-minimal PCDs and LDs as well as a delay in the assignment of relations, but interestingly enough, this sequence has turned out to be robustly attested in the world's languages (e.g. Dryer ; see


BASIC WORD ORDER

Table .). Within [O[Rel N]O-V] ordering, the initial part of the relative clause, e.g. the object NP of the relative clause, kirin-o, in () above, is likely to be misanalysed as part of the matrix clause (e.g. object NP of the matrix clause in ()). Put differently, properties of the relative clause, i.e. words, phrases, relations of combination and dependency, are misassigned to the matrix clause. Thus, in OV languages, while MiD favours RelN ordering (because of minimal PCDs and LDs), MaOP does not do so entirely. If the relative clause is shifted to the right of N (i.e. NRel ordering), MaOP will be obeyed, because the head noun of the relative clause will be immediately recognized as part of the matrix clause, obviating the very kind of misassignment evident in the OV&RelN combination. However, while NRel ordering satisfies the needs of MaOP this way, it does little for MiD, because the adjacency of N and V is now destroyed by the intervening relative clause. To wit, in OV, as opposed to VO, languages, MiD and MaOP conflict with each other, and this conflict is resolved by OV languages opting for either RelN or NRel ordering. The net outcome of this resolution is that either MiD or MaOP is satisfied, not both. If MiD has priority over MaOP, PCDs and LDs are minimized at the expense of early assignment of relations of combination and dependency. If, in contrast, MaOP is favoured over MiD, misassignment of relations of combination and dependency is avoided at the expense of minimal PCDs and LDs. The way MiD and MaOP are complied with (+) or not (–) is summarized for the four possible permutations of VO/OV and NRel/RelN, as in: ()

            MiD   MaOP
a. VO&NRel   +     +
b. VO&RelN   –     –
c. OV&RelN   +     –
d. OV&NRel   –     +
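The compliance pattern just tabulated can be written out as a small sanity check. This is a sketch only: the boolean encoding and the 'survives if it serves at least one principle' criterion are my simplifications, not Hawkins's formal apparatus.

```python
# MiD/MaOP compliance for the four permutations, as tabulated above.
compliance = {
    ("VO", "NRel"): {"MiD": True,  "MaOP": True},
    ("VO", "RelN"): {"MiD": False, "MaOP": False},
    ("OV", "RelN"): {"MiD": True,  "MaOP": False},
    ("OV", "NRel"): {"MiD": False, "MaOP": True},
}

def viable(verb_object_order):
    """Orders satisfying at least one principle, on the simplifying
    assumption that an order is attested if it serves MiD or MaOP."""
    return [rel for (vo, rel), p in compliance.items()
            if vo == verb_object_order and any(p.values())]

print(viable("VO"))  # ['NRel']: the asymmetry
print(viable("OV"))  # ['RelN', 'NRel']: both orders attested
```

The check reproduces the text's reasoning: VO&RelN satisfies neither principle and so drops out, while in OV languages each order satisfies exactly one principle, leaving both in play.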

To wit, MiD and MaOP can either reinforce or militate against each other, as in (a) or (c and d), respectively.

The symmetry between OV and VO order

The ordering of V and O is found to be symmetrical in the sense that both OV and VO ordering are more or less equally productive in the world’s languages. Hawkins (: ) summarizes a number of word order surveys in support of these two orderings as a syntactic symmetry: 


() Greenberg (b) Hawkins () Tomlin () Dryer () Nichols ()

OV % % % % %

VO % % % % %

[languages & families] [languages & families] [languages] [genera] [languages]

In fact, the productivity of OV and VO order is evident directly from the preponderance of SOV and SVO order in the world's languages. By one count (Tomlin ), these two word orders alone—i.e. SOV = .% and SVO = .%—account for almost % of the world's languages. Either O or V can be used to construct VP (Hawkins : –, : ); V is the MNCC of VP, whereas O is constructed by its MNCC, e.g. an accusative case marker, which, in turn, also constructs its higher dominating or 'grandmother' node, namely VP. Thus, in terms of MiD, both OV and VO ordering are equally optimal because of their minimal PCDs. In terms of MaOP, O depends on V for assignment of case and semantic role as well as for the construction of VP itself (i.e. V being the head of VP), which O ultimately attaches itself to. V, in turn, depends on O for property assignment, i.e. selection of the right syntactic and semantic co-occurrence frames (e.g. transitive vs intransitive, as in Mary swam vs Mary swam the race), and selection of the correct semantics of V among possible alternative options (e.g. run the race/the bath/the company/the story/guns and drugs). Thus, there is a symmetrical dependency between V and O. What this interdependence means for MaOP is that in VO, the property assignment of V is delayed until the appearance of O, whereas in OV, the property assignment of O is delayed until the appearance of V. Either way, therefore, there is a delay in property assignment, which, in turn, is claimed to explain the productivity of both OV and VO ordering in the world's languages.

Hawkins (: ; originally Hawkins ) puts forward the hypothesis that OV and VO languages may make different structural responses to this kind of delay (see Song : –). Where the property assignment of V is temporarily put on hold because of O's postverbal position (i.e. in VO languages), verb agreement (i.e. head marking) may be chosen to alleviate the effects of the delay, and where the property assignment of O is delayed (i.e. in OV languages), use of case marking (i.e. dependent marking) may be expected to mitigate the delay.
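The adjacency reasoning running through this section can be made concrete with a toy computation. This is a sketch only: the one-word constituents and category labels are simplifying assumptions of mine, and the function below is a stand-in for, not a rendering of, Hawkins's formal definition of the PCD.

```python
def domain_size(words, constructors):
    """Span (in words) from the first to the last category that
    constructs an immediate constituent of VP -- a toy stand-in
    for Hawkins's Phrasal Combination Domain."""
    positions = [words.index(c) for c in constructors]
    return max(positions) - min(positions) + 1

# VO with prepositions, [V [Pr NP]]: V and Pr are adjacent.
print(domain_size(["V", "Pr", "N", "N", "N"], ["V", "Pr"]))  # 2
# VO with postpositions, [V [NP Po]]: the NP intervenes.
print(domain_size(["V", "N", "N", "N", "Po"], ["V", "Po"]))  # 5
# OV vs VO with a case-marked one-word object: both domains are
# minimal, matching the claim that the two orders are equally
# optimal under MiD.
print(domain_size(["N-acc", "V"], ["N-acc", "V"]))  # 2
print(domain_size(["V", "N-acc"], ["V", "N-acc"]))  # 2
```

The first two calls illustrate why VO languages prefer PrN over NPo, and the last two why no comparable preference separates OV from VO.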


10.7 Areal word order typology

One of the perplexing things about the world's languages, especially in the early stage of post-Greenberg linguistic typology, is the existence of a small number of exceptions to what would otherwise be absolute language universals (see §.). The fact that these universals capture strong linguistic tendencies undoubtedly suggests that they represent something fundamental to human language. If so, why do we find any exceptions? One well-publicized example of this kind comes from Dryer's work (e.g. , e): SVO, VSO, and VOS (or collectively, VO order) correlate almost perfectly with NRel order (i.e. VO  NRel). As discussed in §.., however, the 'thorn in the side', as it were, of this language universal is a single disconfirming genus appearing in the middle section of Table . (i.e. SVO&RelN), under SEAsia&Oc. This exception is represented by Chinese (i.e. Mandarin, Cantonese, and Hakka); Dryer (e) subsequently adds Bai (a Tibeto-Burman VO language spoken in Yunnan, China) and Amis (an Austronesian language of Taiwan) as two additional exceptions to the near-universal VO&NRel correlation (also see Sposato  on SVO&RelN in Miao-Yao (aka Hmong-Mien) languages).12 In view of the overwhelming preponderance of NRel among VO languages, however, we cannot but acknowledge some kind of linguistic (or cognitive) motivation for NRel order in VO languages (e.g. Hawkins , , ). But if there is such a clear, distinct motivation, linguistic or cognitive, for VO&NRel, why do we find any exceptions at all (even if in an extremely small number)?

Basically, there are two ways of responding to this situation. One can brush exceptions aside (as statistical noise or as being due to chance) and move on to explain strong linguistic tendencies. Alternatively, one can make an effort to find out how and why such exceptions have come into being as well as to explain near-perfect language universals.

12 Mallinson and Blake (: ) identify Palauan as another VO language with RelN order. Guglielmo Cinque (personal communication) also points to three more Austronesian languages spoken in Taiwan that combine VO with RelN order, namely Bunun (Holmer n.d.), Tsou (Holmer n.d.), and Budai Rukai (Sung n.d.). See also Sposato () for further instances of VO&RelN from Miao-Yao (Hmong-Mien) languages.

Linguistic typology has taken the latter path, coming to the conclusion that exceptionless universals are very hard to come by, not because of the impossibility of perfect language universals (read: the




implausibility of linguistic or cognitive explanations) but because of other external factors at work in language (e.g. population movement, sociocultural contact, geographical isolation, population size, environment; e.g. Weinreich ; Masica ; Dryer ; Nichols ; Nettle and Romaine ; Bickel ; Trudgill ). Thus, linguistic typology has come to the view that exceptions may not be 'genuine' exceptions to language universals but may instead be the outcome of external factors impinging upon the latter. That is to say, but for language contact, for instance, such exceptions would not have arisen. This change in perspective on language universals, however, does not mean that linguistic typology has lowered its own expectations, as it were. Far from it. By determining how and why individual exceptions have come into existence, linguistic typology is actually in a better position to strengthen—by making full sense of such exceptions—the validity of proposed explanations as well as of language universals themselves. For instance, Dryer (: –; cf. Hashimoto ) attributes the presence of RelN in Chinese, a VO language, directly to (linguistic) influence, through prolonged contact, from its northern neighbours (i.e. OV languages such as Mongolian and Tungus). The other two exceptions, Bai and Amis, may, despite their VO order, also have ended up with RelN order because of their propinquity to Chinese (not to mention the sociocultural contact among the speakers of these languages). What this implies for the correlation between VO and NRel is that the correlation truly is a language universal of the highest possible order, which may only have been 'tampered' with—not vitiated—by language contact (i.e. so-called Lamarckian inheritance).
Without asking (and answering) how and why exceptions exist, it would be very easy to fall into the trap of treating RelN order in Chinese, Bai, and Amis as genuine exceptions, casting doubt on the validity of the VO&NRel correlation itself as a language universal.

Language contact—which, incidentally, has a long history and tradition of its own in linguistics (e.g. see Thomason and Kaufman ; Thomason ; Winford )—is one of the major external factors that have been shown to have a great impact upon linguistic structure and its areal distribution. Indeed, a good deal of focus has recently been placed, within linguistic typology (under the purview of what has come to be known as areal typology), on understanding how languages may come to share linguistic properties with each other through contact.


Probably the most tangible outcome of this 'new' direction in linguistic typology is The World Atlas of Language Structures (Haspelmath et al. , also available as Dryer and Haspelmath , with some updates, at www.wals.info). For instance, Dryer (e) demonstrates that the co-occurrence of OV and RelN is generally found in Asia, whereas in the rest of the world OV languages have NRel much more frequently than RelN. To wit, the co-occurrence of OV&RelN seems to be a distinct areal feature of Asia. (In a way, then, it does not come as a total surprise that Chinese, albeit a VO language, has ended up with RelN in the midst of OV&RelN languages in Asia.) Observations such as this can indeed be gleaned to a certain extent from Dryer's earlier work (e.g. Dryer , , ). But the WALS explains the global distribution of various linguistic properties, including word order, in much more detailed, low-level geographical terms. Therefore, discussion of the WALS (word order) research deserves some space here. In this section, some of the word order correlations discussed earlier in this chapter will be examined, in the light of Dryer's WALS research, with a view to determining how different correlations (as well as exceptions to those correlations) may be distributed geographically in the world. Needless to say, this kind of areal typology will lay the groundwork for the kind of external explanation that Dryer () puts forth for the three exceptions to the near-universal VO&NRel correlation.

10.7.1 Areal distribution of the six clausal word orders

Dryer (a) illustrates how the six clausal word orders are distributed geographically around the world. His data come from , languages (i.e.  for SOV,  for SVO,  for VSO,  for VOS,  for OVS,  for OSV, and  without any dominant word order).
First of all, SOV, the most frequent word order type, is widely distributed around the world, with Asia (except in South East Asia and the Middle East) as its best exemplifying region. This word order type is also dominant in New Guinea and Australia (in the case of the latter, among languages with a dominant word order). Overall, SVO languages are attested predominantly in three regions, namely (i) a large area from China and South East Asia further south into Indonesia and the western Pacific, (ii) much of sub-Saharan Africa, and (iii) Europe and around the Mediterranean. Outside these three regions, SVO languages do not seem to be commonly found. VSO and VOS languages are both 


scattered around the world although the latter have not been attested on mainland Africa and Eurasia. Seven of the eleven OVS languages are found in South America, with two in Australia, one in Africa, and one in the Pacific. Two of the four OSV languages are spoken in Brazil, with one OSV language each in Australia and West Papua (Indonesia). For unknown reasons, South America—Brazil in particular—seems to be the world's stronghold of O-initial languages.

10.7.2 Areal distribution of OV and VO

Dryer's (b) data for this areal distribution come from , languages (i.e.  for OV,  for VO, and  for both OV and VO without either being dominant). The areal distribution of OV languages is similar to that of SOV languages. In particular, in the Americas OV is the dominant order with the exception of Mesoamerica and the Pacific North West, both of which are found to be almost exclusively VO. VO languages are found—in addition to Mesoamerica and the Pacific North West—in Europe, North Africa, and the Middle East (among Semitic languages). VO order is also dominant in Africa as well as in China, South East Asia, the Philippines, and the Pacific. Languages without any dominant OV or VO order are spoken in Australia, North America, and eastern and western-central Africa.

10.7.3 Areal distribution of OV/VO and NPo/PrN

The number of languages surveyed in Dryer (d) for this areal distribution comes to , (i.e.  for OV&NPo,  for OV&PrN,  for VO&NPo,  for VO&PrN, and  for non-classified). First, OV&NPo, one of the two more common co-occurrence types, is commonly found in Asia (except in South East Asia and the Middle East), New Guinea, and North America (except in the Pacific North West and Mesoamerica). It is attested as the more frequent type also in Australia and parts of Africa. The other more frequent co-occurrence type, i.e. VO&PrN, is common in Europe, sub-Saharan Africa, South East Asia (extending into the Pacific), the Pacific North West, and Mesoamerica. In contrast, VO&NPo, one of the two less common co-occurrence types, is found in an area in West Africa, northern Europe, and South America (albeit exemplifying languages scattered throughout South America). Finally, the least common


co-occurrence type, i.e. OV&PrN, is found sporadically in Africa and also among Iranian languages spoken in Iran, Iraq, and Tajikistan. Mention must be made of Dryer's (d) intriguing observation to the effect that the two less common co-occurrence types are located in close proximity to or between the two more common co-occurrence types. For instance, the VO&NPo languages in West Africa are sandwiched between OV&NPo languages to the west and north-west, and VO&PrN languages to the east. Similarly, the OV&PrN languages in and near Iran are located between OV&NPo languages to the north and east, and VO&PrN languages to the south and west. It is as if the languages of the less common co-occurrence types had taken on a property each from those of the two more common co-occurrence types surrounding them. Observations such as this, made in the context of areal typology, make a valuable contribution to the understanding of exceptions to what would otherwise be perfect or at least near-perfect language universals.

10.7.4 Areal distribution of OV/VO and RelN/NRel

Dryer's (e) data for this areal distribution come from  languages (i.e.  for OV&RelN,  for OV&NRel,  for VO&RelN,  for VO&NRel, and  for non-classified). First, OV&RelN languages are attested very commonly in Asia (except in an area from the Middle East, through Afghanistan, to eastern China and South East Asia). Outside Asia, OV&RelN languages are not found to be common; so much so that, as pointed out earlier, OV&RelN is regarded as an areal feature of Asia. OV&NRel languages are commonly found in parts of North America, north-western South America, parts of Africa (where OV languages are common), and Australia. In fact, this distributional observation leads Dryer (e) to proclaim that OV languages in Asia generally have RelN order, whereas OV languages elsewhere in the world are much more likely to have NRel order.
Because most VO languages have NRel, the areal distribution of VO&NRel languages closely resembles that of VO languages (§..). As already noted on more than one occasion, VO&RelN languages are found only in mainland China (i.e. Chinese and Bai) and Taiwan (i.e. Amis), and these three—or five if Mandarin, Cantonese, and Hakka are counted separately—languages are influenced directly or indirectly by OV&RelN languages to the north (see also n. ). 


10.8 Concluding remarks

It may not be inaccurate to say that word order is where linguistic typology as we know it today started. Indeed, Greenberg's (b) seminal work on word order 'opened up a whole field of [typological] research' (Hawkins : ) by revamping and revitalizing linguistic typology, which had until then been ignored more or less completely in linguistics (see §.). Greenberg's pioneering work pointed not only to word order patterns (especially in terms of frequency) but also to the existence of correlations between different word order properties. This line of inquiry has been pursued by linguistic typologists to a greater extent than by linguists of other theoretical orientations (see Song ). Dryer's large-scale research, in particular, has produced a considerable amount of statistically tested data on word order, and in so doing he has not only strengthened the empirical validity of many universal statements on word order but also made it possible to re-evaluate or reject other universal statements (e.g. SOV vs SVO, AN vs NA in OV vs VO languages). On the theoretical front, Hawkins (, , ) has developed a most powerful performance-based theory of word order, arguing that constituents occur in the orders that they do so that their internal syntactic structures can be recognized in language performance as rapidly and efficiently as possible. By the same token, word order property X co-occurs with word order property Y, not Z, because of the performance motivation for maximizing processing efficiency across these word order properties. This is indeed the most exciting avenue for making sense of word order patterns and correlations, and will continue to be so. Moreover, in the (mid- or late) s there was the realization among linguistic typologists that in order to discover language universals (i.e.
the nature of human language), what happens to be there in languages as a consequence of external or nonlinguistic factors must first be teased out carefully from what is motivated by linguistic factors. There are always bound to be external factors at work in language (e.g. sociocultural contact), inevitably impinging on the shape or form of language universals. Indeed, when exceptions are addressed on their own terms, as illustrated in §. with respect to word order patterns and correlations, the conceptual as well as empirical strength of proposed language universals is by no means vitiated but rather increased to a greater extent than otherwise. 


Study questions

1. Hawkins (: –) claims that some OV languages may choose to shift from RelN to NRel in order to prevent part of the relative clause from being misinterpreted as belonging to the main clause while other OV languages may choose to put up with this misinterpretation. This, in turn, is said to explain why both RelN and NRel are attested more or less equally commonly in OV languages. Consider the Japanese sentence in (i), as opposed to the example in (), and determine what kind of misinterpretation the shift from RelN to NRel would cause for (i). Note that in (i) RelN functions as the subject NP—as opposed to the object NP in ()—of the main clause. Also discuss what implications your answer may have for Hawkins's claims.

(i)

Japanese (isolate: Japan)
NP[S[kirin-o taoshi-ta] zoo-ga] shika-o nade-ta
giraffe-OBJ knock.down-PST elephant-SBJ deer-OBJ pat-PST
'The elephant that knocked down the giraffe patted the deer.'

2. Dryer (: –) observes that, while both initial and final complementizers (Comp) are found in OV languages, final Comp is unattested in VO languages (but cf. Song : ). In other words, (i), (ii), and (iv) are attested, but (iii) is not. (Assume the combination of Comp (e.g. that) and the complement clause S (e.g. Amy is going to leave town) has an object relation within VP (e.g. We know that Amy is going to leave town). (i) VP[V O[Comp S]O]VP (ii) VP[O[S Comp]O V]VP (iii) *VP[V O[S Comp]O]VP (iv) VP[O[Comp S]O V]VP Do you think that BDT is able to explain this typological asymmetry? If so, how? If not, why not? Will Hawkins’s EIC/extended theory be able to handle the asymmetry? 3. Based on the data in Table . (Dryer : ), determine what correlates with what, and try to account for the correlation(s) that you have determined by using Hawkins’s EIC and/or extended theory. For the sake of simplicity, assume one word each for V, G, and N, and also O = GN or NG. Table 10.12 Distribution of GN/NG in OV and VO languages Afr

Eura

OV&GN

⃞

⃞

OV&NG



VO&GN VO&NG

SEAsia&Oc

A-NG

NAm

SAm

Total

⃞

⃞

⃞

⃞





















⃞



⃞



⃞

⃞

⃞



⃞








4. Do you think that BDT is able to account for the correlations that you have determined between OV/VO and GN/NG in Question ? 5. One of the advantages of Hawkins’s EIC/extended theory is its ability to account for language-internal variation involving competing structures. For instance, structure X may have a better EIC ratio than its alternative structure Y, on the basis of which the prediction can be made that X will occur more frequently than Y. Consider the following two pairs of alternative expressions in English, calculate the EIC ratio for VP in each of them, determine which alternative expressions will occur more frequently, and search Google to find out how many ‘results’ each has. (Enclose the expressions within double quotation marks (“. . .”) when searching Google.) Do the Google results support your frequency prediction? Change turn and close to turned and closed, respectively, and do a similar Google search. (i)

a. turn on the light
b. turn the light on

(ii)

a. close down the company
b. close the company down
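For orientation, the IC-to-word ratio calculation that Question 5 asks for can be sketched as follows. This is a simplified rendering of Hawkins's EIC ratio, not the full definition, and the constructor positions in the comments are my own analysis of the two particle-verb structures.

```python
def eic_ratio(ic_positions):
    """IC-to-word ratio of a constituent recognition domain:
    number of ICs divided by the length of the span up to the
    word constructing the last IC (positions are 1-based)."""
    return len(ic_positions) / max(ic_positions)

# VP[turn Prt[on] NP[the light]]: the three ICs are constructed
# at 'turn' (1), 'on' (2), and 'the' (3) -> 3 ICs in 3 words.
print(eic_ratio([1, 2, 3]))  # 1.0
# VP[turn NP[the light] Prt[on]]: ICs at 'turn' (1), 'the' (2),
# and 'on' (4) -> 3 ICs in 4 words.
print(eic_ratio([1, 2, 4]))  # 0.75
```

On this sketch, the verb–particle–object order has the better ratio, so it would be predicted to occur more frequently; whether the Google counts bear this out is what the question asks you to check.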

Further reading

Dryer, M. S. (). 'The Greenbergian Word Order Correlations', Language : –.
Greenberg, J. H. (). 'Some Universals of Grammar with Particular Reference to the Order of Meaningful Elements', in J. H. Greenberg (ed.), Universals of Language. Cambridge, MA: MIT Press, –.
Hawkins, J. A. (). A Performance Theory of Order and Constituency. Cambridge: Cambridge University Press.
Hawkins, J. A. (). Efficiency and Complexity in Grammars. Oxford: Oxford University Press.
Hawkins, J. A. (). Cross-linguistic Variation and Efficiency. Oxford: Oxford University Press.
Song, J. J. (). Word Order. Cambridge: Cambridge University Press.



OUP CORRECTED PROOF – FINAL, 27/11/2017, SPi

11 Case alignment

11.1 Introduction
11.2 S-alignment types
11.3 Variations on S-alignment
11.4 Distribution of the S-alignment types
11.5 Explaining various case alignment types
11.6 The Nominal Hierarchy and the split-ergative system
11.7 Case marking as an interaction between attention flow and viewpoint
11.8 P-alignment types
11.9 Distribution of the P-alignment types
11.10 Variations on P-alignment
11.11 S-alignment and P-alignment in combination
11.12 Case alignment and word order
11.13 Concluding remarks
11.1 Introduction

As can be seen from Chapter , one of the most studied word order patterns is found at the clause level, namely the ordering of S(ubject), O(bject), and V(erb) (i.e. SOV, SVO, VSO, etc.). As mentioned in that chapter, one of the major functions of word order at the clause level is to indicate 'who is doing X to whom'. This can easily be demonstrated by comparing the two English sentences in () and ().

() The girl kicked the boy.
() The boy kicked the girl.


The roles of the arguments the girl and the boy in () are different from those of the same arguments in () despite the fact that these two sentences contain exactly the same (number of) words and constituents. By roles is meant the semantic relationship that holds between the arguments and the verb, and also between the arguments themselves. In () the girl is the kicker, and the boy is the kickee. In () the roles of these arguments are reversed, with the boy being the kicker, and the girl being the kickee. This difference in the argument roles in () and () is, of course, signalled directly by the difference in the relative positioning of the arguments. The preverbal argument is interpreted to be the one who carried out the act of kicking, whereas the postverbal argument is understood to be the one who suffered the kicker’s action. Neither the arguments nor other elements in the sentence—the verb in this case— bear any coding whatsoever that may indicate the semantic relationship in question. In languages such as English the clausal word order is relatively fixed and exploited to a great extent for the purpose of indicating ‘who is doing X to whom’. Word order is not the only way of expressing ‘who is doing X to whom’. In many languages, there are other grammatical or formal mechanisms in use for coding such roles as reflected in () and (). These mechanisms may involve morphological forms (e.g. affixes) or function words (e.g. adpositions) that express the argument roles within the clause. This type of role coding is referred to broadly as case marking in the literature. For instance, in Yalarnnga, case marking appears directly, in the form of suffixes, on the relevant argument nominals, as in (). ()

Yalarnnga (Northern Pama-Nyungan; Pama-Nyungan: Australia)
kupi-ŋku milŋa taca-mu
fish-ERG fly bite-PST
'A fish bit a fly.'

The agent role of the nominal kupi is coded with the suffix -ŋku, while the patient role of milŋa is indicated by the absence of coding (i.e. zero coding). In the Japanese sentence in (), case marking, in the form of postpositions, appears immediately after the relevant argument nominals. ()

Japanese (isolate: Japan)
Hanako ga inu o taoshi-ta
Hanako NOM dog ACC knocked.down-PST
'Hanako knocked down a dog.'
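How such postpositions signal 'who is doing X to whom' independently of linear position can be sketched as a toy lookup. This is illustrative code of mine, not anything from the text: the marker-to-role table simply follows the glosses of the Japanese example above (ga marking the agent, o the patient), and the scrambled variant relies on the fact that Japanese allows its case-marked arguments to be reordered.

```python
# Hypothetical marker-to-role table for the two Japanese postpositions.
ROLE = {"ga": "agent", "o": "patient"}

def roles(words):
    """Pair each nominal with the role signalled by the marker
    that immediately follows it."""
    out = {}
    for i, w in enumerate(words):
        if w in ROLE:
            out[words[i - 1]] = ROLE[w]
    return out

print(roles("Hanako ga inu o taoshi-ta".split()))
# {'Hanako': 'agent', 'inu': 'patient'}
print(roles("inu o Hanako ga taoshi-ta".split()))
# scrambled order, same roles: {'inu': 'patient', 'Hanako': 'agent'}
```

The point of the contrast with English in () and () is captured here: the role assignment survives reordering because it is read off the case markers, not off the positions of the arguments.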


The postpositions ga and o bear information about the relationship between the verb and the two arguments: ga coding the agent role and o coding the patient role. There is further variation on case marking. In many languages, the verb may express the roles of the arguments in the clause.1 In Tzutujil, for instance, the verb bears case marking, while the argument nominals lack case marking.

() Tzutujil (Mayan: Guatemala)
x-Ø-kee-tij tzyaq ch'ooyaaʔ
ASP-SG-PL-ate clothes rats
'Rats ate the clothes.'

The prefixes on the verb, i.e. Ø- and kee-, in () represent the role relations between the verb and the arguments by registering the person and number properties of those arguments: Ø- 'SG' for tzyaq 'clothes' and kee- 'PL' for ch'ooyaaʔ 'rats'. When () and () are compared with each other in terms of agent/patient coding, there is little or no qualitative difference between them. The agent and patient nominals are distinctly case marked according to their roles, that is, the agent nominal coded in one way and the patient nominal coded in another way. When, however, these transitive sentences are contrasted with sentences that have only one argument, that is, intransitive sentences, an interesting difference emerges between Yalarnnga and Japanese. Compare such intransitive sentences from these languages:

() Yalarnnga (Northern Pama-Nyungan; Pama-Nyungan: Australia)
tjala manguru nguna-ma kulapurru-ya
this/the dog lie-PRES blanket-LOC
'The dog is lying on the blanket.'

() Japanese (isolate: Japan)
tomodati ga kaet-ta
friend NOM return-PST
'The friend returned.'

The sole argument in (), i.e. tjala manguru 'the dog', bears zero coding, just as the patient argument in ()—note that kulapurru-ya

1 Malchukov et al. (: ) refer to case marking on nominals as flagging and case marking on verbs as indexing. In this chapter, we will subsume flagging and indexing under case marking in order to highlight their common function of coding argument roles.



OUP CORRECTED PROOF – FINAL, 27/11/2017, SPi

11.1 Introduction

‘on the blanket’ functions as an (optional) adjunct, not as an argument; () will be grammatical without kulapurru-ya. In other words, in Yalarnnga the sole argument of an intransitive clause and the patient argument are coded identically (i.e. zero coding), but the agent argument of a transitive clause is coded differently (i.e. -ŋku). In contrast, in Japanese the sole argument of an intransitive clause (tomodati) is coded exactly the same way as the agent argument (Hanako) in () (i.e. the postposition ga), but the patient argument is coded differently, with the postposition o. In other words, Yalarnnga and Japanese differ from each other in the way they code the sole argument of an intransitive clause vis-à-vis the agent and patient arguments.

These and other different ways of organizing the coding of argument roles by grammatical means are referred to technically as clause alignment, or alignment for short (Plank : ). Alignment can be found in the domain of case marking as well as in other grammatical domains such as passivization, relativization, and nominalization (e.g. Malchukov et al. : –). The focus of this chapter is case alignment, that is, alignment in the domain of case marking.

In linguistic typology, it has become a convention to discuss case alignment by making use of the notional labels S, A, and P (and also T and R, for a theme argument and a recipient argument, respectively, in the context of ditransitive clauses; see §. for discussion thereof)—note that S here should not be confused with S(ubject). A prototypical transitive event involves two participants, namely an agent and a patient, and a predicate describing an action, e.g. killing, kicking, beating (see Chapter  for prototype). Cross-linguistically speaking, ‘this [state of affairs] remains constant irrespective of the morphological or syntactic behaviour of the sentence in any individual language’ (Comrie : ). For instance, in prototypically transitive sentences (e.g. The man killed the burglar) the agent and patient arguments are both coded consistently. We are not likely to find a language in which the agent or patient argument is coded differently depending on different action verbs (e.g. kill vs kick). Thus, there is no variation in the coding of the agent and patient arguments in a prototypical transitive sentence. We label the agent as A and the patient as P; in terms of prototypical transitive sentences, A and P are mnemonic for the agent and the patient, respectively. In contrast, there is a great deal of cross-linguistic variation in the way the arguments are coded in atypical transitive sentences, as exemplified by Japanese, as opposed to Korean.


()  Korean (isolate: Korea)
    ai-ka      yenge-lul    ihayha-nta
    child-NOM  English-ACC  understand-PRES
    ‘The child understands English.’

()  Japanese (isolate: Japan)
    kodomo  ni   eigo     ga   wakar-u
    child   DAT  English  NOM  understand-PRES
    ‘The child understands English.’

In Korean, the understander is coded in the nominative case (which is also used for the agent argument), while the understandee is coded in the accusative case (which is also used for the patient argument). In Japanese, in contrast, the understander is coded in the dative case (which is also used for the recipient argument), while the understandee is coded in the nominative case (which is also used for the agent argument) (see Haspelmath b on the degree of ‘transitivity prominence’, that is, the extent to which languages make use of transitive encoding; also Tsunoda , ).²

Following Haspelmath (), therefore, we will confine our discussion to case alignment in prototypical transitive sentences, where A and P can be consistently applied to the agent and the patient, respectively—that is, where the agent and the patient receive uniform case marking across languages (see Næss : ). It must be noted that languages may choose to extend A and P coding to atypical or less transitive sentences, that is, two-argument sentences involving non-action predicates,

² Haspelmath (b) ranks thirty-five languages in terms of the degree of transitivity prominence for eighty verb meanings, ranging from KILL to BE A HUNTER. Languages exhibiting high percentages of transitively encoded verbs include Chintang (Himalayish; Sino-Tibetan: Nepal) and Emai (Edoid; Niger-Congo: Nigeria), while Ainu (isolate: Japan), English (Germanic; Indo-European: UK), and Bezhta (Avar-Andic-Tsezic; Nakh-Daghestanian: Russia) appear in the lower half of the ranking table. Languages occupying the middle of the ranking table include Italian (Romance; Indo-European: Italy), Mandinka (Western Mande; Mande: Mali, Senegal, and Guinea), and Evenki (Tungusic; Altaic: Russia). In this context, it bears mentioning that Tsunoda (, ) puts forth the Transitivity Hierarchy—or what he calls the Hierarchy of Two-Place Predicates—where transitive coding patterns extend from left to right, i.e. Direct Effect on Patient > Perception > Pursuit > Knowledge > Feeling > Relation > Ability, with languages having different cutoff points on the hierarchy; contrary to predictions of the Transitivity Hierarchy, however, Wichmann’s () statistical data suggest that different valency-manipulating operations may display different, not similar or uniform, preferences with respect to the eighty verb meanings.




e.g. ‘see’, ‘hear’, ‘like’, ‘understand’ (Haspelmath : –), as illustrated by (). In such cases of extension, we may continue to use A and P, regardless of the actual argument roles in non-transitive or less typically transitive clauses (Comrie : ). In this respect, A and P are basically syntactic notions, although their prototypes have a semantic basis.

The other notional label S is defined as the case marking given to the sole argument of a one-argument predicate (i.e. intransitive sentence). Haspelmath (: –) suggests that, just as A and P are used in the context of prototypical transitive sentences, not the entire class of transitive sentences, the use of S be restricted to a subclass of intransitive sentences, where there is ‘more uniformity across languages’ (i.e. ‘the subclass of uncontrolled change of state verbs like “die” . . . ’). Unfortunately, what this entails is that one particular type of case alignment, namely active–stative (see §.. for discussion of this type), will no longer be recognized as a separate alignment type, on a par with other alignment types. In this chapter, we will not adopt this suggestion, not least because of the existence of active–stative alignment, in which the case marking of the sole argument of an intransitive sentence depends crucially on whether one and the same event is, or is conceived of as being, under control of the sole argument (i.e. controlled vs uncontrolled). If the use of S is restricted to the subclass of uncontrolled change of state verbs, split-S coding in active–stative alignment—S coded like A or P—may not be captured in the overall typology of case alignments.

With the three notional labels S, A, and P in place, Yalarnnga can now be characterized as aligning S with P (i.e. zero coding) to the exclusion of A (i.e. -ŋku), and Japanese as aligning S with A (i.e. ga) to the exclusion of P (i.e. o).
Bickel and Nichols (: ) refer to these and other alignments of S, A, and P collectively as ‘S-alignment’, as the focus of alignment is placed on the way S aligns with either A or P. Bickel and Nichols (: ) also speak of P-alignment—actually, O-alignment, as they use O in lieu of P—when discussing the way P aligns with either R or T. In the remainder of this chapter, we will survey different types of S-alignment and P-alignment. We will also discuss different theoretical approaches to the nature of case marking, especially in the case of S-alignment. To facilitate that theoretical discussion, we will also examine the frequency distribution of different S-alignment and P-alignment types. 


11.2 S-alignment types

Given the three notional labels S, A, and P, we can come up with five logically possible alignment types, as enumerated in schematic form in Figure 11.1. We will describe these alignment types in turn.

[Figure 11.1 groups the three roles as follows: (a) Nominative–Accusative Type: A and S together, P apart; (b) Ergative–Absolutive Type: S and P together, A apart; (c) Tripartite Type: A, S, and P all apart; (d) Double Oblique Type: A and P together, S apart; (e) Neutral: A, S, and P all together.]

Figure 11.1 Five logically possible alignments of S, A, and P
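As an illustrative aside (not part of the original text): the five types in Figure 11.1 exhaust the logical possibilities because they are exactly the ways of partitioning the three labels S, A, and P into groups that share a coding. A few lines of Python can enumerate these set partitions and confirm that there are five; the function name `partitions` is ours, a minimal sketch rather than any established library routine.

```python
def partitions(items):
    """Yield every way of grouping `items` into non-empty sets
    (the set partitions; for three items there are exactly five)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        # add `first` to one of the existing groups ...
        for i, group in enumerate(smaller):
            yield smaller[:i] + [group + [first]] + smaller[i + 1:]
        # ... or give `first` a group of its own
        yield smaller + [[first]]

for grouping in partitions(["A", "S", "P"]):
    print(grouping)
```

Running this prints the five groupings of Figure 11.1, from the neutral type (one group containing A, S, and P) to the tripartite type (three singleton groups).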

11.2.1 Nominative–accusative type

The nominative–accusative type is illustrated by the Japanese examples in () and (). The case marker used for A and S is known as nominative, and that used for P as accusative. Not surprisingly, the alignment type that groups A and S together to the exclusion of P (i.e. S = A ≠ P) is known as the nominative–accusative type. This alignment is observed also in case marking appearing directly on the verb (i.e. verbal agreement), as illustrated by the following Swahili examples.

()  Swahili (Bantoid; Niger-Congo: Tanzania)
    a. Ahmed  a-li-m-piga     Badru
       Ahmed  he-PST-him-hit  Badru
       ‘Ahmed hit Badru.’


    b. Fatuma  a-li-anguka
       Fatuma  she-PST-fall
       ‘Fatuma fell.’

In (a) the argument in A role is represented by the verbal prefix a-, whereas the argument in P role is signalled by the verbal prefix m-. The sole argument of an intransitive sentence, that is, S, is also indicated on the verb by the prefix a-, as in (b). In other words, A and S are coded alike, whereas P is coded differently from A and S. Thus, case marking in Swahili is of the nominative–accusative type.

In Latin, it is not just argument nominals but also verbs that bear case marking based on nominative–accusative alignment.

()  Latin (Italic; Indo-European)
    a. puer  labora-t
       boy   work-SG
       ‘The boy is working.’
    b. puer  magistr-um   percuti-t
       boy   teacher-ACC  strike-SG
       ‘The boy strikes the teacher.’
    c. magister  puer-um  percuti-t
       teacher   boy-ACC  strike-SG
       ‘The teacher strikes the boy.’
    d. magistrī  puer-um  percuti-unt
       teachers  boy-ACC  strike-PL
       ‘The teachers strike the boy.’

The argument in S role is represented on the verb by the suffix -t in (a). There are, in fact, no other arguments present in the sentence with which the verb could possibly agree. In (b) and (c), there are two arguments, both of which are third person singular, but it is A, not P, that the verb agrees with. This is evidenced by (d), wherein the verb bears a third person plural suffix in agreement with the argument in A role—the argument in P role is singular. Thus, in Latin verbal agreement, S aligns with A to the exclusion of P. Note that the same nominative–accusative alignment is at work when case marking appears on argument nominals, since the argument in A or S role is zero-coded, whereas the argument in P role is coded with the accusative suffix -um.

One important thing worth mentioning about nominative–accusative alignment is that, in so far as case marking on argument nominals is


concerned, the distribution of overt and zero case marking seems to be far from random: either (i) S, A, and P are all overtly coded, or (ii) A and S (i.e. nominative) are zero-coded (i.e. lack overt case marking), while P (i.e. accusative) is overtly coded (i.e. bears non-zero case marking). What is infrequently attested is the combination of overt nominative marking and zero accusative marking. This is said to be a very strong tendency, although there are a few exceptions. For instance, Comrie () reports that Middle Atlas Berber (Berber; Afro-Asiatic: Morocco), Harar Oromo (Cushitic; Afro-Asiatic: Ethiopia), Maricopa (Yuman; Hokan: US), and Murle (Surmic; Nilo-Saharan: Sudan) have overt nominative marking but zero accusative marking (for a study of such ‘marked nominative’ languages, see Handschuh ). Comrie’s -language sample contains two more ‘marked nominative’ languages, namely Igbo (Igboid; Niger-Congo: Nigeria) and Aymara (Aymaran: Chile, Bolivia, and Peru)—which may not be genuine ‘marked nominative’ languages—but it contains  nominative–accusative languages either with both overt nominative and accusative marking or with zero nominative and overt accusative marking. Note that Comrie’s sample also contains  languages that make no case marking distinctions among S, A, and P. In other words, nominative–accusative alignment alone accounts for exactly half of Comrie’s case marking languages.

11.2.2 Ergative–absolutive type

Yalarnnga, exemplified in () and (), exhibits the alignment type in (b) in Figure 11.1: ergative–absolutive. In this alignment, the case marker coding both S and P identically is referred to as absolutive, while the case marker coding A differently is called ergative (i.e. S = P ≠ A). Not surprisingly, this alignment is known as ergative–absolutive. Further Yalarnnga examples displaying this alignment are:

()  Yalarnnga (Northern Pama-Nyungan; Pama-Nyungan: Australia)
    a. ŋia  waka-mu
       I    fall-PST
       ‘I fell.’
    b. ŋat̪u  kupi  wal̪a-mu
       I     fish  kill-PST
       ‘I killed a fish.’


    c. kupi-ŋku  ŋia  taca-mu
       fish-ERG  me   bite-PST
       ‘A fish bit me.’

The first person singular pronoun in S or P role ((a) or (c), respectively) is ŋia, while it has to take a different form, i.e. ŋat̪u, when in A role (b). In other words, S aligns with P to the exclusion of A.

As is the case with the nominative–accusative system, the distribution of overt and zero case marking on argument nominals in the ergative–absolutive system is thought to be far from random, to the effect that it is almost always the case that the ergative is coded with an overt element, while the absolutive is zero-coded. This is a very strong tendency indeed. Two exceptions—known as ‘marked absolutive’ languages—are Nias (North-West Sumatra-Barrier Islands; Austronesian: Indonesia) and Tlapanec (Subtiaba-Tlapanec; Oto-Manguean: Mexico). In these languages, it is the ergative that is zero-coded, while the absolutive is overtly coded (for a study of ‘marked absolutive’ languages, see Handschuh ).

Ergative–absolutive alignment is also attested when case marking appears directly on the verb (i.e. verbal agreement). For instance, in Tzotzil the A argument is coded with a set of verbal affixes, or what Mayanists call Set A, whereas the S or P argument is coded with a different set of verbal affixes, or Set B. Note that in Tzotzil argument nominals themselves lack case marking.

()  Tzotzil (Mayan: Mexico)
    a. l-i-tal-otik
       COMPL-ABS-come-PL.INC
       ‘We (inclusive) came.’
    b. l-i-s-pet-otik
       COMPL-ABS-ERG-carry-PL.INC
       ‘He carried us (inclusive).’
    c. ʔi-j-pet-tik            lok’el  ti   vinik-e
       COMPL-ERG-carry-PL.INC  away    the  man-CLT
       ‘We (inclusive) carried away the man.’
    d. ʔi-tal
       COMPL-come
       ‘He/she/it/they came.’


When in A role, the first person plural inclusive is coded with a Set A affix, -j- . . . -tik (c). But, when in S or P role, it bears a Set B affix, -i- . . . -otik (a and b). As for the third person singular argument, it is zero-coded for S or P role (c and d) but, when in A role, it is represented on the verb by a Set A affix, -s- (b).

Though very uncommon, languages that organize case marking for both verbs and arguments on the ergative–absolutive basis do exist. A good example comes from Avar.

()  Avar (Avar-Andic-Tsezic; Nakh-Daghestanian: Azerbaijan and Russia)
    a. w-as         w-ekér-ula
       M-child.NOM  M-run-PRES
       ‘The boy runs.’
    b. inssu-cca      j-as         j-écc-ula
       (M)father-ERG  F-child.NOM  F-praise-PRES
       ‘Father praises the girl.’

In (a) the argument in S role is coded with the pronominal prefix w- (masculine) on the verb, whereas in (b) the argument in P role is represented by the pronominal prefix j- (feminine) on the verb. But note that the argument in A role in (b) is not represented at all on the verb. Thus, the case marking in Avar verbs operates on the ergative–absolutive basis. As for case marking on arguments, the argument in A role in (b), inssu- ‘father’, is coded with -cca, with the argument in S or P role zero-coded. Case marking in Avar arguments too is based on ergative–absolutive alignment. 11.2.3 Tripartite type The tripartite system of (c) in Figure . is understood to be very rare. The Australian Aboriginal language Wangkumara is often cited to be an example of this uncommon alignment. ()

Wangkumara (Central Pama-Nyungan; Pama-Nyungan: Australia) a. kan̩a-ulu kalka-ŋa t ̪it ̪i-n̪an̪a man-ERG hit-PST dog-F:ACC ‘The man hit the bitch.’ b. kan̩a-ia palu-ŋa man-NOM die-PST ‘The man died.’ 


In (), the arguments in S, A, and P roles are coded differently from one another, i.e. a three-way distinction. Dixon (: ) mentions a group of Australian languages spoken in south-east Queensland that make such a three-way distinction for S, A, and P. In fact, Wangkumara comes from this group. Comrie () also lists Hindi (Indic; Indo-European: India), Marathi (Indic; Indo-European: India), Nez Perce (Sahaptian; Penutian: US), and Semelai (Aslian; Austro-Asiatic: Malaysia) as languages with tripartite alignment. The following two examples come from Semelai, in which the A argument is coded with the proclitic la, the P argument with the proclitic hn and the S argument zero-coded. () Semelai (Aslian; Austro-Asiatic: Malaysia) a. kʰbəs pɔdɔŋ ke die tiger that ‘The tiger died.’ b. ki=bukɒʔ la=knlək hn=pintuʔ A=open ERG=husband ACC=door ‘The husband opened the door.’ 11.2.4 Double oblique type The double oblique type (aka the accusative focus type) in (d) in Figure . is reported to be extremely rare. It may exist only as something in transition from one type to another. Certain Iranian languages, e.g. Rushan, are said to be developing a nominative–accusative system out of an earlier ergative–absolutive system. Moreover, in these languages the transitional system does not work across the board, only operating for certain types of noun phrase. Proto-Pamir, from which Rushan historically derives, had only two cases, direct and oblique. These cases were initially organized in different ways, depending on the tense in use, present or past. In present tense, they operated on the nominative–accusative basis, whereas in the past tense they were aligned on the ergative–absolutive basis. In Rushan, the oblique case, which appeared on P in the present tense, was subsequently generalized to code P in the past tense. As a consequence, A and P are now coded identically in the past, with S coded differently from A and P. 
This language thus evinces the alignment type in (d) in Figure .. 


Double oblique alignment is illustrated in the following two Rushan examples:

()  Rushan (Iranian; Indo-European: Afghanistan and Tajikistan)
    a. az=um      tuyd
       SG.NOM=SG  go-PST
       ‘I went.’
    b. mu      way     wunt
       SG.OBL  SG.OBL  saw
       ‘I saw him.’

As the examples in () show, both the A argument and the P argument appear in the oblique case, while the nominative is used to code the S argument. 11.2.5 Neutral type The last logical possibility of (e) in Figure . or the neutral type is irrelevant to case marking per se since there are no case-marking distinctions between S, A, and P. This is, in fact, evident in the absence of case marking in English full nominals (i.e. non-pronominal), as has been discussed with respect to () and () (cf. The girl smiled). Even if overt coding is adopted in this type of system, it will not do much in terms of differentiating S, A, and P. Languages of this type may need to rely on other grammatical means of indicating ‘who is doing X to whom’, for instance, word order.

11.3 Variations on S-alignment

There are alignments of S, A, and P other than the five logically possible ones enumerated in Figure 11.1. These additional alignment types can perhaps be regarded as less straightforward than the ones surveyed in §11.2 in (i) that they may employ more than one alignment type in a systematic manner (i.e. the split-ergative type), (ii) that complexity is added to one of the three argument roles (i.e. S in the active–stative type), or (iii) that it is not the alignment of the three argument roles per se but person and/or discourse salience that ultimately determine(s) the encoding of ‘who is doing X to whom’ (i.e. the hierarchical type).


11.3.1 Split-ergative type

In a number of languages, both the nominative–accusative and ergative–absolutive alignment systems are in active service. This type of mixed case marking system is known as the split-ergative system. Dyirbal is probably the best-known language of this type. Consider the following data from the language.³

()  Dyirbal (Northern Pama-Nyungan; Pama-Nyungan: Australia)
    a. balan    ᶁugumbil-Ø  baŋgul   yaᶉa-ŋgu  balga-n
       CLF.ABS  woman-ABS   CLF.ERG  man-ERG   hit-NFUT
       ‘The man is hitting the woman.’
    b. bayi     yaᶉa-Ø   baŋgun   ᶁugumbi-ᶉu  balga-n
       CLF.ABS  man-ABS  CLF.ERG  woman-ERG   hit-NFUT
       ‘The woman is hitting the man.’
    c. bayi     yaᶉa-Ø   bani-ɲu
       CLF.ABS  man-ABS  come-NFUT
       ‘The man is coming.’
    d. ŋaᶁa  ŋinuna  balga-n
       .NOM  .ACC    hit-NFUT
       ‘I am hitting you.’
    e. ŋinda  ŋayguna  balga-n
       .NOM   .ACC     hit-NFUT
       ‘You are hitting me.’
    f. ŋinda  bani-ɲu
       .NOM   come-NFUT
       ‘You are coming.’

In (b) or (c) the argument in P or S role, respectively, is zero-coded (i.e. the zero absolutive case), whereas in (a) the argument in A role is coded with the ergative suffix -ŋgu. Thus, S aligns with P to the exclusion of A.⁴ In (d), however, the argument in P role is expressed by the second person pronoun form ŋinuna, as opposed to ŋinda, which is used to code the second person pronoun in A and S role, in (e) and (f), respectively. In other words, S aligns with A to the exclusion of P. To wit, Dyirbal relies not only upon ergative–absolutive alignment but also upon nominative–accusative alignment.

What is most intriguing about the split-ergative alignment type is that the division of labour between the nominative–accusative and ergative–absolutive alignment systems is far from random but regular, or systematic. Consider Dyirbal again. In this Australian language, the pronouns—specifically, the first and second person pronouns—operate on the nominative–accusative basis, with nouns (and the third person pronouns) inflecting on the ergative–absolutive basis, as can be seen from the examples in (). In fact, in most known cases of the split-ergative system the division of labour is conditioned by the referential/semantic nature or the inherent lexical content of argument nominals. In Dyirbal the dividing line lies between the first and second person pronouns on the one hand, and the third person pronouns and nouns on the other. Other languages may draw the dividing line differently. For instance, in languages such as Nhanda (Western Pama-Nyungan; Pama-Nyungan: Australia) the nominative–accusative system may spread not only to personal pronouns but also to personal names and kin terms. In the Australian language Mangarayi (Mangarrayi-Maran: Australia) the division is located between inanimates and the rest (i.e. the pronouns and all non-inanimate nouns), whereby the former category is taken care of by the ergative–absolutive system, and the latter by the nominative–accusative system. There are also languages where these two different systems overlap, giving rise to the tripartite system for some types of argument, e.g. the deictics with human reference in Yidiny (Northern Pama-Nyungan; Pama-Nyungan: Australia).

³ In Dyirbal examples, ᶁ represents a lamino-palatal-alveolar stop (Dixon ); ᶉ represents a semi-retroflex continuant.
⁴ Note that the classifiers, which co-occur with the arguments, also operate on the ergative–absolutive basis, that is, bayi for S and P, and baŋgul for A.
Indeed, this distribution of the split-ergative system is systematic and regular to the extent that the converse situation is not thought to exist.⁵ For instance, no languages with the split-ergative system organize pronouns on the ergative–absolutive basis and nouns on the nominative–accusative basis. This particular observation leads to the formulation of the hierarchy in Figure 11.2—or something akin to it. In the present book, it will be referred to as the Nominal Hierarchy, following Dixon (: ).⁶

⁵ This, however, is not entirely without exceptions. In Tibeto-Burman languages such as Hayu, Yamphu, and Belhare, the first person pronouns are reported to operate on the ergative–absolutive basis, i.e. S = P ≠ A (Bickel ; also Siewierska : ).
⁶ Some scholars believe that there is no distinction between first and second person on the hierarchy (e.g. DeLancey ; Wierzbicka ). But there is a large amount of evidence in support of first person being higher than second person (see Dixon : –). Furthermore, there is variation on the actual form of the hierarchy in the world’s languages. For instance, in Ojibwa (Algonquian; Algic: Canada) and southern Cheyenne (Algonquian; Algic: US) second person outranks first person; in some Australian languages demonstratives (or third person) are outranked by personal names (Dixon : ). These languages are only a minority, however. The general tendency in the world’s languages is reasonably well represented in the Nominal Hierarchy in Figure 11.2.



1st person, 2nd person > 3rd person > personal name/kin term > human > animate > inanimate

Figure 11.2 The Nominal Hierarchy
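One way to make the hierarchy's role in split-ergativity concrete is to think of a language as picking a cutoff point: nominals above the cutoff inflect nominative–accusatively, the cutoff nominal and everything below it ergative–absolutively. The following sketch is illustrative only; the list labels follow Figure 11.2, the function name `split_alignment` is ours, and the example cutoff mimics the Dyirbal pattern described in the text.

```python
# Positions on the Nominal Hierarchy (Figure 11.2), leftmost = highest.
HIERARCHY = ["1st/2nd person", "3rd person", "personal name/kin term",
             "human", "animate", "inanimate"]

def split_alignment(nominal, cutoff):
    """Nominals above the cutoff take nominative-accusative marking;
    the cutoff nominal and everything below it take ergative-absolutive."""
    if HIERARCHY.index(nominal) < HIERARCHY.index(cutoff):
        return "nominative-accusative"
    return "ergative-absolutive"

# A Dyirbal-like split: only 1st/2nd person pronouns are nominative-accusative.
print(split_alignment("1st/2nd person", "3rd person"))  # nominative-accusative
print(split_alignment("inanimate", "3rd person"))       # ergative-absolutive
```

Moving the cutoff rightwards (as in Nhanda or Mangarayi) extends nominative–accusative marking further down the hierarchy; what the typological generalization rules out is a split in the opposite direction.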

The referential or semantic nature of argument nominals is not the only factor known to have a bearing on use of the split-ergative alignment system. Tense and aspect also play a role in some languages with this system. In Georgian, for instance, the ergative–absolutive system is employed in the aorist (or, broadly speaking, past) tense, and the nominative–accusative system in the present tense (see Proto-Pamir in §..). Consider:

()  Georgian (Kartvelian: Georgia)
    a. st̩udent̩-i   midis
       student-NOM  goes
       ‘The student goes.’
    b. st̩udent̩-i   c̩eril-s     c̩ers
       student-NOM  letter-ACC  writes
       ‘The student writes the letter.’
    c. st̩udent̩-i   mivida
       student-ABS  went
       ‘The student went.’
    d. st̩udent̩-ma  c̩eril-i     dac̩era
       student-ERG  letter-ABS  wrote
       ‘The student wrote the letter.’

The sentences in (a) and (b) being in the present tense, the suffix -i is used to code both S and A; the argument in P role in (b) is coded differently, by the suffix -s. The aorist tense of the sentences in (c) and (d), in contrast, is responsible for use of the ergative–absolutive




system, whereby A is coded differently from S and P (i.e. -ma vs -i). A similar situation is reported to be common in many Indo-Iranian languages and also in a number of Mayan languages. There is also a variation on tense/aspect-conditioned split-ergativity. In Newari (Mahakiranti; Sino-Tibetan: Nepal), for instance, the tense/aspect split further interacts with mood: the imperative mood is in nominative–accusative and all other moods are in ergative–absolutive. Similar to Newari in this respect are Sumerian (Sumerian: southern Mesopotamia) and Päri (Nilotic; Nilo-Saharan: Sudan).

In some languages, split-ergativity is motivated by the difference between main and subordinate clauses. In Xokleng (Ge-Kaingan; Macro-Ge: Brazil) main clauses may be in either the ergative–absolutive or the nominative–accusative system, but in subordinate clauses the case marking system is consistently organized on the ergative–absolutive basis. Jakaltek (Mayan: Guatemala) is another such language, in which main clauses operate on the ergative–absolutive basis, while certain types of subordinate clause are in the nominative–accusative system.

Lastly, in languages with the split-ergative system case marking may occur on both verbs and argument nominals. In such languages, the nominative–accusative system operates in case marking on the verb, and the ergative–absolutive system in case marking on argument nominals. The converse situation, however, is claimed not to exist in the world’s languages.

11.3.2 Active–stative type

This is the case alignment type that goes by a host of other names in the literature, e.g. split-intransitive, active–inactive, active–static, active–neutral, stative–active, agentive–patientive, and split-S (plus fluid-S). In this type, the case marking of S depends basically on the semantic nature of the intransitive verb. For instance, if the verb refers to an activity which is likely to be under the control of S, the latter will bear the same coding as A. If not (i.e. if it refers to a state or non-controlled activity), it will be coded in the same way as P. (There may be other semantic properties at work; e.g. Donohue and Wichmann .) In other words, S is divided between A and P in terms of case marking. This alignment of S, A, and P is represented schematically in Figure 11.3. There is some debate as to whether the active–stative system should be regarded as a hybrid of, or on a par with,


other alignment systems (e.g. ergative–absolutive) (Dixon : ; Nichols : , , ; Mithun : ; Harris : ; see also n. ). The representation of the active–stative system in Figure 11.3 is not intended to support any particular view but merely to present the alignment of S, A, and P in schematic form.

[Figure 11.3 shows S split between A and P: part of S aligns with A, the rest with P.]

Figure 11.3 Active–stative type

There seem to be two variations on the basic criterion whereby S is split between A and P. First, in some languages with the active–stative alignment system, the semantics of a particular instance or context of use is what really counts for the actual coding of S. Thus, the actual context in which each intransitive verb is used must be assessed in terms of whether the activity referred to by that verb qualifies as a controlled activity or as a state or non-controlled activity. Potentially, then, each intransitive verb is able to assign either A- or P-coding to S. Some verbs may always be interpreted as referring to controlled activities, and others as referring to non-controlled activities or states. But there will be a number of verbs which can denote either controlled activities or non-controlled activities or states. For these verbs S may be coded either as A or as P. This type of active–stative marking is called the fluid-S system. Bats is a language with the fluid-S system.

()  Bats (North Caucasian; Caucasian: Georgia)
    a. tχo     naizdraχ       qitra
       we-ABS  to-the-ground  fell
       ‘We fell to the ground (unintentionally, not our fault).’
    b. atχo    naizdraχ       qitra
       we-ERG  to-the-ground  fell
       ‘We fell to the ground (intentionally, through our own carelessness).’

Other languages with the fluid-S system include (Spoken) Tibetan (Bodic; Sino-Tibetan: China), Tonkawa (isolate: North America), Eastern Pomo (Pomoan; Hokan: US), and other Pomoan languages.


In the other type of active–stative alignment system, intransitive verbs are categorized more or less consistently into two groups: those which code S in the same way as A, and those which code S in the same way as P. Thus, each intransitive verb belongs strictly to either of the two categories: those which assign A-coding to S, and those which assign P-coding to S. For this reason, this type of active–stative alignment system is referred to as the split-S system. Among the languages with the split-S system are Cocho (Popolocan; Oto-Manguean: Mexico), Dakota (Siouan: US), Guaraní (Tupi-Guaraní; Tupian: Paraguay), Iowa-Oto (Siouan: US), Ket (Yeniseian: Russia), Laz (Kartvelian: Georgia and Turkey), and Onondaga (Northern Iroquoian; Iroquoian: US). The split-S system is exemplified here by Laz. ()

Laz (Kartvelian: Georgia and Turkey)
a. bere-k imgars
   child-ERG SG.cry
   ‘The child cries.’
b. bere-Ø oxori-s doskidu
   child-NOM house-DAT SG.stay
   ‘The child stayed in the house.’
c. baba-k mečaps skiri-s cxeni-Ø
   father-ERG SG.give.SG.SG child-DAT horse-NOM
   ‘The father gives a horse to his child.’

The argument in S role in (a) is coded differently from the argument in S role in (b). The former is coded in the same way as the argument in A role in (c), whereas the latter is zero-coded just as the argument in P role in (c) is. There are said to be far more split-S languages than fluid-S languages. Though the examples given so far of the active–stative alignment system all have case marking on argument nominals, it is far more frequently verbs that host active–stative marking. For instance, Nichols () and Siewierska () both find that the active–stative marking system is far more likely to be found on verbs than on argument nominals (see §.). Dixon (: ) puts forward an interesting explanation as to why this may be the case. The split of S in the active–stative alignment system is conditioned by the semantic nature of the verb. This makes the verb a more compatible host for the active–stative marking than the argument nominal (for an earlier similar view, see Mithun : ). Thus, Guaraní and Eastern Pomo, exemplified below, are more representative cross-linguistically of active–stative alignment than Bats and Laz are.

() Guaraní (Tupi-Guaraní; Tupian: Paraguay)
a. a-ma.apo
   SG.A-work
   ‘I work.’
b. še-manuʔa
   SG.P-remember
   ‘I remember.’
c. a-i-pete
   SG.A--hit
   ‘I hit him.’
d. še-pete
   SG.P-hit
   ‘He hit me.’

In (c) the argument in A role is represented by the prefix a- on the verb. Similarly, the intransitive verb in (a), describing a physical action, is coded with the same prefix a- for the argument in S role. In (b), however, the intransitive verb, describing a cognitive process, is coded with a different prefix še- for the argument in S role. This prefix is also used to code the argument in P role in (d). Thus, the coding of S is split between A and P, depending on the semantics of the verb. Eastern Pomo exhibits the same active–stative alignment on the verb. The first person pronoun must make a choice between A- and P-coding (i.e. há˙ and wí, respectively), depending on whether the action described by the verb is controlled (e.g. going) or uncontrolled (e.g. slipping).

() Eastern Pomo (Pomoan; Hokan: US)
a. há˙ mí˙pal š˙aˑk’a
   SG.A SG.M.P killed
   ‘I killed him.’
b. há˙ wá-du˙kìya
   SG.A going
   ‘I’m going.’
c. wí c’e·xélka
   SG.P slipping
   ‘I’m slipping.’
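As an illustrative summary, the split-S/fluid-S contrast can be sketched as a toy decision procedure. The verb classes and the ‘controlled’ feature below are hypothetical assumptions for exposition, not data from any particular language:

```python
# Toy model of active-stative S-coding (illustrative assumptions only).
# Split-S: each intransitive verb is lexically fixed as A-coding or P-coding.
# Fluid-S: the coding is decided per utterance, by whether the event is
# construed as controlled.

SPLIT_S_LEXICON = {               # hypothetical verb classes
    "work": "A", "jump": "A",     # controlled activities -> A-coding
    "remember": "P", "fall": "P", # states/non-controlled -> P-coding
}

def code_s_split(verb: str) -> str:
    """Split-S: look the verb up; its class never varies."""
    return SPLIT_S_LEXICON[verb]

def code_s_fluid(verb: str, controlled: bool) -> str:
    """Fluid-S: the same verb may take either coding, per context."""
    return "A" if controlled else "P"

# The Bats pair above: the same verb 'fall' under two construals.
assert code_s_fluid("fall", controlled=False) == "P"  # not our fault
assert code_s_fluid("fall", controlled=True) == "A"   # through carelessness
assert code_s_split("fall") == "P"  # a split-S language fixes the choice
```

The point of the sketch is that in a fluid-S language the coding function takes the context as an argument, while in a split-S language it consults only the lexicon.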


11.3.3 Hierarchical type

The last type of case alignment to be surveyed in the present section is what is known as the hierarchical alignment system (aka the direct–inverse system). This is not common in the world’s languages but is prevalent in the Algonquian languages, and also attested in Australian and Tibeto-Burman languages. The way this case alignment system operates is conditioned strictly by the Nominal Hierarchy in Figure .. Basically, when the higher being (i.e. the one higher on the Nominal Hierarchy) acts or impinges on the lower being (i.e. the one lower on the Nominal Hierarchy), one set of verbal coding—called direct—is used. If, on the other hand, the lower being acts on the higher being, a different set of verbal coding—called inverse—must be employed. Plains Cree is often cited as a hierarchical-alignment language. ()

Plains Cree (Algonquian; Algic: Canada)
a. ki-tasam-i-n
   -feed-DR-
   ‘You feed me.’
b. ki-tasam-iti-n
   -feed-INV-
   ‘I feed you.’

Note that in both (a) and (b)—regardless of their argument roles—the second person pronominal element appears on the verb as a prefix, while the first person element is represented on the verb as a suffix. Which of the two, the first person or the second person, is A or P is indicated by the presence of the direct or inverse suffix. Note that in Plains Cree second person outranks first person in terms of ‘animacy’. In (a) a direct suffix is used on the verb, thereby identifying the situation described by (a) as one wherein the higher being acts on the lower being (i.e. second person → first person, where → indicates the direction of action). In (b) the situation is reversed, that is, first person → second person; accordingly, an inverse suffix is selected to signal this change in the direction of action. Plains Cree also makes a finer distinction between two different types of third person, namely proximate and obviative. The choice between these two depends on a number of factors, one of which is topicality or discourse salience. Compare:


() Plains Cree (Algonquian; Algic: Canada)
a. asam-ē-w napew-Ø atim-wa
   feed-DR- man-PROX dog-OBV
   ‘The man feeds the dog.’
b. asam-ekw-w napew-wa atim-Ø
   feed-INV- man-OBV dog-PROX
   ‘The man feeds the dog.’

In (a) the nominal napew is the argument that is established as given or topical, while the other nominal atim is neither given nor topical. The former is in A role, whereby the direct suffix is selected for the verb. The situation is reversed in terms of topicality in (b), triggering use of the inverse suffix for the verb. It is interesting to note that the proximate or more topical nominal is zero-coded, while the obviative or less topical nominal is signalled by non-zero coding.7

11.4 Distribution of the S-alignment types

Nichols (), Siewierska (), and Comrie () provide statistical data on the distribution of the case alignment systems, albeit on a smaller scale than some of the word order studies discussed in Chapter .8 Based on a sample of  languages, Nichols (: ) sets out to determine the frequencies of the case alignment types across three different grammatical categories, namely pronouns, nouns, and verbs.9 Her frequency results are summarized in Table ..

7 In typical intransitive sentences, needless to say, the issue of who impinges on whom does not arise, as illustrated in (i):

(i) Plains Cree (Algonquian; Algic: Canada)
    ēyāpic nīsta ni=ka-ācimo=n
    in.due.course .EMPH PV=FUT-narrate=.IND
    ‘In due course, I too will narrate.’

8 Both Nichols () and Siewierska () include neutral alignment, but in the present discussion it will be ignored for the reason explained earlier.

9 In Nichols’s (: ) table of the frequencies of the case alignments, the sample languages are also counted in terms of dominant alignment patterns. By dominant alignment is meant the pattern that is found in the majority of syntactic categories, or the pattern that is the sole non-neutral type, the nominal rather than pronominal pattern, or the most semantic of the patterns (for further discussion, see §.). At any rate, in terms of dominant alignment patterns the same frequencies of the case alignments are observed, that is, the nominative–accusative is the most frequent, followed by the ergative–absolutive. The tripartite is again the most infrequent type.




Table 11.1 Frequencies of case alignment types in Nichols ()

                        Pronoun   Noun   Verb   Total
Nominative–accusative
Ergative–absolutive
Active–stative
Hierarchical
Tripartite
It is clear from Table . that the nominative–accusative type is the most frequently employed case alignment type irrespective of the locus of case marking, i.e. pronouns, nouns, or verbs. The ergative–absolutive type is overall the second most common alignment type, albeit not where case marking on the verb is concerned. The tripartite system is overall the most infrequent alignment type. The other two case alignment systems, active–stative and hierarchical, are confined to verbs.

Siewierska () is another detailed study of the frequencies of the case alignment systems. Her work is based on a sample of  languages, set up according to the sampling technique of Rijkhoff et al. () (see §.. for discussion of this sampling technique). Siewierska’s (: ) results generally confirm Nichols’s, as can readily be seen in Table ..10

Table 11.2 Frequencies of case alignment types in Siewierska ()

                        Pronoun   Noun   Verb   Total
Nominative–accusative
Ergative–absolutive
Active–stative
Hierarchical
Tripartite
10 As a matter of fact, in Siewierska (: ) the internal splits in alignment among pronouns, nouns, and verbs are also considered. Thus, a variety of mixed systems are found to be in use for each of these three categories, e.g. nominative–accusative/ergative–absolutive, nominative–accusative/active–stative. However, these internal splits have been left out of Table . primarily because of their infrequency.




In Table ., too, the most frequent type is the nominative–accusative, followed by the ergative–absolutive. The hierarchical type is the most infrequent in the aggregate (but cf. Table .). Note that, although it is far more likely to be found on verbs, the active–stative type is also attested on pronouns in at least one instance, whereas in Nichols’s () data it is used only in conjunction with verbs. Also worth noting in Tables . and . is the fact that pronouns, nouns, and verbs do not pattern alike with respect to the alignment systems (Siewierska : ). This is because not all languages employ a single case alignment type across the board, as Avar or Latin does. In fact, Siewierska (: ) points out that consistent use of a single case alignment type for pronouns, nouns, and verbs may be the exception rather than the norm in the world’s languages. In Siewierska’s sample, the number of languages exhibiting a single case alignment type for all three syntactic categories is — for the nominative–accusative, for the ergative–absolutive, and for the active–stative. With the neutral system taken into account, the number of languages like Avar or Latin goes down to .

More recently, Comrie () provides similar frequency data, albeit dealing with alignment types attested in nouns and pronouns only. His survey includes  languages for nouns and  languages for pronouns. His frequency results are presented in Table .. (Note that the total numbers of languages in the table do not add up to  or  because of the presence in the samples of languages with the neutral type.)

Table 11.3 Frequencies of case alignment types in Comrie ()

                        Pronoun   Noun   Total
Nominative–accusative
Ergative–absolutive
Active–stative
Tripartite
As in the case of Nichols’s and Siewierska’s studies, the nominative–accusative type comes out as the most frequent, followed by the ergative–absolutive as the second most frequent. However, the active–stative and tripartite types have an equal number of exemplifying languages. The hierarchical system is not included in Comrie’s survey.


11.5 Explaining various case alignment types

Now that we have surveyed the various alignment types and their frequencies, we will try to make sense of them, beginning with four of the five possible alignments of S, A, and P, as presented in Figure ., in the world’s languages: nominative–accusative, ergative–absolutive, tripartite, and double oblique. (We will ignore neutral alignment since it involves no case-marking distinctions.) Discussion of the other ‘less straightforward’ case alignment types, i.e. split-ergative, active–stative, and hierarchical, is deferred to §. and §.. As we have seen in §., the nominative–accusative type is no doubt the most widely attested alignment, and the ergative–absolutive type is the second most common one. The tripartite type is extremely infrequent relative to these two, while the double oblique type is not represented at all in Nichols (), Siewierska (), or Comrie (). The question that immediately arises is: why are these four alignment systems distributed the way they are in the world’s languages? To answer this question, we first need to ask another: what is the function of case marking? There are two major approaches to answering this second question: (i) the discriminatory view and (ii) the indexing view.

11.5.1 The discriminatory view of case marking

One prominent view is that the function of case marking is to distinguish ‘who’ from ‘whom’ in ‘who is doing X to whom’. In other words, case marking is used primarily to distinguish A from P. There is no need to distinguish S from either A or P because S occurs alone in the intransitive sentence. On the other hand, A and P co-occur in the transitive sentence; hence the need to distinguish them. This is known as the discriminatory view of case marking, associated largely with Comrie (, ) and Dixon (, ) (cf. Næss ).
Under this view, then, both nominative–accusative and ergative–absolutive alignment make functional sense because they both distinguish A from P, while aligning S with either A or P. If S is treated in the same way as A, it is the nominative–accusative. If, on the other hand, S is treated in the same way as P, it is the ergative–absolutive. This view may also explain why the tripartite system is rarely found in the
world’s languages when compared with the nominative–accusative and ergative–absolutive. This system is simply over-functional, or uneconomical: S does not need to be distinguished from the other two, and yet it bears case marking distinct not only from A but also from P. The double oblique system, in contrast, is completely dysfunctional in that A and P, although they co-occur in the same sentence type, are not coded differently from each other, while both are distinguished from S, with which they never co-occur. This may explain why the system is extremely rare, and also why it perhaps exists only as a transitional system or as a kind of historical accident, as it were. Proponents of the discriminatory view also point to the fact that the nominative is most frequently realized by zero (or at least a zero allomorph), whereas the accusative has a non-zero realization, and that the ergative is almost always coded with a non-zero element, whereas the absolutive is zero-coded (§.. and §..). This is, in fact, captured in Greenberg’s (b: ) Universal : where there is a case system, the only case which ever has only zero allomorphs is the one which includes among its meanings that of the subject of the intransitive verb (i.e. S). This makes sense if, as proponents of the discriminatory view claim, it is S, not A or P, that does not need to be distinguished from the other argument roles, by virtue of occurring alone in the intransitive sentence. This motivates S to receive zero coding. In the nominative–accusative system S aligns with A, whereby the latter also receives zero coding; in order to distinguish itself from zero-coded A, P has to be coded with a non-zero element. In the ergative–absolutive system it is P that S aligns with, whereby the former is also zero-coded.
This leaves A to have a non-zero realization in order to distinguish itself from the zero-coded P (see Bickel b:  on the possibility of areal influence, genealogical stability, individual etymologies, or paradigm structures giving rise to this distribution). The discriminatory view thus explains well the distribution of the alignment types in Figure .. There are, however, a few issues that it does not seem to be able to handle, at least not as well as it handles the distribution of the four case alignment types. These issues relate to the other case alignment types: active–stative, hierarchical, and split-ergative. A brief discussion of the problems associated with the first two types is presented below, with discussion of the problem associated with the last type deferred to §..


First, in the active–stative system some Ss are coded like As, and other Ss like Ps. This is singularly problematic for the discriminatory view, which claims that S does not need to be distinguished from either A or P.11 Clearly, the function of this distinction in S is not to distinguish argument nominals, because S is the sole argument in the intransitive sentence. The fact that S is sometimes coded in the same way as A and sometimes as P may suggest that some Ss behave semantically as As, and other Ss as Ps. Indeed, as Mithun’s () study reveals, in active–stative languages S is characterized as representing either ‘the participant who performs, effects, instigates, or controls the situation denoted by the predicate’ (i.e. A), or ‘the participant who does not perform, initiate, or control any situation but rather is affected by it in some way’ (i.e. P). It thus seems that, in so far as the active–stative system is concerned, the function of case marking is that of characterization (see §.. for further discussion), not of discrimination. Nonetheless, proponents of the discriminatory view may be quick to point to the low frequency of the active–stative system itself as strong evidence in support of their position. Second, the hierarchical system does not seem to involve the function of distinguishing A from P at all. This system instead registers the relative positions of A and P on the Nominal Hierarchy. In other words, the whole situation involving both A and P—which of the two, A or P, is ranked higher on the Nominal Hierarchy?—is taken to be the basis for the choice between direct and inverse coding. The hierarchical system thus also poses a problem for the discriminatory view of case marking. But then proponents of this view can again argue that this very problem may explain why there are only a small number of languages with this particular case alignment system.

11 Dixon (: ) entertains the possibility of treating active–stative alignment as a hybrid of nominative–accusative and ergative–absolutive alignment (for a similar view, see Nichols : , , , who also points out that most active–stative languages seem to have a nominative–accusative base or slant, in that most intransitive subjects are formally identical to transitive subjects). Analysed in this way, the active–stative system may be taken account of by the discriminatory view. Mithun (: ), however, is firmly of the opinion that active–stative alignment is a coherent, semantically motivated grammatical system in itself. Indeed active–stative alignment is very different from the other two alignments in that, as Harris (: ) points out, in active–stative alignment some Ss are treated differently from others, while in both nominative–accusative and ergative–absolutive alignment S is treated uniformly.




11.5.2 The indexing view of case marking

Based on the observation that confusability at the clausal level between A and P can readily be tolerated in many languages, Hopper and Thompson (: ) point out that the discriminatory function of case marking has been overstated to the exclusion of its indexing function (see Næss ). To put it differently, the function of the case marking of the argument nominal in A role is to index the A-hood of that nominal, whereas the function of the case marking of the argument nominal in P role is to index the P-hood of that nominal. This view can immediately explain, for instance, the existence of the active–stative system: S is sometimes coded in the same way as A and at other times as P because S is similar to A in some ways and to P in other ways. Hopper and Thompson’s argument is based on the notion of transitivity, which ‘is traditionally understood as a global property of an entire clause, such that an activity is “carried-over” or “transferred” from an agent to a patient’ (Hopper and Thompson : ). They identify a number of parameters of transitivity, including participants, kinesis, volitionality, agency, affectedness of P, and individuation of P. Each of these parameters constitutes a scale along which clauses can be ranked or compared. For instance, a clause with two participants is more transitive than a clause with a single participant, because no transfer of an activity can occur unless there are at least two participants involved in the situation. Or take affectedness of P: a clause with a completely affected P is more transitive than a clause with a partially affected P. The more features a sentence has at the high ends of the parameters (e.g. two or more participants, a completely affected P), the more transitive it is. Conversely, the more features a sentence has at the low ends of the parameters (e.g.
one participant, a partially affected P), the less transitive it is. Transitivity can be seen to be a cluster of these parameters (for further discussion, see §.). Thus, a natural consequence of this conception of transitivity is that some Ps are more P-like than other Ps. The parameters that pertain most to the present discussion of the indexing view of case marking are: individuation of P, affectedness of P, aspect, and affirmation. For instance, the distinction between a totally affected P and a partially affected P is reflected in P-coding. This phenomenon—known as differential object marking in the
literature—is very commonly found in languages of eastern and northeastern Europe, e.g. Latvian (Baltic; Indo-European: Latvia), Lithuanian (Baltic; Indo-European: Lithuania), Polish (Slavic; Indo-European: Poland), Russian (Slavic; Indo-European: Russia), Finnish (Finnic; Uralic: Finland), Estonian (Finnic; Uralic: Estonia), and Hungarian. In Hungarian it is reflected in the alternation between the accusative case (for a totally affected P) and the partitive case (for a partially affected P). ()

Hungarian (Ugric; Uralic: Hungary)
a. olvasta a könyvet
   read.s/he.it the book-ACC
   ‘He read the (whole) book.’
b. olvasott a könyvböl
   read.s/he the book-PRTV
   ‘He read some of the book.’
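Hopper and Thompson’s cluster conception can be thought of as a simple feature count: the more high-transitivity features a clause shows, the more transitive it is. The following toy sketch uses an assumed, simplified inventory of parameters encoded as boolean features; it is illustrative only, not their actual scoring procedure:

```python
# Toy transitivity score after Hopper & Thompson's cluster conception:
# each parameter contributes one point at its 'high' end.
HIGH = {
    "two_or_more_participants", "kinesis_action", "volitional",
    "agentive_a", "p_totally_affected", "p_individuated",
}

def transitivity(features: set) -> int:
    """Count how many high-transitivity features the clause shows."""
    return len(features & HIGH)

read_whole_book = {"two_or_more_participants", "kinesis_action",
                   "volitional", "agentive_a", "p_totally_affected",
                   "p_individuated"}
read_some_of_book = read_whole_book - {"p_totally_affected"}

# 'He read the (whole) book' outranks 'He read some of the book',
# mirroring the Hungarian ACC vs PRTV alternation above.
assert transitivity(read_whole_book) > transitivity(read_some_of_book)
```

On this view the ACC/PRTV alternation indexes a one-point difference on a single parameter, affectedness of P, while the other parameters are held constant.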

Hopper and Thompson’s indexing view of case marking also extends to A-coding. That is, the function of A-coding is to index A-hood. In languages of Europe, the Caucasus, South Asia, and North America, when a predicate of experience or cognition—that is, no action—is involved, what is expressed as the subject in English is marked by the so-called dative subject, a phenomenon treated under the rubric of differential subject marking. ()

Georgian (Kartvelian: Georgia)
Gelas uqvars Nino-Ø
Gela-DAT love.he.her Nino-NOM
‘Gela loves Nino.’

()

Spanish (Romance; Indo-European: Spain)
Me gusta la comida china
.DAT like the food Chinese
‘I like Chinese food.’

()

Malayalam (Southern Dravidian; Dravidian: India)
enikku raamane ariññilla
-DAT Raman-ACC knew-not
‘I didn’t know Raman.’

The semantic nature of the predicate of experience or cognition does not involve the prototypical transferral of an activity from an agent to a patient. Rather, it denotes a certain physical, emotional, or cognitive condition or state of the participant. As such, the clause built on such a
verb is not characterized as transitive (enough). This is reflected in the case marking of the experiencer or cognizer in languages such as Georgian, Spanish, and Malayalam: it is not marked in the same way as the prototypical A is.

11.5.3 The discriminatory view vs the indexing view

The indexing view can better explain the active–stative system, and case marking conditioned primarily by features of transitivity such as affectedness of P and kinesis (action vs no action), than the discriminatory view does. The discriminatory view of case marking, on the other hand, can better account for the rarity of, for instance, the double oblique system and the tripartite system. Recall Hopper and Thompson’s (: ) observation that confusability at the clausal level between A and P is tolerated in many languages. If so, it rather comes as a surprise that the double oblique system is not more frequent in the world’s languages than it actually is (e.g. through sheer randomness); it exists only as a transitional system in a few languages. Moreover, it is also a bit of a conundrum why the tripartite system is not as common as the ergative–absolutive system, if not the nominative–accusative system. Though S is similar to A in some ways and to P in other ways, by virtue of appearing alone in the intransitive sentence S should at least be indexed differently from A and P—both of which occur in the transitive sentence. Last but not least, it is also the discriminatory view, not the indexing view, that makes sense of Greenberg’s (b) Universal , mentioned in §..: in a case system the only case which ever has a zero realization is S. This leads to the conclusion that, rather than being competing views, the discriminatory and the indexing view complement each other in the overall understanding of case marking.
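The discriminatory logic reviewed above can be sketched as a small computation over alignment patterns. The case labels below are schematic placeholders, and the ‘economy’ measure is a deliberately crude toy, not a claim about any actual theory’s metric:

```python
# Each alignment maps the roles S, A, P onto schematic case labels.
ALIGNMENTS = {
    "nominative-accusative": {"S": "NOM", "A": "NOM", "P": "ACC"},
    "ergative-absolutive":   {"S": "ABS", "A": "ERG", "P": "ABS"},
    "tripartite":            {"S": "S",   "A": "ERG", "P": "ACC"},
    "double-oblique":        {"S": "S",   "A": "OBL", "P": "OBL"},
}

def distinguishes_a_from_p(align):
    """The discriminatory criterion: A and P, which co-occur, must differ."""
    return align["A"] != align["P"]

def n_cases(align):
    """A rough economy measure: how many distinct case labels are used."""
    return len(set(align.values()))

# Nominative-accusative and ergative-absolutive: discriminating and economical.
assert distinguishes_a_from_p(ALIGNMENTS["nominative-accusative"])
assert n_cases(ALIGNMENTS["nominative-accusative"]) == 2
# Tripartite: discriminating but uneconomical (three distinct cases).
assert distinguishes_a_from_p(ALIGNMENTS["tripartite"])
assert n_cases(ALIGNMENTS["tripartite"]) == 3
# Double oblique: fails the discriminatory criterion altogether.
assert not distinguishes_a_from_p(ALIGNMENTS["double-oblique"])
```

The sketch makes the asymmetry concrete: only the two frequent alignments both pass the discriminatory test and do so with the minimum of two case distinctions.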
Recently, Næss () makes an attempt to unify these two views of case marking, arguing that case marking has both discriminatory and indexing functions. In particular, she takes a prototype approach to transitivity, appealing to extensions of the discriminatory and indexing functions in order to account for case marking patterns in non-prototypical transitive sentences. There are still two other case marking systems that remain to be taken account of: hierarchical and split-ergative. As has already been pointed out, the former operates on a different basis from the other case
alignments. The function of this system does not seem to be either to distinguish A from P or to index S, A, and P, but rather to indicate whether the action is transferred from a higher being to a lower being or vice versa. Thus, this system has a totally different conceptual basis. The split-ergative system, on the other hand, can perhaps be taken to be a mixture of nominative–accusative alignment and ergative–absolutive alignment. Interpreted in this way, this system as a whole can be explained by the discriminatory view. But what it fails to explain is the fact that the nominative–accusative system and the ergative–absolutive system work from the top and the bottom of the Nominal Hierarchy (Figure .), respectively, and not the other way around. Though it may take account of the two systems individually, the indexing view may also find it difficult to explain why the distribution of nominative–accusative and ergative–absolutive alignment within the split-ergative system is the way it is. For one thing, it is not entirely clear why the nominative-coded A should differ in terms of A-hood from the ergative-marked A, and, if so, how; nor why the accusative-coded P should differ in terms of P-hood from the absolutive-coded P.
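The hierarchical system’s direct–inverse choice, with its distinct conceptual basis, can be sketched as a simple rank comparison. The numeric rank values are illustrative assumptions; following Plains Cree, second person is ranked above first:

```python
# Toy direct/inverse selector (illustrative). Higher value = higher on the
# Nominal Hierarchy. Following Plains Cree, 2nd person outranks 1st, and a
# proximate 3rd person outranks an obviative one.
RANK = {"2": 4, "1": 3, "3PROX": 2, "3OBV": 1}  # assumed rank values

def verb_coding(a_role: str, p_role: str) -> str:
    """Direct when the higher being acts on the lower; inverse otherwise."""
    return "DIRECT" if RANK[a_role] > RANK[p_role] else "INVERSE"

assert verb_coding("2", "1") == "DIRECT"         # 'You feed me.'
assert verb_coding("1", "2") == "INVERSE"        # 'I feed you.'
assert verb_coding("3PROX", "3OBV") == "DIRECT"  # man (PROX) feeds dog (OBV)
```

Note that the function inspects the whole A–P configuration rather than coding each argument separately, which is exactly why neither the discriminatory nor the indexing view fits it comfortably.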

11.6 The Nominal Hierarchy and the split-ergative system Silverstein () is the first to address the nature of the split-ergative system with a view to providing an explanation of the distribution of nominative–accusative and ergative–absolutive alignment within the split-ergative system. His theory is based largely on the inherent lexical/ referential content or the semantic nature of nominals being the conditioning factor for the choice between the two alignments (also see Dixon , ). The inherent lexical content of nominals relates to agency (or agentivity), which Silverstein organizes into a hierarchy of binary features. This hierarchy is later revised into a single multi-valued one by Dixon () as represented in somewhat different form in Figure .. Silverstein’s hierarchy is often referred to alternatively as the Agency Hierarchy (e.g. Dixon ), the Animacy Hierarchy (e.g. Comrie , ), the Person/Animacy Hierarchy (e.g. Blake ), or, rather neutrally, the Nominal Hierarchy (e.g. Dixon ). For reasons to be explained shortly, the Nominal Hierarchy is chosen in the present book to refer to Silverstein’s hierarchy. 


Silverstein (: ) claims that the Nominal Hierarchy represents ‘the semantic naturalness for a lexically-specified NP to function as agent of a true transitive verb, and inversely the naturalness of functioning as patient of such’. In other words, some nominals are inherently more likely to be As than Ps, whereas other nominals are inherently more likely to be Ps than As. In many situations, for instance, humans are indeed more likely to be As than Ps, while inanimate entities are more likely to be Ps than As. A much stronger restatement of this claim may be that it is ‘natural’ for a higher being to impinge on a lower being but it is ‘unnatural’ for a lower being to impinge on a higher being. If so, nominative–accusative alignment and ergative–absolutive alignment work continuously from the top and the bottom of the Nominal Hierarchy, respectively, precisely because in this way nominals will be zero-coded when they are in their natural functions, and overtly coded when in their unnatural roles. Thus, human nominals will be zero-coded when in A role but overtly coded when in P role; conversely, inanimate nominals will be zero-coded when in P role but overtly coded in A role. This indeed is a very functional explanation. To give a brief mundane analogy, consider the function of the stop light on the back of a vehicle, based on a similar principle. The natural function of a vehicle is to move from point A to point B. Thus, when the vehicle is moving or in its natural role, the stop light is off (or it is zero-coded). But, as the vehicle comes to a halt or comes into its unnatural role, the stop light is activated (or it is overtly coded). If the vehicle does not work this way, it will be taken off the road by the police! 
As the reader recalls from §.. and §.., the nominative case is most frequently realized by zero (or at least a zero allomorph), whereas the accusative case has a non-zero realization, and also that it is almost always the case that the ergative case is coded with a non-zero element, whereas the absolutive case is coded with a zero element (or at least a zero allomorph). In conjunction with this observation it becomes very clear why nominative–accusative alignment covers a continuous segment from the top of the Nominal Hierarchy, and ergative–absolutive alignment from the bottom of the Nominal Hierarchy, and never the other around. This is because in this way nominals higher on the hierarchy will be zero-coded when in A role. But, when they are in P role, they will be coded with the non-zero accusative case. Conversely, nominals lower on the hierarchy will be zero-coded when in P role. But, when they are in A role, they will be coded with the non-zero ergative case. 

OUP CORRECTED PROOF – FINAL, 27/11/2017, SPi

CASE ALIGNMENT

To see how this principle works in an actual language with the split-ergative system, consider Dyirbal (Dixon ). The core case marking of this Australian language is presented in Table 11.4.

Table 11.4 The core case marking of Dyirbal, based on Dixon ()

        Nominative–accusative    Ergative–absolutive
  A     -Ø                       -ŋgu
  S     -Ø                       -Ø
  P     -na                      -Ø
For Dyirbal, the point where nominative–accusative and ergative–absolutive alignment meet in the hierarchy is located between the first and second person pronouns on the one hand, and the rest, including the third person pronouns, on the other.

()  Dyirbal (Northern Pama-Nyungan; Pama-Nyungan: Australia)
    a. ɲurra-Ø bayi yaᶉa-Ø balga-n
       you.all-NOM CLF.ABS man-ABS hit-NFUT
       'You all hit the man.'
    b. ɲurra-na baŋgul yaᶉa-ŋgu balga-n
       you.all-ACC CLF.ERG man-ERG hit-NFUT
       'The man hit you all.'

In terms of the Nominal Hierarchy, the referent of the second person plural pronoun is much higher than that of the common noun. Thus in (a), where the referent of the second person plural pronoun acts on that of the common noun, both nominals are zero-coded, i.e. they receive no overt coding. In (b), on the other hand, the situation is reversed, and the two nominals in question are coded with non-zero suffixes. The situation denoted in (a) has its two participants in their natural roles, whereas in (b) these participants occur in their unnatural roles. This difference is reflected in the choice between nominative–accusative and ergative–absolutive alignment, and ultimately in the difference between zero and non-zero coding. The foregoing explanation of the split-ergative system rests crucially on the Nominal Hierarchy being interpreted in terms of agency and animacy: e.g. a human (or more animate) entity is likely to be more agentive than an inanimate entity. Upon closer inspection, however, the parameters of agency and animacy clearly cannot account for the whole spectrum of the hierarchy. For instance, though humans (or
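Table 11.4 can be read as a simple lookup. A minimal Python sketch (the function name is hypothetical; the suffixes are those of Table 11.4) reproduces the case marking of the two Dyirbal sentences above:

```python
# Sketch of Dyirbal core case marking (Table 11.4): 1st/2nd person
# pronouns take the nominative-accusative suffixes, everything else
# takes the ergative-absolutive suffixes.

SUFFIX = {
    "nom-acc": {"A": "-Ø", "S": "-Ø", "P": "-na"},    # 1st/2nd person pronouns
    "erg-abs": {"A": "-ŋgu", "S": "-Ø", "P": "-Ø"},   # all other nominals
}

def dyirbal_suffix(is_1st_or_2nd_pronoun, role):
    alignment = "nom-acc" if is_1st_or_2nd_pronoun else "erg-abs"
    return SUFFIX[alignment][role]

# 'You all hit the man': both arguments in their natural roles
print(dyirbal_suffix(True, "A"))    # -Ø
print(dyirbal_suffix(False, "P"))   # -Ø
# 'The man hit you all': both arguments in their unnatural roles
print(dyirbal_suffix(True, "P"))    # -na
print(dyirbal_suffix(False, "A"))   # -ŋgu
```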

11.6 The Nominal Hierarchy and the split-ergative system

non-human animate entities) are more agentive than inanimate entities, it is implausible to suggest that first person is somehow more agentive than third person or other humans. The same comment can be made of animacy. It simply does not make much sense to claim that first person is more animate than other human beings, and so on. They are all equally animate and human.12 For these reasons, the neutral label of the Nominal Hierarchy has been selected in the present book to refer to Silverstein's hierarchy. In fact, if the Nominal Hierarchy were based only on the parameters of agency and animacy, one would expect the split 'to occur more commonly between human and non-human, or between animate and inanimate NPs', as is correctly pointed out by DeLancey (: ). But, while the split can potentially occur at any point in the hierarchy, the general tendency is actually for first and second person pronouns to opt out of ergative–absolutive alignment in favour of nominative–accusative alignment, thereby giving rise to a split between first and second person pronouns on the one hand, and the rest on the other. As a result, third person pronouns tend to receive ergative case marking much more often than do first and second person pronouns. Also widespread is the split between pronouns and full nominals, but other splits are reported to be rare occurrences.13 Why is it, then, that first and second person pronouns tend to outrank the other nouns and/or third person pronouns on the hierarchy? Before attempting to answer this question in the next section, we need to consider briefly one alternative explanation of the distribution of nominative–accusative and ergative–absolutive alignment within the split-ergative system.
The reader will recall from §. that Haspelmath (; also ) has argued that the concept of markedness is superfluous and should be abandoned in favour of frequency or, more accurately, frequency of use, as the basis for the range of phenomena identified and investigated under the purview of markedness. In particular, he invokes the Zipfian idea of economy: '[t]he more predictable the sign is, the shorter it is.' Since predictability is entailed by frequency (i.e. X is predictable because it occurs frequently enough), Haspelmath goes on to infer that '[t]he more frequent the sign is, the shorter it is'. Thus, if human nominals are natural As and unnatural Ps, they will occur more frequently as As than as Ps. This means, in Haspelmath's view, that the case marking for human As will be 'shorter' than the case marking for human Ps, and conversely that the case marking for human Ps will be 'longer' than the case marking for human As. Inanimate nominals, by contrast, are natural Ps and unnatural As. This means that the case marking for inanimate Ps will be 'shorter' than the case marking for inanimate As, and conversely that the case marking for inanimate As will be 'longer' than the case marking for inanimate Ps. Since zero coding is shorter than overt coding (and overt coding longer than zero coding), this way of thinking can equally well explain the distribution of nominative–accusative and ergative–absolutive alignment within the split-ergative system, exemplified by the Dyirbal data in Table 11.4. That said, there are also languages that code all four of the cases in question overtly. It would thus be interesting to find out whether the nominative and the absolutive are 'shorter' (e.g. fewer phonemes or syllables) than the accusative and the ergative, respectively, in such languages as well; unfortunately, no systematic investigation seems to have been carried out along these lines. Lastly, while frequency of use can be a useful concept in explaining the nature of the split-ergative system, we also need to be mindful that frequency may be merely a symptom of motivating or causal factors (e.g. Greenberg  []: ; Croft : , –; see also §.).

12 In this context, note that the Nominal Hierarchy is also referred to as the Agency Hierarchy, the Animacy Hierarchy, or the Person/Animacy Hierarchy. These labels obviously are attempts at highlighting different parameters prominent in the hierarchy. Croft (: ) also points out that the component of definiteness must be recognized as part and parcel of the Nominal Hierarchy because pronouns by definition are inherently definite, as opposed to common nouns (covering human, animate, and inanimate on the hierarchy). Thus, to call the Nominal Hierarchy by one of these names does not really capture what is hidden behind the hierarchy.

13 There do not seem to be any statistical data to substantiate these observations.
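The economy argument can be illustrated with a toy calculation (the frequency figures below are invented for illustration): zero-coding the role in which a nominal occurs most frequently minimizes the total amount of overt coding produced.

```python
# Toy illustration of the Zipfian economy argument: assigning zero
# coding to the role a nominal fills most frequently yields the fewest
# overtly coded occurrences overall.

def total_overt_codings(freq_as_A, freq_as_P, zero_role):
    """Number of overtly coded occurrences if `zero_role` is zero-coded."""
    return freq_as_P if zero_role == "A" else freq_as_A

# Human nominals: natural As, so assume they occur as A far more often
# than as P (90 vs 10 occurrences; invented numbers).
human_A, human_P = 90, 10
print(total_overt_codings(human_A, human_P, zero_role="A"))  # 10 overt codings
print(total_overt_codings(human_A, human_P, zero_role="P"))  # 90 overt codings
```

Zero-coding the A role (i.e. nominative–accusative alignment for human nominals) is the cheaper option; for inanimate nominals, with the frequencies reversed, zero-coding P (ergative–absolutive alignment) wins instead.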

11.7 Case marking as an interaction between attention flow and viewpoint

As pointed out in §., although the split can potentially occur at any point in the Nominal Hierarchy, the strong tendency in split-ergative languages is for it to be made between first and second person pronouns on the one hand and the rest on the other, that is, with first and second person pronouns opting out of ergative–absolutive alignment in favour of nominative–accusative alignment. Also widespread is the split between pronouns and full NPs, but other splits are reported to be rare occurrences. How do we make sense of all this?

Mallinson and Blake (: ) argue that the reason why first and second person pronouns opt out of ergative–absolutive alignment is that these pronouns encode the speech act participants, i.e. the speaker and the hearer. The speech act participants are more topical than other nominals, since they are 'more interesting to talk about than other people [or other things]' (Wierzbicka : ). Thus, the Nominal Hierarchy 'represents a relative centre of interest', to the effect that 'events tend to be seen from the point of view of the speech act participants' (Blake : ). This interpretation can also be applied to entities further down the hierarchy: human is more topical than animate, and animate, in turn, more topical than inanimate, because humans have less and less in common with non-human entities as we move down the hierarchy. To wit, the Nominal Hierarchy can now be conceived of as a hierarchy of topicality. Proponents of this topicality-based idea are also of the opinion that, the speaker and hearer being inherently topical, their viewpoint is the natural viewpoint from which events are described. The speech act participants' viewpoint being natural, first and second person pronouns do not receive ergative coding (i.e. they receive nominative coding, which tends to be zero). Third person pronouns and other nominals, on the other hand, do attract ergative coding because non-speech-act participants' viewpoint is not natural. If neither of the speech act participants is involved in the event depicted, then it is more natural to describe the situation from the viewpoint of the human beings involved in that situation (humans close and/or known to the speech act participants in preference to those who are not) than from the viewpoint of non-human animate or inanimate entities involved, because the speech act participants relate to, or empathize with, human beings better than with animals or inanimates.
This explains why in some languages the split may occur much lower than first and second person pronouns. In Nhanda (Western Pama-Nyungan; Pama-Nyungan: Australia), for instance, not only pronouns but also personal names and kin terms opt out of ergative–absolutive alignment in favour of nominative–accusative alignment. The foregoing discussion suggests that the Nominal Hierarchy can now be conceptualized as a set of concentric circles of what DeLancey (: ) calls ‘egocentrism’, as presented in Figure .. As we move outwards from the centre of the circles (i.e. first and second person), the speech act participants’ empathy with 

other entities on the hierarchy decreases. Conversely, as we move inwards to the centre of the circles, the speech act participants' empathy with others on the hierarchy increases.

Figure 11.4 The concentric circles of egocentrism (from the outermost circle inwards: inanimate, animate, human, personal names and kin terms, first and second person)

It is DeLancey () who develops this topicality-based interpretation with a view to putting forth a unified explanation of not only the Nominal-Hierarchy-based split-ergative system but also other case alignment systems. He introduces two fundamentally psychological notions: attention flow and viewpoint. The order of nominal constituents in a clause reflects attention flow, which is the order in which the speaker expects the hearer to attend to them. Events have an inherent natural attention flow, which is the flow of attention in witnessing how events actually unfold spatially and/or temporally. In a typical transitive situation, the natural attention flow is from the agent to the patient. Moreover, events can be seen or reported from a number of possible viewpoints. The natural viewpoint, however, is that of the speech act participants as has already been explained in relation to the topicalitybased interpretation of the Nominal Hierarchy: the speech act participants are located at the deictic centre of the speech act, thereby constituting natural viewpoint loci. 

Attention flow and viewpoint sometimes coincide, while at other times they may not. According to DeLancey (), the split-ergative system represents a resolution of conflicts between the natural viewpoint and attention-flow assignments. If attention flow agrees with viewpoint, what is being described is a natural event in the universe of discourse. If, on the other hand, the two do not agree with each other, what is being described is not a natural event in the universe of discourse. The difference between these two scenarios, in turn, is reflected directly in the case marking of the participants in the event. Consider the examples from Dyirbal again, repeated here.

(31) Dyirbal (Northern Pama-Nyungan; Pama-Nyungan: Australia)
     a. ɲurra-Ø bayi yaᶉa-Ø balga-n
        you.all-NOM CLF.ABS man-ABS hit-NFUT
        'You all hit the man.'
     b. ɲurra-na baŋgul yaᶉa-ŋgu balga-n
        you.all-ACC CLF.ERG man-ERG hit-NFUT
        'The man hit you all.'

In (31a) attention flow and viewpoint coincide with each other. Thus, both nominals are zero-coded, thereby indicating that the situation described is a natural one. In (31b), in contrast, the two parameters do not coincide, whereby the nominals are overtly coded. The use of non-zero coding indicates that the situation depicted in (31b) is an unnatural one. The difference between (31a) and (31b) can be schematized as in Figure 11.5. (Note that attention flow is represented by a thick arrow, and the natural viewpoint by a thin arrow.) Recall that in the case of the Nominal-Hierarchy-based split-ergativity the direction of the natural viewpoint is determined ultimately by the concentric circles of egocentrism in Figure 11.4.

(31a′) attention flow and viewpoint both run from ɲurra 'you.all' to yaᶉa 'man'
(31b′) attention flow runs from yaᶉa 'man' to ɲurra 'you.all', while viewpoint still runs from ɲurra to yaᶉa

Figure 11.5 Schematization of the Nominal-Hierarchy-based split-ergativity



In the diagram of (a0 ), the two arrows are pointed in the identical direction, while that is not the case of (b0 ). The nominals in (b) (= (b0 )) are both overtly coded because there is a conflict between attention flow and viewpoint. The nominals in (a) (= (a0 )), in contrast, are zero-coded because there is no such conflict. The hierarchical system is also explained in a similar fashion. As with the Nominal Hierarchy-based split-ergative system viewpoint is assigned more or less on the basis of the concentric circles of egocentrism in Figure .. The natural viewpoint is thus with the speech act participants. In the hierarchical system, the verb in the transitive clause bears overt inverse coding when P is a speech act participant and A is not. Conversely, when A is a speech act participant and P is not, the verb bears zero direct coding—in some languages, however, both direct and inverse coding may have non-zero realizations (e.g. Plains Cree in ()). Moreover, in languages with this type of system it is usual to find a further distinction within the category of the speech act participants. In some languages (e.g. Jyarong, Nocte (Northern Naga; Sino-Tibetan: India)), first person outranks second person, while in others (e.g. Plains Cree, Potawatomi, and other Algonquian languages) the latter outranks the former. The following examples come from Jyarong. ()

Jyarong (Tibeto-Burman; Sino-Tibetan: China) a. nga mə nasno-ng I he scold- ‘I will scold him.’ b. mə-kə nga u-nasno-ng he-ERG I INV-scold- ‘He will scold me.’

In (a) attention flow and viewpoint coincide (i.e. a natural situation), whereby the verb bears zero direct coding. In (b), these two come into conflict (i.e. an unnatural situation) with the effect that the verb is marked by the inverse prefix u-. Also note that this presence or absence of the non-zero inverse coding finds an exact parallel in the appearance of ergative marking -kə on the nominal in A role in (b), as opposed to (a). The difference between (a) (= (a0 )) and (b) (= (b0 )) can be schematized in Figure .. 

(32a′) attention flow and viewpoint both run from nga 'I/me' to mə 'he/him'
(32b′) attention flow runs from mə 'he/him' to nga 'I/me', while viewpoint runs from nga to mə

Figure 11.6 Schematization of the hierarchical system

Finally, the active–stative system can similarly be understood in terms of the presence or absence of a conflict between attention flow and viewpoint. Recall that in this system some Ss are treated in the same way as A (e.g. (33b); cf. (33a)), and other Ss as P (e.g. (33c); cf. (33a)), as illustrated by the data from Eastern Pomo.

(33) Eastern Pomo (Pomoan; Hokan: US)
     a. mí˙p'-Ø mí˙p-al šá˙k'a
        he-NOM he-ACC killed
        'He killed him.'
     b. mí˙p'-Ø káluhuya
        he-NOM went.home
        'He went home.'
     c. mí˙p-al xá ba˙kú˙ma
        he-ACC water fell
        'He fell in the water.'

In (33b) the referent of the nominal in S role has volition in his action (i.e. going home), whereas in (33c) the referent of the nominal in S role lacks such volition. '[T]he onset point of the event . . . [in (33b)] is with the actor at the point in time when he first intends the act, rather than the point at which he initiates the action' (DeLancey : ); otherwise the action will be non-volitional. In (33c), in contrast, the 'event originates somewhere other than with a decision on the part of the actor' (DeLancey : ). To put it differently, while in both (33b) and (33c) viewpoint is assigned to the actor (there is no other possible viewpoint), attention flow starts from different sources: the actor in (33b), and someone or something other than the actor in (33c), e.g. a sudden gust of wind causing the actor to lose his or her balance. Thus, there is a conflict between attention flow and viewpoint in (33c)
(= (33c′)), but not in (33b) (= (33b′)). As a result, the nominal referring to the actor in (33c) is overtly coded for case (an unnatural situation), whereas the same nominal is zero-coded in (33b) (a natural situation). This difference is represented in schematized form in Figure 11.7. (The terminal point in (33b′) and the onset point in (33c′) are absent from the diagrams because the event is an intransitive one.)

(33b′) attention flow and viewpoint both originate with mí˙p' 'he'
(33c′) attention flow originates with someone or something other than mí˙p' 'he', while viewpoint remains with him

Figure 11.7 Schematization of the active–stative system
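The active–stative logic can likewise be reduced to a one-condition sketch (hypothetical function name; the suffixes are those of the Eastern Pomo examples above):

```python
# Sketch of the active-stative logic described for Eastern Pomo: a sole
# argument S is coded like A when the event originates with the actor's
# own volition (no conflict between attention flow and viewpoint), and
# like P otherwise.

def s_case(volitional):
    # "-Ø" stands for the zero-coded nominative, "-al" for the overt accusative
    return "NOM (-Ø)" if volitional else "ACC (-al)"

print(s_case(True))    # NOM (-Ø)  -> 'He went home.'
print(s_case(False))   # ACC (-al) -> 'He fell in the water.'
```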

DeLancey's theory based on attention flow and viewpoint provides an insightful account of the split-ergative, hierarchical, and active–stative systems. This certainly is a big theoretical improvement over previous accounts in that it identifies the same conceptual basis for a number of seemingly disparate case marking systems. It has been pointed out (e.g. Croft : ), though, that DeLancey's theory cannot take account of animacy effects other than those associated with the speech act participants. This claim, however, is far from warranted because, as DeLancey (: ) suggests, the scale of 'empathy' can be invoked, with the effect that, being animate and human, the speaker is understood to empathize more with other human beings than with animals, and more with animals than with inanimates. This is, in fact, precisely what underlies the concentric circles of egocentrism in Figure 11.4. We have so far assumed that the speech act participants constitute a single category in the Nominal Hierarchy. But, as has been shown in relation to the hierarchical system, first person (i.e. the speaker) outranks second person (i.e. the hearer) in some languages, while in others the reverse ranking is observed. Even within the same genealogical group the ranking of these two speech act participants may vary from one language to another. For instance, in the hierarchical system of Plains Cree (Algonquian; Algic: Canada) second person is treated as higher than first person on the hierarchy (cf. () and ()). In Blackfoot (Algonquian; Algic: Canada and US), in contrast, first person → second
person is coded as direct, and second person → first person as inverse. But perhaps it is not unreasonable to treat the speech act participants as a single category in the Nominal Hierarchy, with possible cross-linguistic variation between first person → second person and second person → first person, although Dixon (: –) is of the opinion that there may still be a distinction between first person and second person on the Nominal Hierarchy. As evidence in support, he points to the fact that in the majority of languages in which there is such a distinction, it is first person that is located in the higher position.

11.8 P-alignment types

The concept of (S-)alignment has recently been extended to ditransitive constructions, which are 'the most typical three-argument constructions' (Malchukov et al. : ; also Blansitt ; Dryer ; Siewierska  and : –; Haspelmath a). Ditransitive constructions are built on verbs of physical and also abstract transfer, e.g. 'give', 'lend', 'sell', 'show', 'tell', as illustrated in ().

()  a. The woman gave a book to her father.
    b. The hospital lent a wheelchair to the patient.
    c. The man sold his car to a friend.
    d. The child showed her drawing to her parents.
    e. The spy told his secret to his mother.

Ditransitive constructions involve three core arguments, namely A, T(heme), and R(ecipient). For instance, in (a) the woman is A, a book T, and her father R. There are other three-argument constructions that are not regarded as ditransitive, as exemplified in (), because they lack one or more of the three core arguments.

()  a. The man put the cake in the oven.
    b. The bank replaced the tellers with ATMs.
    c. The councillor accused the mayor of bigotry.
    d. The shock jock called the novelist a traitor.
    e. The chef substituted yoghurt for the sour cream.

Just as S aligns with A or P in the nominative–accusative or the ergative–absolutive system, respectively, P can also be seen to align 

with T or R in prototypical ditransitive sentences. For instance, consider the following two sentences:

()  Krongo (Kadugli; Kadu: Sudan)
    n-àdá-ŋ àʔàŋ bìitì à-káaw
    -PFV.give-TR I water DAT-person
    'I gave water to the man/woman.'

()  Chamorro (Austronesian: Guam)
    ha na'i i patgon ni leche
    he-ERG give ABS child OBL milk
    'He gave the milk to the child.'

In both () and (), T and R arguments are coded distinctly for their roles. In (), the T argument is zero-coded and the R argument coded with the dative. In (), the T argument is coded with the oblique, and the R argument with the absolutive. When these ditransitive sentences are compared with transitive sentences, interesting alignment patterns emerge. Consider: ()

Krongo (Kadugli; Kadu: Sudan) n-àpá-ŋ àʔàŋ káaw y-íkkì -PFV.hit-TR I person M-that ‘I hit that man.’

()

Chamorro (Austronesian: Guam) ha tuge’ i kannastra he.ERG weave ABS basket ‘He wove the basket.’

In Krongo, T is zero-coded in the ditransitive sentence (), just as P is zero-coded in the transitive sentence (). Thus, P and T share argument coding to the exclusion of R, which is overtly coded with the dative prefix à- (i.e. P = T ≠ R). In Chamorro, P is overtly coded with the absolutive case i in the transitive sentence (), and the same case is also used to code R in the ditransitive sentence (). Thus, P and R share the same argument coding to the exclusion of T, which is coded with the oblique ni (i.e. P = R ≠ T). To wit, two different alignments of P, T, and R are observed in these two languages. There are five logically possible alignment types (including the two just discussed with respect to Krongo and Chamorro), depending on how T and R are coded vis-à-vis P. These are presented in schematic form in Figure 11.8.

a. Indirective type:  P = T ≠ R
b. Secundative type:  P = R ≠ T
c. Tripartite type:   P ≠ T ≠ R
d. Horizontal type:   T = R ≠ P
e. Neutral type:      P = T = R

Figure 11.8 Five logically possible alignment types of P, T, and R
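The five types in Figure 11.8 amount to a partition of {P, T, R} by shared coding, which can be computed directly from the case markers assigned to the three arguments (Python sketch; function name and marker labels are illustrative):

```python
# Recover the P-alignment type from the case markers assigned to P, T,
# and R, following the five-way classification in Figure 11.8.

def p_alignment(p, t, r):
    if p == t == r:
        return "neutral"
    if p == t:
        return "indirective"    # P = T, R coded apart
    if p == r:
        return "secundative"    # P = R, T coded apart
    if t == r:
        return "horizontal"     # T = R, P coded apart (unattested)
    return "tripartite"         # all three coded differently

# Krongo: P and T zero-coded, R dative-coded
print(p_alignment(p="-Ø", t="-Ø", r="à-"))       # indirective
# Chamorro: P and R absolutive-coded, T oblique-coded
print(p_alignment(p="i", t="ni", r="i"))         # secundative
# Sahaptin: all three coded differently
print(p_alignment(p="ABS", t="OBJ", r="ALL"))    # tripartite
```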

The Krongo ditransitive construction, illustrated in (), is of the indirective type, since R is coded differently from P and T, which share the same (zero) coding. The Chamorro ditransitive construction, exemplified in (), exhibits the secundative type in that P and R share the same (absolutive) coding to the exclusion of T, which is oblique-coded. The tripartite type, in which P, T, and R are each coded distinctly, is exemplified by Sahaptin.

()  Sahaptin (Sahaptian; Penutian: US)
    a. i-q'innun-a-aš ɨwinš-nɨm inay
       NOM-see-PST-SG man-ERG I:ABS
       'The man saw me.'
    b. ináy-naš ɨtayman-a ɨwinš-mí-yaw
       I-OBJ sell-PST man-GEN-ALL
       '(S)he sold me to the man.'
    c. ɨwinš-na pá-ʔtayman-a in-mí-yaw
       man-OBJ INV-sell-PST I-GEN-ALL
       '(S)he sold a man to me.'

In Sahaptin, P is coded with the absolutive case (a), R with a special allative suffix (b and c), and T with the object suffix (b and c). Thus, P, T, and R are coded differently from each other. The horizontal type, which codes T and R identically, as opposed to P, is not known to exist. For instance, Siewierska (, ) and Haspelmath (, ) do not discuss this P-alignment type (also Malchukov et al. : ). Finally, the neutral type (or double-object type in Haspelmath ) is illustrated by Spoken Eastern Armenian.

()  Spoken Eastern Armenian (Armenian; Indo-European: Armenia)
    a. jes kez t'es-a
       I you see-AOR.SG
       'I saw you.'
    b. jes kez nram t'v-ec-i
       I you him give-AOR-SG
       'I gave you to him.'
    c. jes nram kez t'v-ec-i
       I him you give-AOR-SG
       'I gave him to you.'

Note that the form of the second person singular is invariably kez, irrespective of whether its argument role is P (a), T (b), or R (c).

11.9 Distribution of the P-alignment types

When we discuss the distribution of P-alignments, we can leave the neutral type out of consideration because this type makes no case-marking distinctions between P, T, and R; languages with this type of alignment rely on 'other clues such as word order' (Malchukov et al. : ). As already mentioned, horizontal alignment is unattested in the world's languages. While it may be injudicious to rule it out completely as an impossibility in human language, it is safe to conclude that it would be the least frequent alignment type even if it turned out to exist. What about the remaining three alignment types? Unfortunately, we have no robust statistical research that can provide the kind of frequency data that we have witnessed for the S-alignment types (§.; see also Siewierska : – about difficulties in collecting data on P-alignment, as opposed to S-alignment). We do, however,

have relatively limited frequency data. Siewierska (: –) makes a number of impressionistic statements about the frequencies of different P-alignment types, albeit mainly with respect to personal markers, e.g. personal pronouns, person/number agreement markers. Haspelmath () provides statistical data on the distribution of the indirective, secundative, and neutral alignment types, albeit with respect to only one particular ditransitive verb ‘give’. Unfortunately, this verb ‘may be an atypical ditransitive verb, which might be quite exceptional in its properties and not representative for its class’ (Malchukov et al. : ). Moreover, languages are also known to use different alignment types, depending on different ditransitive verbs, e.g. in German, indirective for geben ‘give’ but neutral for lehren ‘teach’ (Malchukov et al. : ). In other words, statistical data collected on the basis of a single ditransitive verb, an atypical one at that, may not be able to give us a broad understanding of the frequency distribution of the different alignment types. Nonetheless, Siewierska’s and Haspelmath’s data can provide us with an initial indication of the distribution of different P-alignment types in the world’s languages. Based on her -language sample, Siewierska (: ) remarks that indirective alignment ‘is overwhelmingly dominant’ in the case of independent person forms (i.e. pronominal words) as well as lexical nominals (i.e. non-pronominal), whereas in the case of dependent person forms (e.g. affixes, clitics) indirective and secundative alignment are both common, with the latter being slightly more frequent than the former. Tripartite alignment is reported to be rare, and this seems to be the case with both independent and dependent person forms (Siewierska : ). Malchukov et al. (: ) agree that tripartite alignment is rare, giving Kayardild (Tangkic: Australia) as one such example. 
While Siewierska () does not include horizontal alignment in her work, Malchukov et al. (: ) ‘do not know a single clear case of horizontal alignment’. Thus, it may be safe to conclude that horizontal alignment is unattested or that languages with this alignment are yet to be discovered. Based on a sample of  languages, Haspelmath () examines the frequency distribution of three alignment types in terms of case marking on both nouns and verbs:  languages with indirective alignment (%),  languages with secundative alignment (.%),  languages with neutral alignment (.%), and  with a mixture of alignments (.%). 

While Siewierska's and Haspelmath's data are clearly limited in their own ways, they agree that indirective and secundative alignment are more frequent than tripartite alignment (and horizontal alignment, for that matter). This frequency distribution makes perfect sense from the perspective of the discriminatory function of case marking (§..). In ditransitive sentences, both T and R co-occur; there is thus a need to distinguish T from R. In the indirective type, T aligns with P to the exclusion of R, which means that T and R are coded differently from each other. In the secundative type, R aligns with P to the exclusion of T, which likewise means that T and R are coded differently from each other. Thus, indirective and secundative alignment are expected to be common. Tripartite alignment, in which P, T, and R are all coded differently from each other, is over-differentiated and uneconomical: not only are T and R coded differently, but P is also coded differently from both, even though P co-occurs with neither T nor R. Thus, tripartite alignment is expected to be uncommon, and indeed it is rare in the world's languages. Horizontal alignment is completely dysfunctional in that T and R, which do co-occur in the same sentence type, are coded identically, and yet both are distinguished from P, with which they never co-occur. This may explain why this alignment is unattested. We must bear in mind that some languages may employ more than one P-alignment. Recall that Haspelmath () reports that  out of  languages in his sample are of the mixed type. Indeed, some languages may use different alignments depending on the disparity between T and R in terms of animacy and/or topicality, or on the degree of affectedness of R (Malchukov et al. : –). For instance, in Khanty (Ugric; Uralic: Russia), if T is more topical than R, the alignment type used is indirective, while if R is more topical than T, secundative alignment must be used.
If neither T nor R is topical, Khanty uses indirective alignment for nominal case marking but neutral alignment in verbal case marking. Moreover, languages may employ different alignments, depending on individual ditransitive verbs used. German was already mentioned as such a language. Malchukov et al. (: ) claim that this kind of ‘lexical split’—different alignments for different verbs— is ‘very common cross-linguistically, if not universal, at least on a broad view of the ditransitive domain’, that is, when ‘one looks beyond prototypical ditransitives’. 
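The discriminatory-function reasoning above (T and R must be kept apart because they co-occur, while distinctions involving P are superfluous because P occurs alone) can be put into a toy scoring function (all names and the numeric marker labels are illustrative):

```python
# Toy sketch of the discriminatory-function argument: an alignment is
# functional if it distinguishes T from R (they co-occur), and economical
# if it avoids distinctions P does not need (P never co-occurs with T or R).
# Markers are arbitrary labels; identical labels mean identical coding.

def evaluate(p, t, r):
    distinguishes_t_r = t != r                          # needed distinction
    extra_distinctions = len({p, t, r}) - len({t, r})   # P set apart needlessly
    return distinguishes_t_r, extra_distinctions

print(evaluate(p=1, t=1, r=2))   # (True, 0)  indirective: functional, economical
print(evaluate(p=2, t=1, r=2))   # (True, 0)  secundative: functional, economical
print(evaluate(p=1, t=2, r=3))   # (True, 1)  tripartite: functional, uneconomical
print(evaluate(p=1, t=2, r=2))   # (False, 1) horizontal: dysfunctional
```

Only the indirective and secundative patterns score as both functional and economical, mirroring their reported cross-linguistic predominance.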

OUP CORRECTED PROOF – FINAL, 27/11/2017, SPi


11.10 Variations on P-alignment

In the domain of S-alignment, in addition to the five logically possible alignment types in Figure ., we have the active–stative and hierarchical types. It is worth asking whether P-alignment has anything similar to these two additional S-alignment types. Siewierska (; also : –) considers this very question and demonstrates that hierarchical alignment exists in ditransitive case marking, although it is rare and attested only in dependent person forms. In Jamul Tiipay, for instance, whether T or R is coded, together with A, by the portmanteau verbal prefix is determined by the relative ranking of T and R on the person hierarchy of  >  > . Thus, whichever of the two, T or R, is higher on the person hierarchy is coded by the portmanteau prefix in question, as illustrated in:

() Jamul Tiipay (Yuman; Hokan: Mexico and US)
    a. nye-wiiw
       :-see
       ‘I saw you.’
    b. xikay ny-iny-ma
       some :-give-FUT
       ‘I’ll give you some.’
    c. nyaach maap Goodwill ny-iny-x
       .SBJ you Goodwill :-give-IRLS
       ‘I’m going to give you to Goodwill.’

In (b) R outranks T, whereby R is coded in the verb, while in (c) T outranks R, whereby T is coded in the verb.

Siewierska (: , : ) also reports that the ditransitive counterpart to the active–stative type is not uncommon. In various European languages, for instance, a small class of verbs such as ‘trust’, ‘believe’, ‘help’, and ‘forgive’ allows P to be coded sometimes like T and sometimes like R, as illustrated in:

() Polish (Slavic; Indo-European: Poland)
    a. jego naprawdę kocham
       he.ACC really love.SG.PRES
       ‘Him, I really love.’
    b. jemu naprawdę ufam
       he.DAT really trust.SG.PRES
       ‘Him, I really trust.’



    c. jego jej dam
       he.ACC her.DAT give.SG.FUT
       ‘Him, I’ll give to her.’

Thus, in (a) P is coded with the accusative, as T is in (c), while in (b) P is coded with the dative, as R is in (c). However, this particular class of verbs is not highly transitive (i.e. these are non-prototypical transitive verbs); strictly speaking, they fall outside the purview of this chapter. It is thus questionable whether European languages such as Polish really have active–stative P-alignment: P-coding is split between T-like and R-like coding, as it were, depending on the transitivity of the sentence. Moreover, in the case of verbal case marking (i.e. verbal agreement) there appear to be no languages that are clear cases of active–stative P-alignment (Siewierska : ).

11.11 S-alignment and P-alignment in combination

Now that we have looked at both S-alignment and P-alignment, we are in a position to ask one more question: is there any kind of co-occurrence relationship between S-alignment and P-alignment? This is what Siewierska (: ) attempts to address by making a number of observations, albeit without providing statistical data. For instance, in the case of independent person forms, neither neutral nor ergative–absolutive alignment ‘is likely to occur with secundative alignment’. Thus, there are no attested instances of the combinations in question in the case of independent person forms, although they are occasionally attested with full (i.e. non-pronominal) nominals. Nominative–accusative alignment seems to be ‘compatible with both indirective and secundative alignment’ (Siewierska : ). In so far as dependent person forms are concerned, all the possible combinations of the major transitive and ditransitive alignments are attested in Siewierska’s () sample. Ergative–absolutive alignment, however, is found to combine more frequently with indirective than with secundative alignment. Nominative–accusative alignment, in contrast, shows no particular preference for either indirective or secundative alignment. Lastly, active–stative alignment is similar to ergative–absolutive alignment in that it occurs more commonly with indirective than with secundative alignment.


11.12 Case alignment and word order

In his pioneering work, Greenberg (b: ) observes that, if in a language the verb follows both the nominal subject and nominal object as the dominant order, the language almost always has a case system (i.e. his Universal ). Thus, Greenberg detects a possible correlation between SOV order and the presence of case marking, even though it has long been assumed that word order is an alternative to case marking for purposes of indicating ‘who is doing X to whom’. Mallinson and Blake (: ) is probably one of the first typological works to address the correlation between case marking and word order in a statistical sense, based on a sample of  languages. Their findings are presented in Table ..

Table 11.5 Case marking and word order

            VSO      SVO      SOV
[+ case]
[– case]
Total

Note: Based on Mallinson and Blake (1981)

It is glaringly obvious from Table . that SOV languages are nearly five times more likely to employ case marking (i.e. [+ case]) than not (i.e. [– case]), while SVO languages are nearly three times more likely to do without case marking than to have it. There are not enough VSO languages to make any firm comment about VSO and case marking, although in Table . they are twice as likely to do without case marking as to have it. But contrary to Mallinson and Blake’s () results, and also to the widely held assumption, Nichols (: –) argues that there are no significant correlations between word order and alignment, except perhaps for one, namely between non-nominative–accusative alignment and V-initial word order, as shown in Table ..

Table 11.6 Dominant alignment and word order in Nichols ()

Alignment                   V-initial    Verb-medial/final    Total
Accusative
Ergative + Active–Stative
Total

(p  .)


But even this ‘correlation’ is eventually dismissed owing to the possibility of its being ‘an accident of geography’: V-initial order and non-nominative–accusative alignment are relatively frequent in the New World, which consists of three large areas, namely North America, Mesoamerica, and South America (Nichols : ). Siewierska (), however, points to the fact that in Nichols’s () work only dominant alignment is taken into account relative to word order, with the dominant alignment type identified in terms of the following criteria, applied in the order given (Nichols : ):

• the alignment in the majority of parts of speech;
• the sole non-neutral type;
• the alignment of nouns rather than pronouns;
• in the case of tripartite splits, the most semantic of the types involved, in the following order: hierarchical > active–stative > tripartite > ergative–absolutive > nominative–accusative.
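Because the criteria are tried in a fixed order, they amount to a fall-through decision cascade, which can be paraphrased in code. The sketch below is my own loose reading of that cascade; in particular, how neutral values and ties are handled is an assumption of the sketch, not something the text specifies:

```python
from collections import Counter

# The 'most semantic' ranking from the text, in ascending order of semanticity.
SEMANTICITY = ["nominative-accusative", "ergative-absolutive",
               "tripartite", "active-stative", "hierarchical"]

def dominant_alignment(per_category):
    """per_category maps categories such as 'nouns', 'pronouns', 'verbs'
    to alignment names. The criteria are tried in Nichols's order."""
    values = list(per_category.values())
    # 1. The alignment found in the majority of categories.
    top, n = Counter(values).most_common(1)[0]
    if top != "neutral" and n > len(values) / 2:
        return top
    # 2. The sole non-neutral type, if there is exactly one.
    non_neutral = [a for a in values if a != "neutral"]
    if len(set(non_neutral)) == 1:
        return non_neutral[0]
    # 3. Prefer the alignment of nouns over that of pronouns.
    if per_category.get("nouns", "neutral") != "neutral":
        return per_category["nouns"]
    # 4. Otherwise, the most semantic of the types involved.
    return max(set(non_neutral), key=SEMANTICITY.index)

# Ergative nouns, accusative pronouns, neutral verbs: criteria 1-2 fail,
# so criterion 3 (nouns over pronouns) decides.
print(dominant_alignment({"nouns": "ergative-absolutive",
                          "pronouns": "nominative-accusative",
                          "verbs": "neutral"}))   # → ergative-absolutive
```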

Thus, whether the lack of a correlation between word order type and dominant alignment type also holds of the alignment of each of the three categories, i.e. nouns, pronouns, and verbs, remains to be seen (Siewierska : ).14 This is the very question that Siewierska () attempts to answer by carrying out a detailed investigation, based on a sample of  languages (see Siewierska and Bakker ). Siewierska () addresses at least three different types of potential correlation: (i) the correlation between word order and the occurrence of neutral as opposed to non-neutral alignment; (ii) the correlation between word order and different types of non-neutral alignment; and (iii) the correlation between word order and dominant alignment, in direct comparison with Nichols’s () work. First, Siewierska (: –) discerns a significant correlation between word order and the occurrence of neutral vs non-neutral alignment with nouns, pronouns, and verbs—albeit with V-medial and V-initial languages exhibiting more neutral alignment than V-final languages for each of the three categories. She (: ) points out, however, that the ‘correlation’ between word order and neutral/non-neutral alignment in

Siewierska (: ) speaks of agreement rather than verbs since she includes under agreement not only verbal affixes but also clitics and particles, which may not necessarily be adjacent to verbs, e.g. second position clitics. But for the sake of convenience reference is made here, in line with Nichols’s (: ) usage, to alignment of verbs instead of agreement. 14




verbs ‘is heavily dependent on geography’, due to the minimal presence of neutral alignment in verbs in both North America and Eurasia (% and %, respectively), as opposed to Africa, South East Asia, and Oceania, where neutral alignment in verbs is relatively frequent (i.e. both %). To put it differently, it is possible to predict within the realms of statistical significance whether the verb in a given language will exhibit neutral or non-neutral alignment if and when that language comes from any one of the four macroareas in question.

Second, Siewierska (: –) suggests that, though there is no significant correlation between word order and non-neutral alignment in verbs, in the case of nouns and pronouns non-nominative–accusative alignment is found to be more common in OV languages than in VO languages. The essence of Siewierska’s statistical data on nouns is captured in Table ..

Table 11.7 Nominative–accusative/non-nominative–accusative in nouns and OV/VO typology in Siewierska (: )

Alignment/Macroarea               OV      VO
EURASIA
  nominative–accusative            %       %
  non-nominative–accusative        %       %
SOUTH EAST ASIA & OCEANIA
  nominative–accusative            %       %
  non-nominative–accusative        %       %
AUSTRALIA-NEW GUINEA
  nominative–accusative            %       %
  non-nominative–accusative        %       %
AFRICA
  nominative–accusative            %       %
  non-nominative–accusative        %       %
SOUTH AMERICA
  nominative–accusative            %       %
  non-nominative–accusative        %       %
NORTH AMERICA
  nominative–accusative            %       %
  non-nominative–accusative        %       %

NB: The percentages are computed relative to the instances of OV and VO in each of the six macroareas.




However, Siewierska (: ) notes that there are a number of factors which vitiate the putative correlation between non-nominative– accusative alignment and OV order. First, though there are no large macroareas in which OV languages do not exhibit non-nominative– accusative alignment, there is not a single representative or language with both non-nominative–accusative alignment and VO order in at least one macroarea, i.e. Africa. Second, in VO languages the predominance of non-nominative–accusative over nominative–accusative alignment is confined to Australia-New Guinea and South America. To wit, the effect of geography seems to be more pronounced in VO languages than in OV languages. One may thus be tempted to suggest on the strength of this observation that the correlation of non-nominative– accusative and OV order is not entirely without substance. But Siewierska (: ) is prudent enough to point out that, though in OV languages non-nominative–accusative is more common or equal to nominative–accusative in four of the six macroareas, the proportion of nominative–accusative as compared to non-nominative–accusative in OV languages in two of the three macroareas that display the predominance of the latter alignment—Australia-New Guinea (% vs %) and South America (% vs %), in contrast to South East Asia and Oceania (% vs  %)—is too low to support the putative correlation of nonnominative–accusative and OV order. What, then, about the obverse possibility, a possible correlation between nominative–accusative and VO order? Siewierska reasons that this also cannot be justified because it is only in two of the six macroareas that the percentage of nominative– accusative alignment in VO languages exceeds that in OV languages. Conversely, in four of the six macroareas the occurrence of nominative– accusative alignment is more frequent in OV languages than in VO. 
Admittedly, the number of macroareas with a higher percentage of nominative–accusative alignment in VO languages than in OV languages would go up from two to four only if neutral alignment were completely left out of consideration. But Siewierska (: ) is cautiously doubtful whether this increase would ‘constitute sufficient justification for positing a correlation between [nominative–accusative alignment] and VO order’. In sum, there is no clear correlation between non-nominative–accusative alignment and OV word order on the one hand, or between nominative–accusative alignment and VO word order on the other. These considerations lead Siewierska to propose a negative—much safer—correlation between non-nominative–accusative alignment and VO order. This correctly predicts, for instance, the observed infrequency of non-nominative–accusative alignment in VO languages outside Australia-New Guinea, the only macroarea in her sample that does display a predominance of non-nominative–accusative (i.e. %) over nominative–accusative alignment (i.e. %).

Finally, Siewierska (: –) carries out a brief comparison of her work and Nichols’s () in terms of dominant alignment. She imputes the discrepancy in the findings of the two works—the presence or absence of correlations—to differences in the genealogical and areal make-up of the two samples. For instance, % of the V-initial languages in Nichols’s sample come from the Americas, as compared to % in Siewierska’s sample; the American languages with non-nominative–accusative alignment account for % of the total V-initial languages in Nichols’s sample but % in Siewierska’s sample. More importantly, Nichols’s sample—as she herself acknowledges (: )—contains too few (i.e. four) V-initial languages outside the Americas, in contrast to Siewierska’s sample, which includes four times as many non-American V-initial languages. This is, as the reader will recall, what eventually led Nichols (: ) to discard the ‘correlation’ of non-nominative–accusative alignment and V-initial order in the first place. But the main source of the discrepancy seems to be that in Nichols’s sample % of the dominant alignments are based solely on those associated with verbs, while in Siewierska’s sample only % of the dominant alignments are related to verbs (Siewierska : )—a clear typological bias. As Siewierska’s study shows, however, there is no correlation between verbal alignment and word order type. It may thus come as no great surprise that Nichols () does not discern any correlations between alignment and word order on the basis of her sample.
These genealogical and areal differences of the samples notwithstanding, Siewierska (: ) demonstrates that there are significant correlations between the alignments of nouns and pronouns and word order, especially between neutral vs non-neutral alignment and word order, and also between non-neutral nominal alignment and word order. These significant correlations—positive or negative—may have gone undetected in Nichols’s study not least because she takes into account dominant alignment rather than individual alignments for nouns, pronouns, and verbs (Siewierska : ). 
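Nichols’s worry that a correlation may be ‘an accident of geography’ is, at bottom, a pooling artefact: an association that appears in the aggregated sample can vanish entirely once the sample is split by macroarea. The toy figures below are invented purely for illustration (they are not Siewierska’s or Nichols’s counts); the point is the arithmetic, not the numbers:

```python
# Hypothetical counts of languages, cross-classified by word order and
# alignment within two invented macroareas. Area1 is rich in both
# V-initial order and non-accusative alignment; Area2 is poor in both.
data = {
    "Area1": {("V-initial", "non-acc"): 18, ("V-initial", "acc"): 2,
              ("other", "non-acc"): 9,  ("other", "acc"): 1},
    "Area2": {("V-initial", "non-acc"): 1,  ("V-initial", "acc"): 9,
              ("other", "non-acc"): 4,  ("other", "acc"): 36},
}

def share_non_acc(counts, order):
    """Proportion of non-accusative languages among those with `order`."""
    non = counts[(order, "non-acc")]
    return non / (non + counts[(order, "acc")])

# Pooled over areas, V-initial order looks strongly non-accusative ...
pooled = {}
for counts in data.values():
    for cell, n in counts.items():
        pooled[cell] = pooled.get(cell, 0) + n
print(round(share_non_acc(pooled, "V-initial"), 2),
      round(share_non_acc(pooled, "other"), 2))   # 0.63 vs 0.26

# ... but within each area the two orders behave identically, so the
# pooled 'correlation' only reflects the areal make-up of the sample.
for area, counts in data.items():
    print(area, share_non_acc(counts, "V-initial"),
          share_non_acc(counts, "other"))
```

This is why both Nichols and Siewierska insist on checking putative correlations macroarea by macroarea before accepting them.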


11.13 Concluding remarks

Case marking has been surveyed from the perspective of S-alignment and P-alignment. In the case of S-alignment, which case alignment a language has depends primarily on which of the two, A or P, S aligns with. Similarly, in the case of P-alignment, which case alignment a language exhibits depends primarily on which of the two, T or R, P aligns with. Further variations on case alignment, namely split ergativity, active–stative, and hierarchical alignment, have also been discussed—the latter two also in the case of P-alignment. Two different views of case marking, discriminatory and indexing, have been explained, with the conclusion that both views are needed in order to understand case marking properly. Frequency data on these case alignments have also been provided. Moreover, the three less straightforward S-alignment types, namely split-ergative, active–stative, and hierarchical, have been examined from the perspective of the interaction between attention flow and viewpoint. For instance, where there is a conflict between attention flow and viewpoint, overt coding is called for. Finally, the possible correlation between case alignment and word order has been explored.

Study questions

1. Consider the following data from Ngarluma (Western Pama-Nyungan; Pama-Nyungan: Australia) and determine what type of S-alignment the language makes use of. Case marking is deliberately not identified and glossed in the examples.

() maŋkul̩a puŋka-n̪a
   child fall-PST
   ‘A child fell down.’

() maŋkul̩a t̪alku-n̪a yukuruku
   child strike-PST dog
   ‘A child struck a dog.’

() yukuru pilya-n̩a maŋkul̩ku
   dog bite-PST child
   ‘A dog bit a child.’

() ŋali pan̩i-n̪a
   we.DU sit-PST
   ‘We two sat down.’




() ŋali t̪alku-n̪a yukuruku
   we.DU hit-PST dog
   ‘We two hit a dog.’

() yukuru pilya-n̩a ŋaliku
   dog bite-PST we.DU
   ‘A dog bit us two.’

2. Consider the following data from Niuean (Oceanic; Austronesian: New Zealand) and determine what type of S-alignment the language employs. Two case-marking particles, he and e, are identified by question marks without being glossed.

() ne kai he pusi ia e moa
   PST eat ? cat that ? chicken
   ‘That cat ate the chicken.’

() kua tomo e ugauga
   PFV drown ? crab
   ‘The crab drowned.’
3. Consider the following data from Motuna (East Bougainville: Papua New Guinea) and determine what type of P-alignment the language makes use of.

() nii Aanih-ki tangu-m-u-u-ng
   SG Aanih-ERG slap-OBJ-SBJ-NR.PST-M
   ‘Aanih slapped me (M).’

() nii ong miika o-m-i-ng
   SG DEM+M leftover.of.betel.mixture give-OBJ-SBJ-PAUC/PL+IMP
   ‘Give that betel mixture (in your mouth) to me.’

4. In some split-ergative languages, the choice between nominative–accusative and ergative–absolutive alignment is conditioned by tense/aspect. For instance, in Gujarati (Indic; Indo-European: India) nominative–accusative and ergative–absolutive alignment are used in imperfective aspect and perfective aspect, respectively, as illustrated in ().

() a. ramesh pen khәrid-t-o hә-t-o
      Ramesh pen buy-IPFV-M AUX-IPFV-M
      ‘Ramesh was buying a pen.’

   b. ramesh-e pen khәrid-y-i
      Ramesh-ERG pen buy-PFV-F
      ‘Ramesh bought the pen.’

DeLancey (: ) argues that in tense/aspect-conditioned split ergativity the natural viewpoint is the temporal location of the event itself. In other words, the temporal location of an ‘imperfective’ event (e.g. was buying) is A, which the event is still emanating from, while the temporal location of a ‘perfect’ event (e.g. bought) is P, whose terminal point the event has reached. (Note that attention flow is the same in both (a) and (b), that is, from A to P.) How does this explain the zero coding of A in (a) and the overt coding of A in (b)?




5. In Spanish (Romance; Indo-European: Spain), inanimate P arguments are zero-coded while animate (and specific) P arguments are coded with the preposition a, as contrasted in (a) and (b).

() a. El jefe busca una solución
      the boss seeks a solution
      ‘The boss is looking for a solution.’

   b. El jefe busca a su esposa
      the boss seeks PREP his wife
      ‘The boss is looking for his wife.’

Now, consider the ditransitive sentence in (), where the same preposition a codes the R argument.

() El jefe (le) dio un coche a su esposa
   the boss (DAT.CLT) gave a car PREP his wife
   ‘The boss gave his wife a car.’

On the basis of (b) and (), we may conclude that Spanish has secundative alignment, which groups P and R to the exclusion of T (since P and R are coded with the same preposition, while T is zero-coded). On the basis of (a) and (), however, we may conclude that Spanish has indirective alignment, which groups P and T to the exclusion of R (since both P and T are zero-coded, while R is coded by the preposition). Malchukov et al. () decide that Spanish has indirective, not secundative, alignment. Do you agree with their decision? Why do you (dis)agree? Further reading Bickel, B. and Nichols, J. (). ‘Case Marking and Alignment’, in A. Malchukov and A. Spencer (eds.), The Oxford Handbook of Case. Oxford: Oxford University Press, –. Blake, B. J. (). Case. nd edn. Cambridge: Cambridge University Press. DeLancey, S. (). ‘An Interpretation of Split Ergativity and Related Patterns’, Language : –. Dixon, R. M. W. (). Ergativity. Cambridge: Cambridge University Press. Malchukov, A., Haspelmath, M., and Comrie, B. (). ‘Ditransitive Constructions: A Typological Overview’, in A. Malchukov, M. Haspelmath, and B. Comrie (eds.), Studies in Ditransitive Constructions: A Comparative Handbook. Berlin: Mouton de Gruyter, –. Siewierska, A. (). ‘Person Agreement and the Determination of Alignment’, Transactions of the Philological Society : –. Silverstein, M. (). ‘Hierarchy of Features and Ergativity’, in R. M. W. Dixon (ed.), Grammatical Categories in Australian Languages. Atlantic Highlands, NJ: Humanities Press, –.




12 Grammatical relations

12.1 Introduction
12.2 Agreement
12.3 Relativization
12.4 Noun phrase ellipsis under coreference
12.5 Hierarchical nature of grammatical relations
12.6 Concluding remarks

12.1 Introduction

In Chapter , we made use of the notions of S, A, P, T, and R in order to describe the different ways in which the case marking of argument roles is organized in the world’s languages. For instance, in nominative–accusative alignment S and A share the same coding to the exclusion of P (i.e. S = A ≠ P), while in ergative–absolutive alignment S and P share the same coding to the exclusion of A (i.e. S = P ≠ A). Case marking is not the only grammatical domain that evinces different alignments of S, A, P, T, and R. Grammatical rules or constraints, e.g. verbal agreement, relativization, interclausal noun phrase ellipsis, and control constructions, may also make reference to S and A in opposition to P, or to S and P in opposition to A. Traditionally, when grammatical rules or constraints are limited to certain argument roles, grammatical relations such as subject (i.e. S and A) and object (i.e. P) are invoked in order to formulate those rules or constraints. For instance, in English the choice between different pronoun forms, e.g. he vs him, she vs her, they vs them, is determined by the grammatical relation of the relevant argument, as illustrated in ().


() English (Germanic; Indo-European: UK)
   a. He kicks her.
   b. *Him kicks she.
   c. She kicks him.
   d. *Her kicks he.
   e. He runs.
   f. *Him runs.
   g. She runs.
   h. *Her runs.

The (nominative) pronoun forms he and she are used for A or S, while the (accusative) pronoun forms him and her are used for P. In other words, the selection of the correct pronoun forms is done on a nominative–accusative basis (i.e. S = A ≠ P). The same grouping of S and A in opposition to P, as exemplified for case marking in (), can also be seen in the obligatory presence on the verb of the person/number/tense suffix -s in English, as also illustrated in (). The verb must host the person/number/tense suffix -s when the S or A nominal is third person singular. Thus, if the number of the P nominal in (a) is changed to the plural, the presence of the suffix -s on the verb is not affected (a), but if the number of the S or A nominal is changed to the plural, the suffix is no longer required (b, c, d, and e). This points to the S or A nominal as the trigger of the suffix -s.

() English (Germanic; Indo-European: UK)
   a. He kicks them.
   b. They kick him.
   c. *They kicks him.
   d. They run.
   e. *They runs.

Since they operate in the same way with respect to these and other grammatical rules, S and A are traditionally referred to collectively as subject. Thus, in English the verb must overtly code its agreement with the subject of the clause in the present tense if the subject is third person singular—this is how the agreement rule in question is formulated. When formulating such grammatical rules, we may need to extend A to argument roles other than the agent role, because non-agent arguments can also bear the subject relation, as in:


()

English (Germanic; Indo-European: UK) a. The boy likes the girl. b. The boy receives a gift from the girl. c. This key opens the garage door.

The nominal the boy in (a) has the argument role of experiencer, the nominal the boy in (b) that of recipient, and the nominal this key in (c) that of instrument. Irrespective of their different argument roles, these nominals all function as the subjects of their respective clauses, triggering the appearance of the suffix -s on the verbs.

Recall from §. that A and P (and the other labels, for that matter) are regarded as basically syntactic, although they have a semantic basis. When we used A and P to study alignment patterns in case marking, we decided to focus on the prototypical instances of A and P, mainly because less typical or non-prototypical A and P show too great an amount of cross-linguistic variation (Haspelmath ). In this narrow sense, A represents the prototypical agent, and P the prototypical patient. Nonetheless, A and P may need to extend to non-prototypical instances, e.g. experiencer, recipient, and instrument, because grammatical relations cut across argument roles, as illustrated by (). (Note that P may also need to extend to non-patient nominals such as the girl, a gift, and the garage door in ().) It is in this extended sense that A and P are regarded as syntactic notions (Comrie : ).

Even if we decided to claim that the presence of the person/number/tense suffix -s on the verb is semantically based—which would prevent us from capturing the generalization that it is the subject that dictates the use of the suffix -s in English—we would not be able to replace the subject relation with argument roles without a contradiction. For instance, the accusative pronoun form them is selected to code the patient nominal (see (a)). But the same patient nominal appears in a nominative pronoun form in the passive counterpart of that clause, as in:

() English (Germanic; Indo-European: UK)
   They are kicked by him.

In (), the correct person pronoun form is they, not them, because they must be used in place of nominals bearing the subject relation. Moreover, the auxiliary verb, are, agrees with they in number (i.e. as opposed to is, which agrees with a singular subject). Thus, we cannot claim that the argument role of they (i.e. patient) is what the verb agrees with, 


because the patient role of the argument has nothing to do with agreement. To wit, the grammatical rules discussed here are limited to S and A to the exclusion of P, hence the need to invoke the subject relation.

One more grammatical domain in which grammatical relations play a role is word order (see Chapter ). In English, the subject and the object occupy the preverbal and the postverbal position, respectively. This is why English is commonly referred to as an S(ubject)V(erb)O(bject) language. Note that the preverbal position of the subject remains the same irrespective of whether it is an agent (e.g. (a), (d)), experiencer (e.g. (a)), recipient (e.g. (b)), or instrument (e.g. (c)). English and other similar languages treat S and A alike, whether in terms of case marking or of other grammatical rules or constraints. For this reason, these two argument roles are typically subsumed under one and the same grammatical relation, namely subject.

However, in other languages the subject relation may not be relevant or appropriate. There are two possible situations. First, there may be languages in which grammatical rules or constraints do not operate in terms of grammatical relations but have a semantic basis instead (e.g. Van Valin and LaPolla : –). For instance, in Acehnese the verb is overtly coded for person. Consider:

() Acehnese (Malayo-Sumbawan; Austronesian: Indonesia)
   a. gopnyan ka-ji-poh-geuh
      SG INCH-A-hit-P
      ‘(S/he) hit him/her.’
   b. ji-jak gopnyan
      A-go SG
      ‘S/he goes.’
   c. gopnyan rhët-geuh
      SG fall-P
      ‘S/he fell.’

In Acehnese, the person coding on the verb is determined on the basis of argument roles. In (a), both the A and P nominals are overtly coded on the verb in terms of person. Where there is only one core argument (i.e. in an intransitive sentence), the verb makes a choice between A- and P-coding, depending on the semantic role of the sole argument (see §.. for this type of alignment).
In (b), the sole argument is agentlike, so the verb carries the same prefix ji- as with the A argument of the 

OUP CORRECTED PROOF – FINAL, 27/11/2017, SPi

12 . 1 I N T R O D U C T I O N

transitive verb poh ‘hit’ in (a). In (c), the role of the sole argument is patient-like, motivating the verb to carry the same suffix -geuh as with the P argument of the transitive verb poh ‘hit’ in (a). The same semantic basis for Acehnese verbal agreement is also observed structures in which the argument in the subordinate clause is unexpressed when understood as identical to some other argument in the main clause (i.e. so-called control construction). Consider: ()

Acehnese (Malayo-Sumbawan; Austronesian: Indonesia)
   a. geu-tém taguen bu
      .A-want cook rice
      ‘S/he wants to cook rice.’
   b. gopnyan geu-tém jak
      SG .A-want go
      ‘S/he wants to go.’
   c. *gopnyan geu-tém rhët
      SG .A-want fall
      ‘S/he wants to fall.’

The understood nominal must be either an agent (in a transitive sentence) or at least agent-like (in an intransitive sentence), as in (a) or (b), respectively. If the understood nominal is a patient or patient-like, we get an ungrammatical sentence, as in (c). Thus, the Acehnese control construction, just like its verbal agreement, operates on the basis of argument roles, not on the basis of grammatical relations.

Second, while grammatical relations such as subject are useful and indeed necessary when we describe grammatical rules or constraints in languages such as English, there are languages in which S and A do not constitute a single grammatical relation. For instance, recall from §.. that Avar verbal agreement operates on an ergative–absolutive basis, as illustrated in:

() Avar (Avar-Andic-Tsezic; Nakh-Daghestanian: Azerbaijan and Russia)
   a. w-as w-ekér-ula
      M-child.NOM M-run-PRES
      ‘The boy runs.’
   b. inssu-cca j-as j-écc-ula
      (M)father-ERG F-child.NOM F-praise-PRES
      ‘Father praises the girl.’


In (a) the S argument is coded with the gender prefix w- (masculine) on the verb, whereas in (b) the P argument is represented by the gender prefix j- (feminine) on the verb. In contrast, the A argument in (b) is not represented at all on the verb (i.e. zero coding). The same can be said of the case marking on the argument nominals themselves (i.e. zero case for S and P, and non-zero case for A). In languages like Avar, we cannot say that the verb agrees with the subject nominal, because it agrees with S or P, but not with S or A. Thus, the subject relation, as traditionally understood (i.e. S and A), cannot be applied to Avar and other similar languages. The labels, nominative–accusative and ergative–absolutive, were originally created to refer to case alignments, e.g. nominative–accusative alignment, as well as actual case markers themselves, e.g. nominative, as opposed to accusative, case. However, some linguists have extended the use of the case-marking labels, ergative and absolutive, to refer to grammatical relations, more specifically, S and P, as opposed to A (e.g. Dryer : ; Blake : ; Croft : ). For instance, in Avar the verb is said to agree with the ‘absolutive relation’, while not agreeing with the ‘ergative relation’. But this is a rather unusual, if not anomalous, usage because linguists do not similarly use the labels of nominative and accusative when referring to grammatical relations (but cf. Van Valin and LaPolla (: ), who also extend the label subject to refer to S and P, as opposed to A). They invoke the subject (i.e. S and A) or object (i.e. P) grammatical relation instead. Thus, in this chapter we do not follow the practice of using ergative and absolutive to refer to grammatical relations but we reserve them for purposes of case alignment only. 
We will instead follow Bickel’s (: ) idea of representing grammatical relations as subsets of argument roles or, more generally, semantic roles (also Van Valin and LaPolla : ), although we may continue to use the subject relation to refer to S and A or the object relation to refer to P, because these two are deeply entrenched in linguistics. Thus, the grammatical relation that participates in Avar verbal agreement is {S, P}, whereas the grammatical relation that operates in English verbal agreement is {S, A}. Only the subset of {S, A} is referred to as subject. Thus, English has the subject relation while Avar does not. It is generally assumed in the context of some grammatical theories that grammatical relations operate uniformly across the board in entire languages. Thus, if the subject relation plays a role in a grammatical domain, it is expected to do the same in other grammatical domains to 
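Bickel's subset idea lends itself to a simple formal sketch. The following Python snippet is purely illustrative (the language facts are those just described, but the data structure is our own device, not any published formalism): the relation targeted by verbal agreement in each language is modelled as a set of argument roles, and only the subset {S, A} counts as 'subject'.

```python
# Illustrative sketch of grammatical relations as subsets of argument
# roles (after Bickel's subset notation); the dict layout is our own.
S, A, P = "S", "A", "P"

# The relation that participates in verbal agreement in each language,
# as described in the text.
AGREEMENT_RELATION = {
    "English": frozenset({S, A}),  # verb agrees with {S, A}
    "Avar":    frozenset({S, P}),  # verb agrees with {S, P}
}

def is_subject_relation(relation):
    """Only the subset {S, A} is referred to as 'subject'."""
    return relation == frozenset({S, A})

# English has the subject relation in this domain; Avar does not.
print(is_subject_relation(AGREEMENT_RELATION["English"]))  # True
print(is_subject_relation(AGREEMENT_RELATION["Avar"]))     # False
```

On this representation, 'subject' is not a universal primitive but the name of one particular subset that some languages happen to use.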


which grammatical relations have relevance. Moreover, it is assumed that grammatical relations are the same across languages. For instance, subject is claimed to be the same in all languages with grammatical relations. These assumptions do not seem to be supported by cross-linguistic data. First, different grammatical rules or constraints make reference to different subsets of argument roles, that is, different grammatical relations, in one and the same language. For instance, in Nepali, the verb agrees with the subject nominal (i.e. {S, A} vs {P}), but nominal case marking operates on an ergative–absolutive alignment basis (i.e. {S, P} vs {A}). This is illustrated in: ()

Nepali (Indic; Indo-European: Nepal)
a. ma   ga-ẽ
   SG   go-SG.PST
   'I went.'
b. mai-le   timro  ghar   dekh-ẽ
   SG-ERG   your   house  see-SG.PST
   'I saw your house.'

In Nepali, thus, different grammatical rules refer to different subsets of argument roles. In other words, it is not a case of one single subset of argument roles for all grammatical rules (for the domain-specific nature of grammatical relations, see Dryer ; Croft ; and Bickel ). Further examples of this kind of 'mismatch' come from languages such as Amharic and Burushaski, which organize the case marking of the T and R nominals on an indirective basis (P = T ≠ R; see §.), while allowing their verbal agreement to operate on a secundative basis (P = R ≠ T; see §.), as illustrated in:

() Amharic (Semitic; Afro-Asiatic: Ethiopia)
ləmma  lə-lɨǰ-u      məs’haf  sət’t’-ə(-w).
Lemma  to-child-DEF  book     give-PERF-M(-M.OBJ)
'Lemma gave the book to the child.'

() Burushaski (isolate: Pakistan)
ja     do:lʌt  uyo:n  u:ŋ-ər   gu-či-ʌm
I.ERG  wealth  whole  you-DAT  SG.OBJ-give-PST
'I have given you all my wealth.'

Moreover, more than one subset of argument roles may need to be invoked to account for one and the same grammatical domain. For
instance, in languages with the split-ergative system, which alignment system should be used depends on what types of nominal, e.g. pronouns, human nouns, inanimate nouns, are involved (§..). For instance, in Dyirbal the first and second person pronouns are case-marked on a nominative–accusative basis (i.e. {S, A} vs {P}), while all other types of nominal are case-marked on an ergative–absolutive basis (i.e. {S, P} vs {A}). To wit, relevant subsets of argument roles may be different not only for different grammatical domains but also even within one and the same grammatical domain. Second, the fact that different grammatical rules or constraints may refer to different subsets of argument roles in one and the same language suggests strongly that grammatical relations cannot be the same across languages. In some languages, the subject relation may be the basis on which a given grammatical domain operates, but in other languages the same grammatical domain operates on the basis of a different subset of argument roles, i.e. a different grammatical relation. Put differently, one and the same grammatical relation may have different grammatical properties in different languages. This will be further illustrated throughout the remainder of this chapter. When their distribution in grammatical domains—whether within individual languages or across languages—is discussed, grammatical relations are typically arranged in the form of a hierarchy, with the subject occupying the topmost position, as in (a). Following Bickel (), we can also represent the subject as the subset {S, A}, the object as the subset {P}, and the oblique as the subset {oblique}, as in (b).1 ()

a. subject > object > oblique
b. {S, A} > {P} > {oblique}

The rationale for hierarchizing the grammatical relations in this manner is that the higher the position of a grammatical relation, the more languages there are in which that grammatical relation is relevant to a given grammatical domain. What this entails for individual languages is that if a given grammatical relation participates in a grammatical rule or constraint, so will the higher grammatical relations. To put it differently, if the object relation is relevant to a grammatical rule, so is the subject relation; if the oblique relation participates in a grammatical rule, the higher grammatical relations, i.e. subject and object, do likewise. To wit, the hierarchy of grammatical relations is an implicational hierarchy.

1 The oblique is a cover term for various non-core, peripheral semantic roles such as location, time, and instrument.

If there is a need to recognize two different types of object, the hierarchy in (a) can be revised to:

() subject > direct object > indirect object > oblique

Further modification to the hierarchy can be envisaged (see also §.). For instance, indirect object may not be appropriate for some languages, because, as has already been discussed in §., either T or R can align with P in the domain of case marking. This entails that the hierarchy can be modified to (a) for the domain of case marking in particular. Moreover, in languages that operate on the basis of {S, P} vs {A}, a different hierarchy may need to be formulated, as in (b). For instance, we have already discussed a language in which the subset of {S, P}, in opposition to {A}, participates in case marking, i.e. Avar in () or Nepali in ().
()

a. {S, A} > {P, T} or {P, R} > {oblique}
b. {S, P, T} or {S, P, R} > {A} > {oblique}

Note that in (), depending on which of the two, T and R, aligns with P, either T or R joins the subset of {oblique} (see Bickel : ). For the remainder of this chapter, we will look at three grammatical domains in which grammatical relations play a crucial role, namely verbal agreement, relativization, and interclausal noun phrase ellipsis. We will see how these grammatical rules operate on the basis of grammatical relations. That is, it depends on the grammatical relation of a given nominal whether the nominal triggers agreement with the verb, whether relative clauses can or cannot be formed on the nominal, and whether the nominal, after its appearance in the first clause, can be left unexpressed in subsequent conjoined clauses. Moreover, in the case of verbal agreement and relativization, grammatical relations behave, within as well as across languages, in a hierarchical manner. That is, higher grammatical relations are more likely to participate in verbal agreement or relativization than lower ones are. Finally, we will briefly discuss, in terms of processing, why some grammatical relations hold more ‘relevance’ to grammatical rules or constraints than others. 


12.2 Agreement

Agreement refers to the presence in a linguistic expression (aka target) of grammatical, semantic, or other information (aka features), such as person, number, gender, or social status, the source of which lies with another linguistic expression (aka controller) in a grammatical context (aka domain), e.g. phrase or clause (see Corbett  for a cross-linguistic study of agreement, and Siewierska : – on person agreement). There are different kinds of agreement, e.g. between nominals and verbs, or between nouns and attributive adjectives; the domain of agreement to be discussed in this section is the clause. Typically, agreement in this domain takes place between a nominal (as controller) and a verb (as target), and agreement is normally thought to operate on the basis of grammatical relations (e.g. Blake : ; but see below). Moreover, agreement is commonly cited as a phenomenon in which grammatical relations operate in a hierarchical fashion: for instance, if the verb agrees with the object, it also agrees with the subject, but not necessarily the other way round (§..). The appearance of the suffix -s on the verb in English, discussed in §., is an example of subject–verb agreement. Languages, especially Indo-European ones, normally have verbal agreement with the subject (or {S, A}) only. Latin provides another example.

Latin (Italic; Indo-European)
a. puer  labora-t
   boy   work-SG
   'The boy is working.'
b. magister  puer-um  percuti-t
   teacher   boy-ACC  strike-SG
   'The teacher strikes the boy.'
c. magistrī  puer-um  percuti-unt
   teachers  boy-ACC  strike-PL
   'The teachers strike the boy.'

In Latin, the verb must agree with the subject in terms of person and number. There are, of course, non-Indo-European languages in which the verb agrees with the subject only. Evenki is one such example, as illustrated in: ()

Evenki (Tungusic; Altaic: Russia)
a. er(-il)    beje-l  tar   tugeni-du   eme-cho-tyn
   this(-PL)  man-PL  that  winter-DAT  come-PST-PL
   'These people came last winter.'
b. beje  homo:ty-va    va:cha-n
   man   bear-ACC.DEF  kill-PST-SG
   'The man killed the bear.'

In (), the verb agrees with the subject, regardless of whether it is S (a) or A (b). In languages such as Swahili, the verb agrees not only with the subject (as indicated by the prefix a-) but also with the (direct) object (as indicated by the prefix m-), as exemplified in:

() Swahili (Bantoid; Niger-Congo: Tanzania)
Ali  a-na-m-penda             m-wanamke  m-rembo
Ali  SG.SBJ-PRES-SG.OBJ-love  M-woman    M-beautiful
'Ali loves a beautiful woman.'

Tawala is another language in which the verb agrees with both the subject and the (direct) object, as in:

() Tawala (Oceanic; Austronesian: Papua New Guinea)
a. ataima  wawine-na  i-geleta       u    meyagai
   today   woman-DEF  SG.SBJ-appear  LOC  village
   'That woman arrived at the village today.'
b. kedewa  kamkam   i-uni-hi
   dog     chicken  SG.SBJ-kill-PL.OBJ
   'A dog killed the chickens.'

In languages such as Georgian, the verb agrees not only with the subject and direct object but also with the indirect object.

() Georgian (Kartvelian; Kartvelian: Georgia)
a. Merabi     amtknarebs
   Merab-NOM  he-yawns-IND
   'Merab is yawning.'
b. deda        k̦abas     k̦eravs           šentvis
   mother-NOM  dress-DAT  she-sews-it-IND  you-for
   'Mother is sewing a dress for you.'
c. Rezo      mačukebs            samajurs      (me)
   Rezo-NOM  he-gives-me-it-IND  bracelet-DAT  (me-DAT)
   'Rezo is giving a bracelet to me.'


A few languages are known to have agreement not only with the subject (a, b, c, and d), direct object (b), and indirect object (c) but also with the oblique (d), as in: ()

Pintupi-Luritja (Western Pama-Nyungan; Pama-Nyungan: Australia)
a. kungka-Ø=pula    ngarangu
   girl-NOM=DU.SBJ  stood
   'The two girls stood.'
b. kungka-Ø=pulanya=Ø      nyangu
   girl-NOM=DU.OBJ=SG.SBJ  saw
   'He saw the two girls.'
c. waṯunuma-tjanu-ku=li=ra              ngurrinma
   goanna-origin-DAT=DU.INCL.SBJ=SG.IO  searched
   'We two searched for the goanna (which had eaten grubs).'
d. ngurra-wana-Ø=tjananya=pula   ngarama  minyma  piṉi-ngka
   camp-along-NOM=PL.COM=DU.SBJ  stood    women   many-COM
   'Those two stood in the camp with the women.'

Note that in () the agreement expressions—e.g. one for the subject (i.e. =pula), and the other for the oblique/comitative (i.e. =tjananya) in (d)—are enclitics, not suffixes, that attach themselves to the first constituent of the clause, be it a verb or whatever else. In (d), for instance, the agreement expressions are cliticized to the end of the locative expression, which, in turn, hosts its own oblique suffix -wana. There are also languages, albeit small in number, where the verb agrees with {S, P} to the exclusion of {A}. One such example has already been discussed, namely Avar, as repeated in: ()

Avar (Avar-Andic-Tsezic; Nakh-Daghestanian: Azerbaijan and Russia)
a. w-as         w-ekér-ula
   M-child.NOM  M-run-PRES
   'The boy runs.'
b. inssu-cca      j-as         j-écc-ula
   (M)father-ERG  F-child.NOM  F-praise-PRES
   'Father praises the girl.'


While confining herself to person agreement, Siewierska (: –) also lists Karitiâna, Kolana (Kolana-Tanglapui; Timor-Alor-Pantar: Indonesia), Lak (Daghestanian; Nakh-Daghestanian: Russia), Palikur (Eastern Arawakan; Arawakan: Brazil), Trumai (isolate: Brazil), and potentially Canela Kraho (Je; Nuclear-Macro-Je: Brazil) as languages in which the verb agrees with {S, P} to the exclusion of {A}. For instance, consider:

() Karitiâna (Arikem; Tupian: Brazil)
a. yn  a-ta-oky-j        an
   I   SG-DEC-hurt-IRLS  you
   'I will hurt you.'
b. an   y-ta-oky-t        yn
   you  SG-DEC-hurt-NFUT  me
   'You hurt me.'
c. y-ta-opiso-t        yn
   SG-DEC-listen-NFUT  I
   'I listened.'
d. a-ta-opiso-i        an
   SG-DEC-listen-NFUT  you
   'You listened.'

What the foregoing examples demonstrate is that while it typically involves the primary or topmost grammatical relation (namely {S, A} or {S, P}), agreement may 'extend' its coverage to non-primary or lower grammatical relations in a continuous manner. So much so that if there is agreement with the oblique, we can say that there is also agreement with the subject, the direct object, and the indirect object; if there is agreement with the indirect object, we can say that there is also agreement with the subject and the direct object; and if there is agreement with the direct object, we can say that there is also agreement with the subject. The converse seems to be infrequently attested, e.g. agreement with the direct object but not with the subject; Siewierska and Bakker (: ), for instance, find only two such languages—Barai (Koiarian; Trans-New Guinea: Papua New Guinea) and Waurá (Central; Maipurean: Brazil)—in their sample of  languages. This strong tendency is indeed what makes the hierarchy of grammatical relations implicational in the first place. Nonetheless, reference to grammatical


relations may not always be sufficient when accounting for agreement in some languages. In Russian, for instance, it is not just the subject relation that is relevant to verbal agreement: in order to trigger verbal agreement, the subject nominal must also be in the nominative. In Hindi, the verb agrees with the subject if it is in the nominative, but it agrees with the direct object if it is the direct object, instead of the subject, that is in the nominative. (If neither the subject nor the direct object is in the nominative, the verb exhibits default agreement, i.e. masculine singular.) Thus, reference not only to grammatical relations but also to case may be necessary to explain how agreement works in some languages (Corbett : –).

In her cross-linguistic study (based on a sample of  languages), Siewierska (: –) makes some interesting observations about verbal person agreement. (Needless to say, whether these observations can be extended to languages with verbal agreement in terms of gender and/or number remains to be seen.) First, all languages with person agreement on intransitive verbs also have person agreement on transitive verbs, as also noted earlier by Moravcsik (: ). Second, person agreement with both A and P is more frequent than person agreement with either A or P only, while person agreement with A to the exclusion of P is more common than person agreement with P in preference to A. The relevant statistical data are presented in Table 12.1.

Table 12.1 Person agreement in transitive clauses relative to case alignment

Argument role   Nom–Acc (n = )   Erg–Abs (n = )   Active–Stative (n = )   Hierarchical (n = )
A and P          (.%)            (%)              (%)                     (%)
A                (.%)            (%)              (%)                     (%)
P                (.%)            (%)              (%)                     (%)
A or P           (%)             (%)              (%)                     (%)

The relative infrequency of person agreement with P in preference to A is not totally unexpected because person agreement based on {S, P} to the exclusion of {A} is not common, as has already been pointed out (e.g. Avar, Karitiâna).
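The survey in this section, from Latin (subject only) through Swahili and Tawala (subject and direct object) and Georgian (plus indirect object) to Pintupi-Luritja (plus oblique), can be summarized in a small sketch. The Python representation is our own hypothetical device: a single 'depth' integer per language suffices precisely because agreement coverage forms a continuous top segment of the hierarchy.

```python
# Illustrative summary (our own representation, data as described in
# the text) of how far verbal agreement extends down the hierarchy.
HIERARCHY = ["subject", "direct object", "indirect object", "oblique"]

# depth = number of hierarchy positions agreement covers, top down.
AGREEMENT_DEPTH = {
    "Latin":           1,  # subject only
    "Swahili":         2,  # subject + direct object
    "Georgian":        3,  # ... + indirect object
    "Pintupi-Luritja": 4,  # ... + oblique
}

for language, depth in AGREEMENT_DEPTH.items():
    # Coverage is always a continuous top segment: no skipping.
    print(language, "->", HIERARCHY[:depth])
```

A language that agreed with, say, the direct object but not the subject could not be encoded this way; that such systems are vanishingly rare is exactly what the implicational hierarchy claims.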


12.3 Relativization

Relativization is also a grammatical phenomenon in which grammatical relations have been shown to play an important role. The relative clause construction—the output of relativization—consists of two components: the head noun and the restricting clause. The semantic function of the head noun is to establish a set of entities, which may be called the domain of relativization, whereas that of the restricting clause is to identify a subset of the domain by imposing a semantic condition on it. In (), the head noun is the girl, and the restricting clause whom Mrs Freundlich had taught.

() The girl whom Mrs Freundlich had taught won a university scholarship.

In (), the domain of relativization is denoted by the head noun the girl. This domain is then 'narrowed down', as it were, to the subset that satisfies the condition expressed by the restricting clause whom Mrs Freundlich had taught. It is in this sense that the restricting clause has traditionally been understood to modify the head noun, hence the alternative label of attributive clause. Based on a sample of about fifty languages, Keenan and Comrie () demonstrate that, although languages vary with respect to which grammatical relations can or cannot be relativized on (in other words, which grammatical relation the head noun can bear inside the restricting clause), they do not vary randomly. There are regular patterns in the cross-linguistic variation instead. For instance, there are no languages in their sample that cannot relativize on the subject, although there are languages which can relativize on the subject only. Thus, all languages must have at least one relativization strategy whereby subjects are relativized on. This relativization strategy is referred to as the primary strategy.
There is also a very strong tendency for relativization strategies to apply to a continuous segment of the hierarchy of grammatical relations or of what Keenan and Comrie () refer to as the Accessibility Hierarchy (AH hereafter), presented in ().

() Accessibility Hierarchy
subject > direct object > indirect object > oblique > genitive > object of comparison
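The 'continuous segment' requirement can be stated as a simple check. The function below is a hypothetical sketch, not part of Keenan and Comrie's own apparatus: it tests whether the AH positions a given strategy relativizes on form one unbroken run.

```python
# Hypothetical check that a relativization strategy covers a
# continuous segment of the Accessibility Hierarchy (no skipping).
AH = ["subject", "direct object", "indirect object",
      "oblique", "genitive", "object of comparison"]

def continuous_on_ah(covered):
    """True if the covered positions form one unbroken run on the AH."""
    positions = sorted(AH.index(relation) for relation in covered)
    return positions == list(range(positions[0], positions[-1] + 1))

# A primary strategy reaching subject and direct object is well formed:
print(continuous_on_ah({"subject", "direct object"}))  # True
# A strategy skipping from subject straight to oblique would violate
# the AH's continuity requirement:
print(continuous_on_ah({"subject", "oblique"}))        # False
```

A non-primary strategy would pass the same check over its own segment; the AH claim is that every strategy, primary or not, covers a gap-free stretch.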


The primary strategy, which must by definition apply to the subject relation, may also continue to apply down to ‘lower’ relations on the AH, and at the point where it ceases to apply, other relativization strategies may or may not take over and apply to a continuous segment of the AH. English is one of the rare languages which can almost freely relativize on all the grammatical relations on the AH. This language thus serves as a good example by which the AH can be illustrated with respect to relativization. Consider: ()

English (Germanic; Indo-European: UK)
a. the girl who swam the Straits of Dover [subject]
b. the girl whom the boy loved with all his heart [direct object]
c. the girl to whom the boy gave a rose [indirect object]
d. the girl with whom the boy danced [oblique]
e. the girl whose car the lady bought for her son [genitive]
f. the girl who the boy is taller than [object of comparison]

Note that in () the grammatical relations that the head nouns bear in the restricting clauses are specified within square brackets. The majority of the world’s languages, however, are not so generous as English in their ability to form relative clauses. In fact, the very nature of the AH is grounded on the observation that there are more languages which can relativize on the subject than languages which can also relativize on the direct object, on the direct object than also on the indirect object, on the indirect object than also on the oblique, and so forth. This suggests strongly that ‘the further we descend the AH the harder it is to relativize’—with the AH perhaps reflecting ‘the psychological ease of comprehension’ (Keenan and Comrie : ; cf. Hawkins , , and ; see also §.). While this observation remains to be substantiated empirically by using larger language samples, Comrie and Kuteva (a and b) show that all languages in one of their two samples ( language) can relativize on the subject while ten languages in their other sample ( languages) cannot relativize on the oblique. Unfortunately, relativization on the other grammatical relations is outside the purview of Comrie and Kuteva’s (a and b) investigation. Two additional but related points follow from the preceding discussion. First, if it applies to a given grammatical relation on the AH, the primary strategy must then apply also to all ‘higher’ grammatical relations. For example, if the primary strategy in language X is known 


to relativize on the genitive, then a prediction can be made to the effect that it will also be able to relativize on the subject, direct object, indirect object, and oblique; if the primary strategy in language Y is known to relativize on the oblique, then a prediction can be made to the effect that it will also be able to relativize on the subject, direct object, and indirect object; and so forth. Second, all relativization strategies including the primary strategy may 'switch off' at any point on the AH but they should in principle not 'skip' on the AH. The validity of the AH can be justified if and when each grammatical relation of the AH proves to be the point at which primary strategies actually cease to apply, and also if non-primary relativization strategies commence to apply at, and to a continuous segment lower than, that point—provided, of course, that there are non-primary relativization strategies in use. Keenan and Comrie (, ) provide evidence in support of the AH in the manner just described. In many Western Malayo-Polynesian (Austronesian) languages including Javanese (Indonesia), Minangkabau (Indonesia), Malagasy (Madagascar), and Toba Batak (Indonesia) only subjects can be relativized on by the primary strategy. Also worth mentioning is that many European languages have participial relativization strategies which may relativize on the subject relation only. The following examples come from Malagasy, the basic clausal word order of which is VOS.

() Malagasy (Barito; Austronesian: Madagascar)
a. ny   mpianatra  izay  nahita  ny   vehivavy
   the  student    COMP  saw     the  woman
   'the student that saw the woman'
b. *ny  vehivavy  izay  nahita  ny   mpianatra
   the  woman     COMP  saw     the  student
   ('the woman that the student saw')

The relative clause in (b) is ungrammatical on the intended reading, in which the head noun bears the direct object relation in the restricting clause; (b) can only be grammatical on the reading 'the woman that saw the student'.
In (Literary) Welsh, the primary strategy applies to the subject and the direct object only (a). Other grammatical relations are taken care of by different relativization strategies in such languages. For instance, the non-primary relativization strategy expresses the head noun in the restricting clause by means of an anaphoric pronoun (b).


()

Welsh (Celtic; Indo-European: UK)
a. y    bachgen  a    oedd  yn  darllen
   the  boy      who  was   a'  reading
   'the boy who was reading'
b. dyma     'r   llyfr  y     darllenais  y    stori  ynddo
   here-is  the  book   that  I-read      the  story  in-it
   'Here is the book in which I read the story.'

Similarly, one of the primary strategies in Finnish works for the subject and the direct object only, as in: ()

Finnish (Finnic; Uralic: Finland)
a. Pöydällä  tanssinut      poika  oli  sairas
   on-table  having-danced  boy    was  sick
   'The boy who danced on the table was sick.'
b. Näkemä         poika  tanssi  pöydällä
   I-having-seen  boy    danced  on-table
   'The boy that I saw danced on the table.'

In Tamil, Basque (isolate: France and Spain), and Roviana (Oceanic; Austronesian: Solomon Islands), the three highest grammatical relations on the AH are relativized on by the primary strategy.2 In Tamil, the primary strategy involves a restricting clause with its verb in participial form -a (a, b, c), while the non-primary strategy retains the head noun in full form in the restricting clause as well as in the main clause (d). ()

Tamil (Southern Dravidian; Dravidian: India and Sri Lanka)
a. Jāṉ   pāt̩u-kiṟ-a     pen̩maṉi(y)-ai  kan̩-t̩-āṉ
   John  sing-PRES-PART  woman-DO       see-PST-SG.M
   'John saw the woman who is singing.'
b. anta  maṉitaṉ  at̩i-tt-a     pen̩maṉi(y)-ai  jāṉ   kan̩-t̩-āṉ
   that  man      hit-PST-PART  woman-DO       John  see-PST-SG.M
   'John saw the woman that that man hit.'
c. Jāṉ   puttakatt-ai(k)  kot̩i-tt-a     pen̩maṉi(y)-ai  nāṉ  kan̩-t̩-~eṉ
   John  book-DO          give-PST-PART  woman-DO       I    see-PST-SG
   'I saw the woman to whom John gave the book.'

2 In Basque, there is dialectal variation in whether other grammatical relations can also be relativized on. If they can, the non-primary pronoun-retention strategy comes into play.




d. eṉṉa(k)  katti(y)-āl̩  kor̩i(y)-ai   anta  maṉitaṉ
   which    knife-with    chicken-DO   that  man
   kolaippi-tt-āṉ  anta  katti(y)-ai  jāṉ   kan̩-t̩-āṉ
   kill-PST-SG.M   that  knife-DO     John  see-PST-SG.M
   'John saw the knife with which the man killed the chicken.' (lit. 'With which knife the man killed the chicken, John saw that knife.')

Languages which make use of the primary strategy to relativize from subjects all the way down to obliques include Catalan (Romance; Indo-European: Spain) and North Frisian (Fering dialect). In North Frisian, genitives are not relativizable, but the higher grammatical relations are accessible to relativization.

() Frisian (Germanic; Indo-European: Germany)
a. John kland det wuf 's henk
   'John stole the woman's chicken.'
b. det henk wat kland John
   'the chicken that John stole'
c. *det wuf wat's henk John kland
   ('the woman whose chicken John stole')

Keenan and Comrie (: ) point out that many well-known European languages, e.g. French, Spanish (Romance; Indo-European: Spain), German (Germanic; Indo-European: Germany), and Romanian (Romance; Indo-European: Romania), relativize on all grammatical relations including genitives but excluding objects of comparison by means of primary strategies. In French, for instance, it is not possible to relativize on the object of comparison, as in:

() French (Romance; Indo-European: France)
a. Marie  est  plus  grande  que   le   jeune  homme
   Mary   is   more  big     than  the  young  man
   'Mary is bigger than the young man.'
b. *le   jeune  homme  que   que   Marie  est  plus  grande
   the   young  man    than  whom  Mary   is   more  big
   ('the young man than whom Marie is bigger')

There are a few languages that relativize on all grammatical relations on the AH. English, Slovenian (Slavic; Indo-European: Slovenia), and


Urhobo are listed in Keenan and Comrie (: ) as uncommon across-the-board-relativizing languages. ()

Urhobo (Edoid; Niger-Congo: Nigeria)
oshale  na   l-i   Mary  rho  n-o
man     the  that  Mary  big  than-him
'the man that Mary is bigger than'

The preceding brief survey demonstrates that primary strategies apply to the subject relation and also possibly down to the ‘lower’ grammatical relations on the AH. Primary strategies may also stop at any point on the AH, and if that happens, and if relativization is permitted to continue even further down the AH, non-primary strategies may take over. There are also languages that regard the subset of {S, P}, instead of the subject relation (i.e. {S, A}), as the topmost grammatical relation on the AH, as in:3 ()

{S, P} > {A} > indirect object > oblique > genitive > object of comparison

In Dyirbal, for instance, the head noun can have almost any grammatical relation in the main clause (including oblique relations with the exception of the allative or ablative). However, the head noun must have the {S, P} relation in the restricting clause. Consider: ()

Dyirbal (Northern Pama-Nyungan; Pama-Nyungan: Australia)
a. ŋuma-ŋgu    yabu-Ø      duŋgara-ŋu-Ø  buᶉa-n
   father-ERG  mother-ABS  cry-REL-ABS   see-PST
   'Father saw mother, who was crying.'
b. ŋuma-Ø      yabu-ŋgu    buᶉa-ŋu-Ø    duŋgara-nyu
   father-ABS  mother-ERG  see-REL-ABS  cry-PST
   'Father, who mother saw, was crying.'
c. ŋuma-Ø      buᶉal-ŋa-ŋu-Ø      yabu-gu     duŋgara-nyu
   father-ABS  see-ANTIP-REL-ABS  mother-DAT  cry-PST
   'Father, who saw mother, was crying.'

In (a) the argument role of the head noun in the restricting clause is S, while in (b) the argument role of the head noun in the restricting

In (a) the argument role of the head noun in the restricting clause is S, while in (b) the argument role of the head noun in the restricting 3 Unfortunately, T and R, in relation to P, have not been examined in terms of accessibility to relativization. We will ignore the alignment of T or R with P in favour of indirect object for the purposes of illustration.




clause is P. In Dyirbal, relative clauses can be formed on either of these roles, which constitute the topmost grammatical relation or {S, P}. In (c), in contrast, the 'original' argument role of the head noun, ŋuma 'father', in the restricting clause is A, but that has changed to S by means of a detransitivizing operation, which in turn is signalled by the addition of the antipassive suffix -ŋa to the verb. The antipassivizing operation turns transitive clauses into intransitive ones, whereby A is changed to S, and P to peripheral status (in this case to a dative-marked nominal) (see §.. on antipassives). In other words, the relative clause in (c) is grammatical because the head noun bears the 'derived' role of S—instead of its 'original' role of A. Another language that behaves like Dyirbal in this respect is Chukchi (Northern Chukotko-Kamchatkan; Chukotko-Kamchatkan: Russia). There also seem to be languages that do not rely on such a detransitivizing operation in order to restrict relativization to S and P to the exclusion of A. In Oirata, for instance, relativization is possible only if the head noun is either S (a) or P (b). If the head noun has the argument role of A (c), relativization is blocked.

() Oirata (Timor-Alor-Pantar; Trans-New Guinea: Indonesia)
a. inte         ihar  mara-n  asi
   PL.EXCL.NOM  dog   go-REL  see
   'We saw the dog that had left.'
b. ihar  ante    asi-n    mara
   dog   SG.NOM  see-REL  go
   'The dog that I saw left.'
c. *ihar  ani     asi-n    mara
   dog    SG.ACC  see-REL  go
   ('The dog that saw me left.')

Incidentally, note that the case marking on the pronouns in Oirata is done on the basis of {S, A} as opposed to {P}—nouns are not case-marked in this language. In other words, different subsets of the argument roles (i.e. different grammatical relations) are used for different grammatical domains.
12.4 Noun phrase ellipsis under coreference

When more than one clause is strung together in a single sentence, a nominal mentioned in the first clause may be unexpressed in subsequent

OUP CORRECTED PROOF – FINAL, 27/11/2017, SPi

GRAMMATICAL RELATIONS

clauses. This grammatical phenomenon, known as noun phrase ellipsis under coreference in conjoined clauses, is illustrated by the English examples in (). ()

English (Germanic; Indo-European: UK)
a. The man returned and saw the woman.
b. The man saw the woman and returned.
c. *The woman returned and the man saw.
d. The woman returned and was seen by the man.

In (a), one of the two arguments is omitted from the second clause under identity with the (sole) argument the man in the first clause. In other words, (a) must be interpreted as ‘The man returned and the man saw the woman’ (the unexpressed argument with a strikethrough). In (b), the unexpressed argument in the second clause is also interpreted as identical to the argument the man in the first clause; (b) means ‘The man saw the woman and the man returned’. The unexpressed argument in the second clause can never be understood to be identical to the other argument the woman in the first clause. In other words, (b) does not mean ‘The man saw the woman and the woman returned’. In (c), the unexpressed argument in the second clause is supposed to be under identity with the argument the woman in the main clause (i.e. *The woman returned and the man saw the woman), but this is ungrammatical. If, however, the second clause in (c) is changed to a passive (i.e. from the man saw the woman to the woman was seen by the man; see §.. on the passive), the argument the woman can be omitted from the second clause, as in (d). The noun phrase ellipsis under coreference in English, exemplified in (), can be described as something like this: the controller (i.e. the expressed nominal in the first clause) and the target (i.e. the unexpressed nominal in the subsequent clause) of noun phrase ellipsis must be based on the subject relation, that is {S, A}. In other words, both the controller and the target must be either S or A; they cannot be P. Thus, in (a) the controller is S, and the target A (i.e. S = A); in (b) the controller is A, and the target S (i.e. A = S); in (d) both the controller and the target are Ss (i.e. S = S). In the ungrammatical sentence in (c), the controller is S but the target is P (i.e. S 6¼ P). The sentence in () further illustrates that noun phrase ellipsis under coreference can also be between an A controller and an A target (i.e. 
A = A); () means ‘The man saw the woman and the man rang the bell’. 


() English (Germanic; Indo-European: UK)
The man saw the woman and rang the bell.

In languages where the primary grammatical relation is {S, P}, noun phrase ellipsis under coreference may work differently from the way it works in languages such as English. In Dyirbal—which is arguably one of the most studied languages in terms of this and other similar grammatical phenomena—noun phrase ellipsis under coreference in conjoined clauses is carried out strictly on the {S, P} vs {A} basis. That is, the controller and the target of noun phrase ellipsis must be based on S = P, P = S, S = S, or P = P. In other words, coreference must be between the two argument roles that constitute the primary grammatical relation of {S, P}. First, consider the following basic clauses.

() Dyirbal (Northern Pama-Nyungan; Pama-Nyungan: Australia)
a. ŋuma-Ø banaga-nyu
   father-ABS return-NFUT
   ‘Father returned.’
b. ŋuma-Ø yabu-ŋgu buᶉa-n
   father-ABS mother-ERG see-NFUT
   ‘Mother saw father.’
c. yabu-Ø ŋuma-ŋgu buᶉa-n
   mother-ABS father-ERG see-NFUT
   ‘Father saw mother.’

When conjoined with each other, (a) and (b) share one nominal, ŋuma ‘father’. This particular nominal can thus be susceptible to ellipsis under coreference in the second conjoined clause, as in:

() Dyirbal (Northern Pama-Nyungan; Pama-Nyungan: Australia)
ŋuma-Ø banaga-nyu yabu-ŋgu buᶉa-n
father-ABS return-NFUT mother-ERG see-NFUT
‘Father returned and mother saw father.’

The controller nominal in the first conjoined clause is in S function, and the target of noun phrase ellipsis in the second conjoined clause is in P function. The order of (a) and (b) can be reversed, as in:

() Dyirbal (Northern Pama-Nyungan; Pama-Nyungan: Australia)
ŋuma-Ø yabu-ŋgu buᶉa-n banaga-nyu
father-ABS mother-ERG see-NFUT return-NFUT
‘Mother saw father and father returned.’


In (), again, the coreference is between the nominal in P function in the first conjoined clause, and the nominal in S function in the second conjoined clause. But when (a) and (c) are conjoined in either order, the nominal coreference across the clauses would hold on the S = A or A = S basis— ‘Father (S) returned and father (A) saw mother’ or ‘Father (A) saw mother and father (S) returned’. The outcome of this potential noun phrase ellipsis is ungrammaticality of the conjoined clauses. This is where the antipassive comes into play and contributes to maintenance of the grammatical constraint on noun phrase ellipsis under coreference in Dyirbal—that is, by changing the ‘problematic’ A to S. Thus, (a) and (c) can successfully be conjoined with noun phrase ellipsis in full force owing to the antipassive, as illustrated in: ()

Dyirbal (Northern Pama-Nyungan; Pama-Nyungan: Australia)
a. ŋuma-Ø banaga-nyu buᶉal-ŋa-nyu yabu-gu
   father-ABS return-NFUT see-ANTIP-NFUT mother-DAT
   ‘Father returned and father saw mother.’
b. ŋuma-Ø buᶉal-ŋa-nyu yabu-gu banaga-nyu
   father-ABS see-ANTIP-NFUT mother-DAT return-NFUT
   ‘Father saw mother and father returned.’

In (a), the controller nominal in the first conjoined clause is S, and the target nominal (or the unexpressed NP) in the second clause is also S because the latter nominal has been changed from its ‘original’ A function through antipassivization of the second clause. In (b), the nominal ŋuma in the first clause is also converted from A to S via antipassivization so that it can function as the controller of the unexpressed coreferential nominal in the second clause, which is in S function. The antipassive thus ensures that nominal ellipsis under coreference be confined to S or P to the exclusion of A. Other grammatical domains that also operate on the {S, P} vs {A} basis in Dyirbal are relativization (as already discussed in §.) and purposive-type conjoined clauses (see Dixon : – for further discussion). Note that the {S, P} vs {A} alignment, for instance, in noun phrase ellipsis cuts across case marking in Dyirbal. That is, noun phrase ellipsis does not pay heed to the actual case marking of coreferential nominals but operates strictly on the {S, P} vs {A} basis (Dixon : ). Recall from §.. that the first and second person pronouns in Dyirbal are based on nominative–accusative (or {S, A} vs {P}) alignment, and the 


other types of nominal on ergative–absolutive alignment (or {S, P} vs {A}). This is illustrated in:

() Dyirbal (Northern Pama-Nyungan; Pama-Nyungan: Australia)
a. ŋana-Ø banaga-nyu buᶉal-ŋa-nyu nyurra-ngu
   we.all-NOM return-NFUT see-ANTIP-NFUT you.all-DAT
   ‘We returned and we saw you all.’
b. ŋana-na jaja-ŋgu ŋamba-n buᶉal-ŋa-nyu nyurra-ngu
   we.all-ACC child-ERG hear-NFUT see-ANTIP-NFUT you.all-DAT
   ‘The child heard us and we saw you all.’

Dixon (: –) reports that other Pama-Nyungan languages and Mayan languages also show this kind of grammatical alignment of {S, P} vs {A}. The Mayan languages, for instance, operate a number of grammatical rules on the {S, P} vs {A} basis; only nominals in S or P function can be relativized, focused, negated, or questioned, with nominals in A function first needing to change to S through antipassivization. In Chukchi (Northern Chukotko-Kamchatkan; Chukotko-Kamchatkan: Russia), the antipassive construction is preferred with some verbs for wh-questions on A, whereas wh-questions on P must be formed in ergative clauses. Not all languages with the ergative–absolutive case alignment system exhibit the {S, P} vs {A} alignment in other grammatical domains. In fact, only a small portion of these languages are known to do so (Dixon : , ). For instance, languages such as Hindi (Indic; Indo-European: India), Basque (isolate: France and Spain), North-East Caucasian languages, and Papuan languages are organized completely on the {S, A} vs {P} basis in syntax, their (partial) ergative–absolutive case-marking morphology notwithstanding. Moreover, in languages which exhibit the {S, P} vs {A} alignment in grammar it is not necessarily the case that all grammatical rules or constraints operate on the {S, P} vs {A} basis. In Chukchi, the infinitive construction is based on the {S, A} vs {P} alignment with the nominals in S or A function unexpressed in the infinitive clause.
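The two ellipsis systems differ only in which grouping of argument roles serves as the pivot for coreference. This can be sketched in code (a toy illustration with invented function names; the Dyirbal rule below abstracts away from the antipassive operation that feeds the pivot):

```python
# Toy model of NP-ellipsis constraints under two alignment types.
# English-type systems require both the controller and the target of
# ellipsis to be in the {S, A} pivot; Dyirbal-type systems require
# both to be in the {S, P} pivot (A must first become S via the
# antipassive before it can participate).

PIVOT = {
    "English": {"S", "A"},   # nominative-accusative pivot
    "Dyirbal": {"S", "P"},   # ergative-absolutive pivot
}

def ellipsis_ok(language: str, controller: str, target: str) -> bool:
    """Can the target NP be omitted under coreference with the controller?"""
    pivot = PIVOT[language]
    return controller in pivot and target in pivot

# '*The woman returned and the man saw __': S controller, P target
print(ellipsis_ok("English", "S", "P"))   # False: ungrammatical in English
# 'Father returned and mother saw __':      S controller, P target
print(ellipsis_ok("Dyirbal", "S", "P"))   # True: grammatical in Dyirbal
```

The same function, with the pivot sets swapped, captures why English permits S = A coreference while Dyirbal blocks it unless antipassivization has applied.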
But in the negative participial construction nominals in S or P function in the participial clause can be relativized on to the exclusion of those in A function. Similarly, in Quiché (Mayan: Guatemala) reflexivization is organized on the {S, A} vs {P} basis to the effect that only A and S can function as the controller of reflexivization, 


whereas in relativization {S, P} vs {A} alignment is adopted so that it is S and P, not A, that can be relativized on. Dixon (: ) also reports that Yidiny (Northern Pama-Nyungan; Pama-Nyungan: Australia) and Tongan (Oceanic; Austronesian: Tonga) are similar to Chukchi and Quiché with some grammatical rules operating on the {S, A} vs {P} basis, and others on the {S, P} vs {A} basis.

12.5 Hierarchical nature of grammatical relations

As we have so far seen, grammatical relations seem to be hierarchically organized in such a way that higher grammatical relations are more likely than lower ones to play a role in grammatical domains such as agreement or relativization. Why, for instance, should the subject relation be more ‘relevant’ to grammatical rules or constraints than the object relation? What could possibly be the basis for the hierarchy of grammatical relations? Keenan and Comrie (: ), for instance, appeal, somewhat vaguely, to ‘the psychological ease of comprehension’ in an attempt to explain their cross-linguistic findings on relativization: (relativization on) higher grammatical relations is easier to process than (relativization on) lower ones. Hawkins (: –, : –, : –) interprets the rankings of the grammatical relations on Keenan and Comrie’s () AH in processing terms. Hawkins is in full agreement with Keenan and Comrie’s (: ) claim that the AH directly reflects ‘the psychological ease of comprehension’. But he also argues that Keenan and Comrie () are unable to substantiate their claim because they lack a structural complexity metric with which to quantify the psychological ease of comprehension. Hawkins (: ) proposes that his concept of Minimal Structural Domain does just that: ()

Minimal Structural Domain (Min SD hereafter) The Min SD of a node X in a constituent consists of the smallest set of structurally integrating nodes that are grammatically required to co-occur with X in the constituent.

Thus, the Min SD of the direct object nominal properly includes that of the subject nominal because the former cannot be constructed without the latter. To put it differently, the Min SD of the subject can be removed from that of the direct object, but not vice versa. This is demonstrated schematically in ().


()

S

NPi

VP

V

NPj

The Min SD of NPi (or the subject nominal) consists of S and VP; in other words, S and VP define the subject nominal structurally. The Min SD of NPj (or the direct object nominal), in contrast, consists of S, NPi, VP and V; in other words, at least four nodes must be counted—as opposed to two in the case of the subject nominal—in order to define the structural integration of NPj. Thus, the Min SD of NPj properly includes that of NPi. The thrust of Hawkins’s argument is: the more structurally complex a Min SD is, the more difficult it is to process in psychological terms. For instance, the direct object nominal is more difficult to process than the subject nominal simply because the former involves far more structural nodes to be counted than the latter. Based on this complexity metric, Hawkins (: –) goes on to demonstrate that for each grammatical relation on the AH the Min SD of Xi is always smaller than or equal to that of Xj (where Xi is more accessible to relativization than Xj), and that for most of the grammatical relations there is, in fact, a relation of (proper) inclusion as in between the subject and the direct object, defined in (). This is summarized in (). (Note that Hawkins (: ) does not discuss OCOMP on the AH but examines the genitive of each of the grammatical relations on the AH, not included in () for the sake of convenience (see Study question ).) ()

a. Min SD (subject) < Min SD (direct object)
b. Min SD (subject) < Min SD (indirect object), Min SD (direct object) < Min SD (indirect object)
c. Min SD (subject) < Min SD (oblique), Min SD (direct object) < Min SD (oblique), Min SD (indirect object) < Min SD (oblique)


d. Min SD (subject) < Min SD (genitive), Min SD (direct object) < Min SD (genitive), Min SD (indirect object) < Min SD (genitive), Min SD (oblique) < Min SD (genitive)

In particular, Hawkins (: –) makes use of this complexity metric—counting syntactic nodes, e.g. S, NP, V—in measuring the dependency between the head noun (or what he calls the ‘filler’) and the grammatical position relativized on (or what he calls the ‘gap’, that is, the grammatical relation borne by the head noun in the restricting clause, e.g. subject, direct object). Hawkins’s extension to the AH of the complexity metric based on the Min SD is promising not only because it provides a simple quantitative method of measuring structural complexity of the grammatical relations but also because it seems to be able to handle languages for which the status of grammatical relation may be in doubt or dispute. There are three issues that deserve some attention, however. First, in languages with {S, P} as the primary grammatical relation, it should be NPi that the grammatical relation of {S, P} maps on to, and A should, in turn, map on to NPj. This entails that the Min SD of P should be smaller than that of A. It is not clear from Hawkins (, , ) how this will be accommodated into conventional constituent structure such as in (). Second, though it may be able to quantify Keenan and Comrie’s accessibility to relativization, Hawkins’s processing-based interpretation begs the very question of why, for instance, the subject relation should in the first place have less structural complexity than the direct object relation. Third, while it makes much sense, in the context of Hawkins’s interpretation, that relative clauses formed on the subject are easier to process than relative clauses on the oblique, it does raise the question as to why higher grammatical relations are more likely to be overtly coded in the verb (i.e.
verbal agreement) than lower grammatical relations, when the former relations are easier to process than the latter. Shouldn’t it be the case that what is more difficult to process is more likely to be overtly coded in the verb? This suggests that there is more to the hierarchy of grammatical relations than processing ease or difficulty, measured solely in terms of nodes in constituent structure. Indeed Corbett (: –), for instance, mentions redundancy (i.e. facilitating the hearer’s comprehension), reference tracking (i.e. helping the hearer keep track of the different referents in a discourse), and marking constituency (i.e. indicating which elements are 


constituents) as other possible motivating factors for agreement, concluding that no single factor can explain agreement.
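The node-counting behind Hawkins’s Min SD metric, discussed in this section, can be made concrete with a short sketch. This is a toy formalization of my own (not Hawkins’s actual procedure): a node’s Min SD is approximated as, for every ancestor on the path to the root, the ancestor itself plus its other (obligatorily co-occurring) daughters, applied to the toy tree [S NPi [VP V NPj]].

```python
# Computing Hawkins-style Minimal Structural Domains over a toy
# constituent tree [S NP_i [VP V NP_j]]. The Min SD of a node is
# approximated here as: for every ancestor, the ancestor itself plus
# the labels of its other daughters (which must co-occur with it).

TREE = ("S", [("NP_i", []), ("VP", [("V", []), ("NP_j", [])])])

def min_sd(tree, target, path=()):
    """Return the set of structurally integrating nodes for `target`."""
    label, children = tree
    if label == target:
        sd = set()
        for parent, siblings in path:
            sd.add(parent)
            sd.update(siblings)
        return sd
    for i, child in enumerate(children):
        sibs = [c[0] for j, c in enumerate(children) if j != i]
        found = min_sd(child, target, path + ((label, sibs),))
        if found is not None:
            return found
    return None

print(sorted(min_sd(TREE, "NP_i")))  # ['S', 'VP']: 2 nodes for the subject
print(sorted(min_sd(TREE, "NP_j")))  # ['NP_i', 'S', 'V', 'VP']: 4 for the object
```

The output reproduces the counts given in the text: the subject’s Min SD (S, VP) is properly included in the direct object’s (S, NPi, VP, V), so by this metric the object is the more complex, and hence harder-to-process, relation.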

12.6 Concluding remarks

Grammatical relations play an important role in a range of grammatical rules or constraints. We need grammatical relations because other concepts such as argument roles do not enable us to describe grammatical phenomena such as verbal agreement in a consistent manner. We have also shown that grammatical relations can be understood to be groupings of argument or semantic roles. For instance, the subject relation stands for the subset of {S, A}. Grammatical relations, however, are not universal: they are not attested in every language. Some languages may not even have grammatical relations, and their grammatical rules can be described by making direct reference to argument roles, for example. Other languages may have grammatical relations but they may group argument roles differently from the way familiar languages do. For instance, in some languages no subject relation may exist, and the primary grammatical relation may instead consist of {S, P}. Moreover, different grammatical relations may participate in different grammatical rules or constraints, both within and across languages. Finally, the hierarchy of grammatical relations may have its roots in processing. Hawkins’s (, , ) work provides an important insight into the nature of the hierarchy of grammatical relations from the perspective of processing ease.

Study questions

1. The genitive relation in the AH is actually more complicated than it has been presented in this chapter. While the ‘possessor’ nominal, as the head noun of the relative clause construction, bears the genitive relation inside the restricting clause, the ‘possessum’ nominal (or, more precisely, the whole possessive phrase) also has its own grammatical relation within the restricting clause. For instance, consider:

(1) The woman whose son won the Mandela scholarship . . .
(2) The woman whose son Kate dated . . .
(3) The woman whose son Kate gave a book to . . .
(4) The woman whose son Kate went to school with . . .
(5) The woman whose son’s property Kate bought . . .
(6) The woman whose son Kate is taller than . . .

In all of the foregoing relative clauses in English, the head noun (i.e. the possessor) the woman holds the genitive relation in the restricting clause. In contrast, the possessum nominal (the woman’s) son bears different grammatical relations, i.e. subject in (1), direct object in (2), indirect object in (3), oblique in (4), genitive in (5), and object of comparison in (6). Choose two or more languages listed in Keenan and Comrie (: –) which can relativize on the genitive and for which you are able to collect relevant data, and find out what grammatical relation the possessum nominal bears when relative clauses can or cannot be formed on the genitive. Also determine whether the distribution of grammatical, as opposed to ungrammatical, genitive relative clauses follows the predictions of the AH (e.g. possible relativization on genitive–subject, genitive–direct object, and genitive–indirect object but impossible relativization on genitive–oblique, genitive–genitive, and genitive–object of comparison).

2. In Swahili (Bantoid; Niger-Congo: Tanzania), the verb agrees with the subject nominal. Identify the argument/semantic role of the subject nominal in each of the following sentences. What problem(s) would arise if we proposed that the verbal agreement in Swahili is based on semantic roles?

(1) Ahmed a-li-m-piga Bardu
    Ahmed he-PST-him-hit Bardu
    ‘Ahmed hit Bardu.’
(2) kitabu ki-li-anguka kutoka rafu-ni
    book it-PST-fall from shelf-LOC
    ‘The book fell from the shelf.’
(3) Juma a-na-m-penda Halima
    Juma he-PRES-her-love Halima
    ‘Juma loves Halima.’
(4) simba a-li-ua-w-a na wanakijiji
    lion he-PST-kill-PASS by villagers
    ‘The lion was killed by the villagers.’
(5) nyundo i-li-vunja dirisha
    hammer it-PST-break window
    ‘The hammer broke the window.’
(6) Fatuma a-li-p-ew-a zawadi na Halima
    Fatuma she-PST-give-PASS gift by Halima
    ‘Fatuma was given a gift by Halima.’


3. In a grammatical phenomenon known as external possession, the possessor nominal is syntactically a dependent nominal of the verb while interpreted to be a modifier of (one of) the other dependent nominal(s) of the verb, as illustrated in the following Korean examples. In (1a), the possessor nominal Keeho appears with the genitive case, forming a possessive phrase together with the (head) possessum nominal ttal. In (1b), the same possessor nominal is coded with the nominative case (as one of the dependent nominals of the verb), but understood to be the modifier of the other nominal ttal ‘daughter’, which is also coded with the nominative case. Consider the Korean data below and ascertain whether we can predict which grammatical relation the possessor nominal assumes in external possession, and also whether these predictions can be captured by means of an implicational hierarchy. (Note that there are other factors, semantic or otherwise, that also have a bearing on external possession in Korean, but the situation has been simplified for the purpose of this question.) Assume that in Korean the subject is coded with the nominative, the direct object with the accusative, and the indirect object with the dative.

(1) a. kiho-uy ttal-i chakha-ta
       Keeho-GEN daughter-NOM good.natured-IND
       ‘Keeho’s daughter is good-natured.’
    b. kiho-ka ttal-i chakha-ta
       Keeho-NOM daughter-NOM good.natured-IND
       ‘Keeho’s daughter is good-natured.’

(2) a. kiho-ka Yenghi-uy cip-eyse kel-e-nao-ass-ta
       Keeho-NOM Yonghee-GEN house-from walk-PF-exit-PST-IND
       ‘Keeho walked out of Yonghee’s place.’
    b. *kiho-ka Yenghi-hantheyse cip-eyse kel-e-nao-ass-ta
       Keeho-NOM Yonghee-from house-from walk-PF-exit-PST-IND
       ‘Keeho walked out of Yonghee’s place.’

(3) a. kiho-ka Yenghi-uy phal-ul cap-ass-ta
       Keeho-NOM Yonghee-GEN arm-ACC hold-PST-IND
       ‘Keeho held Yonghee’s arm.’
    b. kiho-ka Yenghi-lul phal-ul cap-ass-ta
       Keeho-NOM Yonghee-ACC arm-ACC hold-PST-IND
       ‘Keeho held Yonghee’s arm.’

(4) a. kiho-ka Yenghi-uy chinkwu-eykey ton-ul cwu-ess-ta
       Keeho-NOM Yonghee-GEN friend-DAT money-ACC give-PST-IND
       ‘Keeho gave Yonghee’s friend money.’
    b. *kiho-ka Yenghi-eykey chinkwu-eykey ton-ul cwu-ess-ta
       Keeho-NOM Yonghee-DAT friend-DAT money-ACC give-PST-IND
       ‘Keeho gave Yonghee’s friend money.’

(5) a. kiho-uy atul-i kkochpyeng-ul kkayttuli-ess-ta
       Keeho-GEN son-NOM flower.vase-ACC break-PST-IND
       ‘Keeho’s son broke a flower vase.’
    b. kiho-ka atul-i kkochpyeng-ul kkayttuli-ess-ta
       Keeho-NOM son-NOM flower.vase-ACC break-PST-IND
       ‘Keeho’s son broke a flower vase.’

4. The subject in English has a number of defining properties, e.g. subject–verb agreement. Find a language or two that you are familiar with and that has/have the subject relation, and compare it/them with English in terms of subjecthood properties. For a list of possible subjecthood properties, refer to Keenan ().

Further reading

Bickel, B. (). ‘Grammatical Relations Typology’, in J. J. Song (ed.), The Oxford Handbook of Linguistic Typology. Oxford: Oxford University Press, –.
Corbett, G. G. (). Agreement. Cambridge: Cambridge University Press.
Dixon, R. M. W. (). Ergativity. Cambridge: Cambridge University Press.
Keenan, E. (). ‘Towards a Universal Definition of “Subject”’, in C. Li (ed.), Subject and Topic. New York: Academic Press, –.
Keenan, E. and Comrie, B. (). ‘Noun Phrase Accessibility and Universal Grammar’, Linguistic Inquiry : –.
Siewierska, A. (). Person. Cambridge: Cambridge University Press.


OUP CORRECTED PROOF – FINAL, 20/11/2017, SPi

13 Valency-manipulating operations

13.1 Introduction
13.2 Change of argument structure with change of valency
13.3 Change of argument structure without change of valency
13.4 Concluding remarks

13.1 Introduction

In §. and §., it was demonstrated how languages such as Dyirbal cope with grammatical constraints by taking advantage of antipassivization. For instance, Dyirbal allows relative clauses to be formed on S or P nominals, not on A nominals, but the Australian language circumvents this constraint on relativization by changing A to S by means of antipassivization. The outcome of grammatical operations such as antipassivization is a manipulation of what is known as valency (or valence in the US) (see Malchukov and Comrie  for valency patterns in selected languages; also Dixon and Aikhenvald ). The notion of valency is said to have been borrowed from chemistry into linguistics. In chemistry, the valency of a chemical element is defined as the capacity of that element for combining with a fixed number of atoms of another element, for instance, two hydrogen atoms combining with one atom of oxygen in the case of water (i.e. H₂O). The notion of valency, when applied to verbs, refers to the number of arguments required for a given verb to form sentences. For example, the verb die is regarded as having a valency of a single


argument. Verbs such as die are characterized as one-place (or monovalent) verbs. Verbs such as kill are two-place (or divalent) verbs, because they require two arguments. Verbs such as give are three-place (or trivalent) verbs, because they require three arguments. The number of arguments required by the verb (or the predicate when referred to in relation to arguments) is specified in the argument structure of that verb. Argument structure also contains information concerning the relation between the verb and its argument(s). For instance, in the sentence The councillor hit the mayor, the verb hit and its arguments the councillor and the mayor have semantic relationships between them with the effect that the councillor is the agent that carried out the hitting, and the mayor is the patient that was affected by the agent’s hitting. The change of valency brought about by antipassivization, for instance, entails a reduction in the number of arguments, typically from two to one. In other words, antipassivization is a detransitivizing operation that converts a transitive clause (with two arguments) into what is, to all intents and purposes, an intransitive clause (with one argument). In the remainder of this chapter, we will examine a number of valency-manipulating operations, including antipassivization, by illustrating how argument structure can be altered with a concomitant change of valency. We will then describe how certain valency-manipulating operations may give rise to argument restructuring without a change of valency.
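The argument-counting idea can be sketched as follows (an illustrative toy with invented function names; the valency figures are the ones just given for die, kill, and give):

```python
# Verb valency as argument count: one-place (monovalent), two-place
# (divalent), and three-place (trivalent) verbs.
VALENCY = {
    "die": 1,    # monovalent: {S}
    "kill": 2,   # divalent:  {A, P}
    "give": 3,   # trivalent: {A, P, recipient}
}

def is_saturated(verb: str, arguments: list) -> bool:
    """Does the clause supply exactly the arguments the verb requires?"""
    return len(arguments) == VALENCY[verb]

print(is_saturated("die", ["the man"]))   # True
print(is_saturated("kill", ["the man"]))  # False: one argument short
```

A valency-manipulating operation such as antipassivization can then be thought of as rewriting an entry in such a table, e.g. taking a divalent verb to a derived monovalent one.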

13.2 Change of argument structure with change of valency

There are two main mechanisms through which the argument structure of the verb can be manipulated with a concomitant change of valency: argument reduction and argument addition (see Malchukov : –). In argument reduction, one of the arguments of the basic verb may be demoted to adjunct status. In argument addition, in contrast, an adjunct may be allowed to acquire argument status, or a new or additional argument may be introduced into the argument structure. There are five major valency-manipulating operations: passives, antipassives, noun incorporation, applicatives, and causatives. The first three belong to argument reduction, whereas the last two fall


under argument addition. Note, however, that noun incorporation and causatives may also involve argument reduction and addition concurrently (§.).

13.2.1 Passive

Though widely discussed in linguistics, the passive is something that not all languages seem to have. For instance, Siewierska () reports that in her sample of  languages, .% ( languages) have passives while .% ( languages) lack passives. Geographically speaking, languages with passives are most commonly found in Eurasia and Africa, and regularly found in the Americas—notably in North America—while they are less frequently attested in South East Asia and the Pacific; few Australian languages are known to have passives while no languages in New Guinea seem to have passives (Siewierska ). It is possible that the majority of the world’s languages may not have passives. The passive is typically found in languages with the nominative–accusative alignment system (i.e. A = S ≠ P; §..). In the passive clause, the A nominal of the active transitive clause is demoted to an adjunct ‘agent’ phrase—frequently marked by an oblique adposition or non-core case, e.g. instrumental, locative, genitive—while the P nominal of the active transitive clause is promoted to S, with the effect that the total number of arguments is reduced from two in the active clause to one in the corresponding passive clause.1 Q’eqchi’ and Korean provide good examples of the passive.

()

Q’eqchi’ (Mayan: Guatemala)
a. x-at-in-bok (lia̱n)
   TNS-–-call (I)
   ‘I called you.’
b. x-at-bok-e’ (la̱at) (in-ban)
   TNS--call-PASS (you) (I-by)
   ‘You were called by me.’

1 The label ‘agent’ phrase is misleading in that the A of the active clause can bear semantic roles other than agent. However, this is the standard term used in the literature.



() Korean (isolate: Korea)
a. kakey cwuin-i totwuk-ul cap-ess-ta
   shop keeper-NOM thief-ACC catch-PST-IND
   ‘The shopkeeper caught the thief.’
b. totwuk-i (kakey cwuin-eke) cap-hi-ess-ta
   thief-NOM (shop keeper-by) catch-PASS-PST-IND
   ‘The thief was caught (by the shopkeeper).’

The verb in the active clause (a) carries verbal prefixes for both A and P while the verb in the passive clause (b) only registers the presence of S (corresponding to the P of the active clause) since the A nominal of the basic transitive clause is now demoted to an adjunct nominal, coded with an oblique case suffix -ban (hence ineligible for registration in the verb). Similarly, in (a) the active clause, with two arguments (i.e. A and P), is transitive. In (b), in contrast, the passive clause is intransitive; the P nominal of the active clause is now in S function, and the A nominal of the active clause has been reduced to an adjunct nominal, coded with the oblique postposition. This change of valency is signalled in the verb morphology by a passive suffix, i.e. -e’ in Q’eqchi’ and -hi in Korean, and also by the fact that the agent phrase is optional—that is, it can be left out without causing ungrammaticality. Languages such as German and Latin make use of a periphrastic element—an auxiliary verb—in the verb phrase instead of a passive affix, as in:

() German (Germanic; Indo-European: Germany)
Hans wurde (von seinem Vater) bestraft
Hans became (by his father) punished
‘Hans was punished (by his father).’

() Latin (Italic; Indo-European)
Darius (ab Alexandro) victus est
Darius (by Alexandro) conquered is
‘Darius was conquered (by Alexander).’

Cross-linguistically, auxiliary verbs used in passives tend to be verbs of being or becoming (‘be’ or ‘become’), verbs of reception (‘get’, ‘receive’), verbs of motion (‘go’, ‘come’), and verbs of experiencing (‘suffer’, ‘touch’). The use of auxiliary verbs in passives is commonly found in (modern) Indo-European languages.
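Personal passivization as described here (A demoted to an omissible agent adjunct, P promoted to S) can be sketched as a toy operation on argument structures. The dictionary representation and function name below are my own illustrative assumptions, not a claim about any particular framework:

```python
# A sketch of passivization as argument reduction: the A of the
# active clause is demoted to an optional oblique adjunct (the
# 'agent' phrase) and P is promoted to S, so valency drops from
# two arguments to one.

def passivize(clause):
    """Turn a toy active transitive clause into its personal passive."""
    assert set(clause["arguments"]) == {"A", "P"}
    return {
        "arguments": {"S": clause["arguments"]["P"]},     # P promoted to S
        "adjuncts": {"agent": clause["arguments"]["A"]},  # A demoted, omissible
    }

active = {"arguments": {"A": "the shopkeeper", "P": "the thief"}}
passive = passivize(active)
print(passive["arguments"])       # {'S': 'the thief'}
print(len(passive["arguments"]))  # 1: valency reduced from 2 to 1
```

An impersonal passive, discussed below, would instead drop A entirely while leaving P's coding untouched, which this sketch does not model.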
The agent phrases in (–) are all enclosed within parentheses to indicate that they can be optionally left out. This is not surprising at all 
because the agent phrase is an adjunct, not required to form a complete sentence. There are, however, languages that do not tolerate the presence of the agent phrase, its adjunct status notwithstanding. In Kutenai and Ulcha, for instance, the agent phrase can never be expressed in the passive.

() Kutenai (isolate: Canada and US)
   la ¢inamnal-il-ni ʔinlak ʔa·kitlanamis
   back take-PASS-IND chicken.hawk tent
   ‘Chicken Hawk was taken back to the tent.’

() Ulcha (Tungusic; Altaic: Russia)
   ti dūse-we hōn-da ta-wuri
   DEM tiger-ACC how-Q do-PASS
   ‘What’s to be done about that tiger?’

Note that Ulcha illustrates a further variation on the passive in that in some languages—e.g. Nanai (Tungusic; Altaic: Russia), Finnish (Finnic; Uralic: Finland), and Ute (Numic; Uto-Aztecan: US)—the passive may involve only demotion of the A nominal of the active clause (to nothing in this case), with the P nominal remaining intact or retaining its P-coding. Thus, in () the nominal dūse is still coded with the accusative case, which is used for the coding of P, and the verb is suffixed with the passive morpheme -wuri. To wit, the passive clause in (), with one argument, is intransitive. Passives involving only A demotion are known as impersonal passives, ‘impersonal’ in the sense that there is no overt lexical subject. Impersonal passives thus contrast with personal passives, which involve not just A demotion but also P promotion (as exemplified by Q’eqchi’, Korean, Latin, German, and Kutenai). Note that personal passives have overt lexical subjects because they involve promotion of P to S. Kannada illustrates well this contrast between personal and impersonal passives in one and the same language. Compare:

() Kannada (Southern Dravidian; Dravidian: India)
a. i: nirNayav-annu khaNDisala:yitu
   this resolution-ACC denounce.INFV.become-SG.N
   ‘This resolution was denounced.’
b. huDugar-inda ba:vuTa ha:risalpaTTitu
   boys-INST flag-NOM fly.INFV.PASS.PST.SG.N
   ‘The flag was flown by the boys.’

The example in (a) is an impersonal passive in that P remains as P, while A is left unexpressed, while the example in (b) is a personal passive, with P promoted to S (as coded with the nominative case) and A reduced to an adjunct phrase (as coded with the instrumental case). Furthermore, the verb in the impersonal passive agrees with nothing in the clause—it must always be in the third person singular neuter— while the verb in the personal passive agrees with the derived S (or the erstwhile P). Some other languages may form impersonal passives on the basis of intransitive clauses, with only one argument, namely S.2 This type of passive is found in Classical Greek (Hellenic; IndoEuropean: Greece), Dutch, German, Hindi (Indic; Indo-European: India), Kannada, Latin, Lithuanian (Baltic; Indo-European: Lithuania), Shona (Bantoid; Niger-Congo: Zimbabwe), Spanish (Romance; Indo-European: Spain), Turkish, and Tarahumara (Tarahumaran; Uto-Aztecan: Mexico). Examples of impersonal passives formed on intransitive clauses come from Dutch and Turkish.3 () Dutch (Germanic; Indo-European: Netherlands) Er wordt (door de jongens) gefloten there became (by the young.men) whistled ‘There was some whistling (by the young men).’ () Turkish (Turkic; Altaic: Turkey) Ankara-ya gid-il-di Ankara-to go-PASS-PST ‘It was gone to Ankara.’ Dutch allows the agent phrase to be optionally expressed in impersonal passives (as indicated by the enclosing parentheses), while Turkish does not express the agent phrase at all in the impersonal passive. Note that Dutch and Turkish have personal passives as well. Other languages with both personal and impersonal passives include German, Hindi, Kannada, Lithuanian, and Spanish. In languages with both personal and impersonal passives, the general tendency is said to be that if the agent phrase is permitted in personal passives, it also appears in impersonal passives. 
Siewierska () reports that there are also Incidentally, Keenan and Dryer (: ) propose an implicational statement: if a language has passives of intransitive verbs, it also has passives of transitive verbs. 3 While the so-called dummy er has been claimed by some grammarians to be the subject of (), it is not lexical; () lacks an overt lexical subject. 2



OUP CORRECTED PROOF – FINAL, 20/11/2017, SPi

13 . 2 C H A N G E O F A R G U M E N T S T R U C T U R E W I T H C H A N G E O F V A L E N C Y

languages that have only impersonal passives, e.g. Kolami (Central Dravidian; Dravidian: India), Ute, and Zuni (isolate: US). Passives formed on ditransitive clauses raise an interesting question: which of the two non-A arguments, i.e. T(heme) and R(ecipient), is promoted to S in the passive (see §. on T and R in ditransitive clauses). Languages such as French promote the T argument, not the R argument, to S (). In contrast, languages such as Yindjibarndi promote the R argument, not the T argument, to S (). And yet, languages such as Kinyarwanda are flexible enough to allow either the T or the R argument to appear as S in the passive (). () French (Romance; Indo-European: France) a. Jean a donné le livre à Pierre Jean has given the book to Pierre ‘Jean gave the book to Pierre.’ b. Le livre a été donné à Pierre the book has been given to Pierre ‘The book was given to Pierre.’ c. *Pierre a été donné le livre Pierre has been given the book ‘Pierre was given the book.’ () Yindjibarndi (Western Pama-Nyungan; Pama-Nyungan: Australia) a. ngaara yungku-nha ngayu murla-yi man give-PST SG.OBJ meat-OBJ ‘A man gave me the meat.’ b. ngayi yungku-nguli-nha murla-yi ngaarta-lu I give-PASS-PST meat.OBJ man-INST ‘I was given the meat by a man.’ c. *murla yungku-nguli-nha ngayu ngaarta-lu meat give-PASS-PST SG.OBJ man-INST ‘The meat was given to me by a man.’ () Kinyarwanda (Bantoid; Niger-Congo: Rwanda) a. umugabo y-a-haa-ye umugóre igitabo man he-PST-give-ASP woman book ‘The man gave the woman the book.’ b. umugóre y-a-haa-w-e igitabo n-ûmugabo woman she-PST-give-PASS-ASP book by-man ‘The woman was given the book by the man.’ 

c. igitabo cy-a-haa-w-e umugóre n-ûmugabo
   book it-PST-give-PASS-ASP woman by-man
   ‘The book was given to the woman by the man.’

So far, we have shown that it is arguments, i.e. P, T, and R, that are promoted to S in (personal) passives. There are, however, languages that have a highly ‘developed’ passivizing mechanism, whereby not only arguments but also adjuncts can be promoted directly to S (e.g. Keenan and Dryer : –; but cf. Siewierska : –,  for non-passive analyses). These languages typically come from the Austronesian family. Malagasy provides examples of such a highly developed passivizing mechanism. Note that the basic word order in Malagasy is VOS.

()

Malagasy (Barito; Austronesian: Madagascar)
a. nanasa ny lamba amin-ny savony Rasoa
   washed the clothes with-the soap Rasoa
   ‘Rasoa washed the clothes with the soap.’
b. nosasan-dRasoa amin-ny savony ny lamba
   washed-by.Rasoa with-the soap the clothes
   ‘The clothes were washed with the soap by Rasoa.’
c. nanasan-dRasoa ny lamba ny savony
   washed.with-by.Rasoa the clothes the soap
   ‘The soap was washed the clothes with by Rasoa.’

The verb forms in (b) and (c) are different, and these different verbal forms indicate what has been promoted to S in the passive, whether it is an argument (e.g. P in (b)) or an adjunct (e.g. instrumental in (c)).

13.2.2 Antipassive

The antipassive is often regarded as the counterpart of the passive in languages with ergative–absolutive case alignment. Polinsky (a), however, does not detect any ‘principled’ correlation between the antipassive and ergative–absolutive alignment; there are nominative–accusative languages that have antipassives, e.g. Acoma (Keresan: US), Chamorro (Austronesian: Guam), Lango (Nilotic; Nilo-Saharan: Uganda), Lavukaleve (Solomons East Papuan: Solomon Islands), and Sanuma (Yanomam: Brazil and Venezuela). Moreover, it should not be assumed that the passive is never found in languages with ergative–absolutive case alignment; both the passive and the antipassive may occur in the same language with ergative–absolutive case alignment (e.g. Basque (isolate: France and Spain), Greenlandic Inuktitut (Eskimo; Eskimo-Aleut: Greenland), and Mam (Mayan: Guatemala)). For instance, Mam has one antipassive construction, and at least four different types of passive.

The antipassive does not seem to be commonly attested in the world’s languages. For instance, only  languages (.%) in Polinsky’s (a) sample of  languages are reported to have antipassives. Moreover, no particular language families or geographical areas stand out as strongholds of the antipassive (Polinsky a).

In the antipassive, the A argument of the basic transitive clause remains the (sole) argument of the corresponding antipassive (albeit now functioning as S and coded with the absolutive case), with the P argument demoted to adjunct status or optionally omitted. If and when expressed, however, the erstwhile P argument is coded accordingly with the dative or oblique case. This suggests that the antipassive is as much intransitive or, specifically, detransitivized as is the passive: a decrease in valency from two (i.e. A and P) to one (i.e. S) argument. The verb in the antipassive also bears an affix as an indication of this valency decrease. Dyirbal provides an example of the antipassive (b), as opposed to the corresponding transitive clause (a):

() Dyirbal (Northern Pama-Nyungan; Pama-Nyungan: Australia)
a. balan dʸugumbil baŋgul yaᶉa-ŋgu balga-n
   CLF.ABS woman-ABS CLF.ERG man-ERG hit-NFUT
   ‘The man is hitting the woman.’
b. bayi yaᶉa bagun dʸugumbil-gu balgal-ŋa-nʸu
   CLF.ABS man-ABS CLF.DAT woman-DAT hit-ANTIP-NFUT
   ‘The man is hitting the woman.’

A similar situation is found in Greenlandic Inuktitut.
In this language, as with Dyirbal, A is coded with the ergative case, and P with the absolutive case in the basic transitive clause (a). In the antipassive (b), the erstwhile A argument is coded with the absolutive case, with the erstwhile P argument demoted to peripheral status, coded with the instrumental case. The verb accordingly registers a change of transitivity by means of the antipassive suffix -NNig. 

()

Greenlandic Inuktitut (Eskimo; Eskimo-Aleut: Greenland)
a. arna-p niqi-Ø niri-vaa
   woman-ERG meat-ABS eat-IND.SG
   ‘The woman ate the meat.’
b. arnaq-Ø niqi-mik niri-NNig-puq
   woman-ABS meat-INST eat-ANTIP-IND.SG
   ‘The woman ate some of the meat.’

In some languages, the demoted P must be completely eliminated from the antipassive clause, just as the demoted A must be in some languages with the passive (§..). Bandjalang is one such language, as illustrated in (). ()

Bandjalang (South-Eastern Pama-Nyungan; Pama-Nyungan: Australia)
a. mali-yu ᶁa:ᶁam-bu mala-Ø bulan-Ø ᶁa-ila
   that-ERG child-ERG that-ABS meat-ABS eat-PRES
   ‘The child is eating the meat.’
b. mala-Ø ᶁa:ᶁam-Ø ᶁa-le-ila
   that-ABS child-ABS eat-ANTIP-PRES
   ‘The child is eating.’

Polinsky (a) reports that among  antipassive-possessing languages in her sample,  behave like Dyirbal and Greenlandic Inuktitut in expressing the erstwhile P argument as an oblique nominal, while  behave like Bandjalang in that they require the erstwhile P argument to be unexpressed. The antipassive typically has a semantic function of expressing an uncompleted or habitual activity or a nonspecific P. In Kabardian, for instance, the antipassive is used to express incompleteness of the action and thus a partially affected P. ()

Kabardian (North-West Caucasian: Russia)
a. ħe-m qẃɨpŝħe-r je-dzaq’e
   dog-ERG bone-ABS bite
   ‘The dog bites the bone (through to the marrow).’
b. ħe-r qẃɨpŝħe-m je-w-dzaq’e
   dog-ABS bone-ERG ANTIP-bite
   ‘The dog is gnawing the bone.’

In Bezhta (Avar-Andic-Tsezic; Nakh-Daghestanian: Russia), the antipassive is used to express the meaning of potential mood, in contrast 
with the corresponding transitive clause. Cooreman () provides a useful overview of semantic/pragmatic functions of the antipassive based on a sample of nineteen ‘ergative’ languages. She (: , ) also attempts to offer a general definition of the various semantic and/or pragmatic functions by reference to the difficulty with which the P of a proposition is or can be clearly and uniquely identified.

13.2.3 Noun incorporation

Noun incorporation is another type of valency-manipulating operation found in some languages of the world, whereby one of the arguments of the clause is incorporated into the verb, thus losing its status as an argument of the clause (Mithun , ; Gerdts ; Massam ; also see §.). It has been observed that among S, A, and P it is P that is most likely to be incorporated into the verb, and that, if a language incorporates one argument in addition to P, then it will be S, not A. Thus, A is reported to be most resistant to noun incorporation. This tendency seems to be true of noun-incorporating languages regardless of their case alignment. When one of the arguments of the clause is incorporated into the verb, the resulting larger verb stem may function as a single word, undergoing all word-internal phonological processes. In Chukchi, for instance, vowels of incorporated nouns must participate in word-internal vowel harmony in conjunction with host verbs (e.g. kupre in (a) vs kopra in (b)).

() Chukchi (Northern Chukotko-Kamchatkan; Chukotko-Kamchatkan: Russia)
a. tumg-e na-ntəwat-ən kupre-n
   friends-ERG set-TR net-ABS
   ‘The friends set the net.’
b. tumg-ət kopra-ntəwat-gʔat
   friend-NOM net-set-INTR
   ‘The friends set nets.’

Note that with P incorporated into the verb, the A argument of the basic transitive clause in (a) loses its ergative marking and is instead coded with the nominative case (i.e. S) in (b). Moreover, noun incorporation causes the verb to shed its transitive suffix -ən in favour of the intransitive suffix -gʔat. In some languages, the incorporated noun and the verb may simply be juxtaposed to form a syntactic unit, within which they remain
separate words phonologically but they behave grammatically as a tightly bonded unit.4 This simple juxtaposition of the incorporated noun and the verb is found very commonly in Oceanic languages. In Kosraean, for instance, the completive aspect suffix -læ must immediately follow the verb in the basic transitive clause but in the case of noun incorporation it follows the entire incorporated-noun-verb complex. ()

Kosraean (Oceanic; Austronesian: Micronesia)
a. nga ɔl-læ nuknuk ɛ
   I wash-COMPL clothes the
   ‘I finished washing the clothes.’
b. nga owo nuknuk læ
   I wash clothes COMPL
   ‘I finished washing clothes.’

When incorporated into the verb, the P nominal loses its independent argument status. This should entail a change in transitivity (read: a change of valency). For instance, the basic clause in (a) is transitive while the noun-incorporation clause in (b) is intransitive, as clearly indicated by the presence of the intransitive suffix in the verb complex. However, this change in transitivity is not attested in all languages with noun incorporation. For instance, in Southern Tiwa (Kiowa-Tanoan: US), the incorporated P nominal continues to control object agreement, while in Rembarnga (Gunwinyguan: Australia) the A nominal, in spite of the P nominal incorporated into the verb, continues to be coded with the ergative case, which is used for A in transitive clauses. In §., we will revisit noun incorporation as an operation that manipulates argument structure without a change of valency.

Incorporated Ps tend to be non-referential, inanimate, and thus non-individuated. This is evident from (b) and (b), for example. Thus, when non-individuated Ps are incorporated into the verb, the derived construction is ‘generally used to describe activities or events whose patients are neither specific nor countable, e.g. habitual, ongoing, or projected activities; those done by several people together; or those directed at non-specific [unspecified] part of a mass’ (Mithun : ). This is why the incorporated-noun-verb compound tends to denote the name of institutionalized activity or state. Compare () and ().

4 This juxtaposition of the noun and the verb is also known as noun stripping (Miner , ).

() John was out deer-hunting.
() John hunted deer.

In some languages, noun incorporation further plays an important role in the management of information distribution. Thus, it is also ‘used to background known [old] or incidental information within portions of discourse’ (Mithun : ). This type of function is associated typically with polysynthetic languages, wherein the verb carries most of the information in conjunction with obligatory pronominal affixes. Separate nominals will ‘sidetrack the attention of the listener’ when they do not represent informationally significant new participants (Mithun : ). In such cases, noun incorporation is called for, with the effect that nominals expressing old or incidental information are incorporated into the verb.

The passive is normally found in languages with the nominative–accusative system, while the antipassive tends to be associated with languages which are organized on the ergative–absolutive basis not just in case alignment but also possibly in other grammatical domains (but see Polinsky a; §..). Noun incorporation, in contrast, does not seem to correlate strongly with any particular type of case alignment system. For instance, Mithun (: ) points out that noun incorporation ‘operates equally in nominative/accusative, ergative/absolutive, and agent/patient [= active–stative] systems’, although Palmer (: ) suggests that it is ‘a feature found mostly with ergative[–absolutive] systems’.

13.2.4 Applicative

So far, we have discussed valency-manipulating operations that give rise to argument reduction. In many languages, it is also possible to do the opposite, that is, to increase the valency of the basic verb by promoting adjunct nominals (e.g. beneficiary, instrumental, locative, comitative) to argument status or to P in particular.
In order to signal such promotion of an adjunct nominal to P, the verb is typically marked by a so-called applicative affix; the construction as a whole is referred to commonly as the applicative construction (see Peterson  for a cross-linguistic survey; also Polinsky b). In Kinyarwanda, for instance, instrumental adjuncts can be promoted optionally to argument or P status. Compare the following two sentences: 

()

Kinyarwanda (Bantoid; Niger-Congo: Rwanda)
a. úmwáalímu a-ra-andik-a íbárúwa n’í-íkárámu
   teacher he-PRES-write-ASP letter with-pen
   ‘The teacher is writing a letter with the pen.’
b. úmwáalímu a-ra-andik-iish-a íbárúwa íkárámu
   teacher he-PRES-write-APPL-ASP letter pen
   ‘The teacher is writing a letter with the pen.’

The instrumental adjunct in (a) is promoted to P in (b), evidence for which can be adduced from the fact that the instrumental nominal, which is coded with the instrumental prefix in (a), appears as a bare nominal (i.e. no case coding) in (b). This promotion of the erstwhile adjunct to P must also be registered on the verb by the applicative suffix -iish, as in (b). Both the old P (aka the basic object) and the newly created P (aka the applied object) in (b) exhibit grammatical properties of P. For instance, both can further be promoted to S in the passive (e.g. () in §..; but see below). This suggests that the valency of the applicative verb in (b) is greater by one argument than that of the basic verb in (a). In (a), there are two arguments (i.e. teacher and letter) and one adjunct (i.e. pen). In (b), in contrast, there are three arguments (i.e. teacher, letter, and pen). Moreover, the argument structure of the verb in (a) specifies the relation between the agent and the product, and the argument structure of the applicative verb in (b) the relation between the agent, the product, and the instrument. To put it differently, (a), without the instrumental phrase n’í-íkárámu, will remain fully grammatical, while (b), with íkárámu deleted, will be rendered ungrammatical.

Promotion of adjunct nominals to Ps is not confined to transitive verbs but also occurs in the context of intransitive verbs, as can be seen in the following examples from Kalkatungu and Indonesian.

() Kalkatungu (Northern Pama-Nyungan; Pama-Nyungan: Australia)
a. kalpin nuu-mi irratyi-thi
   man lie-FUT girl-LOC
   ‘The man will lie with the girl.’
b. kalpin-tu nuu-nti-mi irratyi-Ø
   man-ERG lie-LOC-FUT girl-ABS
   ‘The man will lay the girl.’
() Indonesian (Malayo-Sumbawan; Austronesian: Indonesia)
a. Ali duduk diatas bangku itu
   Ali sit on bench that
   ‘Ali sits on the bench.’
b. Ali men-duduk-i bangku itu
   Ali TR-sit-LOC bench that
   ‘Ali occupies the bench.’

In (b), what is a locative-coded (comitative) nominal in (a) is promoted to P, which is, in fact, indicated by the fact that the verb is marked with the locative suffix -nti, and also by the fact that the S nominal and the locative-coded nominal in (a) appear with the ergative case (i.e. A) and the absolutive case (i.e. P), respectively, in (b). To put it differently, the promotion of the locative(–comitative) adjunct to argument status has resulted in the intransitive verb in (a) changing to a(n applicative) transitive verb, that is, valency increasing. A similar comment can be made of the Indonesian pair of sentences in (). The locative nominal in (a) is preceded by the locative preposition diatas but, when promoted to P in (b), it loses the locative preposition. This promotion, in turn, is reflected on the verb by the locative suffix -i. In addition, the transitive marker men- prefixed to the verb indicates the fact that the applicative verb in (b) has two arguments.

In fact, it is an interesting exercise to find out how applicative constructions are distributed in terms of the transitivity of the verb base—that is, the basic verb that applicative affixes attach themselves to. There are languages that form applicatives on not only transitive but also intransitive verb bases (e.g. Diyari (Central Pama-Nyungan; Pama-Nyungan: Australia), Hakha Lai (Kuki-Chin; Sino-Tibetan: Myanmar), Indonesian, Kalkatungu, Kinyarwanda, Lakhota (Siouan: US), Paumarí (Arauan: Brazil), Swahili (Bantoid; Niger-Congo: Tanzania), Taba (South Halmahera-West New Guinea; Austronesian: Indonesia)). There are also languages that restrict applicatives to transitive verb bases only (e.g. Abkhaz (North-West Caucasian: Georgia), Chaha (Semitic; Afro-Asiatic: Ethiopia), Taiap (Gapun: Papua New Guinea), Tzotzil (Mayan: Mexico)). In contrast, there seem to be few languages forming applicatives on intransitive verb bases only. Indeed, Polinsky’s (b) sample of  languages shows the distribution of applicatives in terms of the transitivity of the verb base, as in Table 13.1.

Table 13.1 Distribution of applicatives in terms of verb base

  Verb base                          Number of languages
  Both transitive and intransitive
  Transitive only
  Intransitive only
  Total

Note that the total number at the bottom of the table comes to  because as many as  languages in Polinsky’s sample do not have applicatives. Incidentally, the two languages restricting the use of applicatives to intransitive verb bases in Table 13.1 are Fijian (Oceanic; Austronesian: Fiji) and Wambaya (Wambayan; Mirndi: Australia). The distribution in Table 13.1, in turn, leads Polinsky (b) to formulate an implicational statement that if a language has applicatives formed from intransitive verb bases, it also tends to form applicatives from transitive verb bases.

In the standard analysis of promotion that has so far been presented in the present section, the following Indonesian sentences illustrate a straightforward promotion of the beneficiary nominal. The beneficiary nominal guru in (a) is promoted to P in (b), whereby it loses its benefactive preposition untuk, and accordingly the verb is coded with the benefactive suffix -kan. Moreover, the promoted beneficiary nominal is placed immediately to the right of the verb in (b), with the erstwhile P pintu moved further away from the verb.

Indonesian (Malayo-Sumbawan; Austronesian: Indonesia) a. Ali mem-buka pintu untuk guru Ali TR-open door for teacher ‘Ali opens the door for the teacher.’ b. Ali mem-buka-kan guru pintu Ali TR-open-BEN teacher door ‘Ali opens the door for the teacher.’

Dryer (), in contrast, treats (b) as the basic clause, and (a) as a manipulated version of (b). His argument is based crucially on the observation that in contrast to the widely known distinction between direct object and indirect object the recipient or beneficiary nominal of the ditransitive clause in many languages behaves grammatically like 

the P nominal of the monotransitive clause (i.e. secundative alignment; for further discussion on P-alignment, including secundative alignment, see §.). For instance, while it agrees with the direct object nominal in the monotransitive clause, the verb agrees with the recipient or the beneficiary nominal, not the direct object nominal, in the ditransitive clause. In Palauan, for example, the verbal suffix -tᶒrir stands for the direct object nominal in the monotransitive clause in (a) but for the recipient nominal in the ditransitive clause in (b).

() Palauan (Austronesian: Palau)
a. a Droteo a cholᶒbᶒdᶒ-tᶒrir a rᶒ-ngalᶒk
   DET Droteo DET hit-PL DET PL-child
   ‘Droteo is going to hit the children.’
b. ak m-il-s-tᶒrir a rᶒ-sᶒchᶒl-ik a hong
   I V-PST-give-PL DET PL-friend-my DET book
   ‘I gave my friends a book.’

Dryer (: –) cites a number of unrelated languages that behave like Palauan in this respect, e.g. Huichol (Corachol; Uto-Aztecan: Mexico), Khasi (Khasian; Austro-Asiatic: India), Lahu (Burmese-Lolo; Sino-Tibetan: China, Thailand, and Myanmar), and Nez Perce (Sahaptian; Penutian: US). In these languages, the recipient or beneficiary nominal (= R) of the ditransitive clause behaves like the direct object (= P) nominal of the monotransitive clause, but these two in turn behave differently from the direct object (= T) nominal of the ditransitive clause. Dryer refers to the grouping of the first two as the primary object and the direct object nominal of the ditransitive clause as the secondary object. This can be schematized as in Figure 13.1.

DITRANSITIVE:      Direct Object (= Secondary Object)   Recipient/Beneficiary (= Primary Object)
MONOTRANSITIVE:    Direct Object (= Primary Object)

Figure 13.1 Primary object vs secondary object based on Dryer ()

In other words, the Indonesian example in (b) contains the sequence of the primary object (i.e. guru) and the secondary object (i.e. pintu) in that order, whereas in the example in (a) the secondary object (i.e. pintu) is promoted to primary object status, displacing the original primary object.

Dryer’s distinction between primary object and secondary object has interesting implications for a general theory of grammatical relations. First, within the traditional distinction between direct object and indirect object, the indirect object nominal indeed is not like the direct object of the monotransitive clause but is treated like an oblique nominal. As has been shown, however, this does not seem to be appropriate for languages such as Palauan, Huichol, and Khasi in that the recipient or beneficiary nominal—which would be treated as indirect object in the standard analysis—does behave like the direct object nominal of the monotransitive clause. Thus, Dryer (: ) points out that this situation is analogous to the theoretical problem which the grammatical relations of ergative and absolutive pose for the traditional subject–object distinction. Second, Dryer (: ) points out that, just as ergativity is not a property of languages but of rules, objectivity also pertains to rules, not to languages (see §.). This means that the traditional direct object–indirect object distinction and the new primary object–secondary object distinction may also apply to different rules of the same language. Dryer refers to this phenomenon as split-objectivity, analogous to split-ergativity. In Southern Tiwa (Kiowa-Tanoan: US), for instance, various rules, e.g. passive, verb agreement, are indeed sensitive to the primary object–secondary object distinction, while noun incorporation is based on the direct object–indirect object distinction, with the effect that only the direct object can incorporate into the verb.
Other languages which Dryer (: –) thinks exhibit split-objectivity include Mohawk (Northern Iroquoian; Iroquoian: Canada and US), Tzotzil (Mayan: Mexico), and Yindjibarndi (Western Pama-Nyungan; Pama-Nyungan: Australia). Promotion of an adjunct nominal to P tends to be associated with the ‘promotee’ acquiring some sense of being more affected. For instance, what is a location in (a) is now effectively reconceptualized as a patient in (b). In other words, the semantic effect of promotion is to emphasize the notion of affectedness. In contrast, promotion involving beneficiary or recipient nominals does not lead to any semantic reinterpretation of these nominals as the patient (also see Peterson : – on other semantic effects of applicatives). The reason for this may be along the lines of Dryer’s (: ) suggestion that the 
primary object–secondary object distinction is based on the discourse/pragmatic function of these grammatical relations: being generally human, definite, or individuated in the sense of Hopper and Thompson (; §..), and often first or second person, the recipient/beneficiary nominal is more topical than the direct object nominal, which is generally non-human, indefinite, or non-individuated and ‘almost invariably third person’. Following Givón (), Dryer (: ) argues, therefore, that the recipient/beneficiary nominal is a ‘secondary clausal topic’, after the subject nominal (i.e. the principal clausal topic). The direct object–indirect object distinction, on the other hand, is more closely related to semantic roles, with direct object corresponding to theme/patient, and indirect object to recipient/beneficiary. Thus, the promotion of the recipient/beneficiary nominal to P gives rise to an increment in topicality of that nominal. In view of this, Dryer (: –) explains why in languages such as Palauan () the recipient/beneficiary nominal takes priority for verb agreement over the patient/theme nominal in the ditransitive clause: the former, due to its inherent properties, is more topical than the latter. This kind of topicality-based explanation is pursued by Peterson (: ch. ), who investigates the relationship between high topicality status and the use of applicatives in Hakha Lai and Wolof; his study reveals a strong correlation between the applied object and high topicality status, albeit more so in Hakha Lai than in Wolof, and especially when the applied object is animate, as opposed to inanimate.

The most common semantic role coded by the applied object in the applicative construction is that of beneficiary.
For instance, Polinsky (b) reports that in over % ( languages) of her  applicative languages the applied object expresses beneficiary (and possibly maleficiary and recipient too) either exclusively or in conjunction with other roles, and in twelve languages the applied object expresses non-beneficiary roles only. The distribution of non-beneficiary roles in the applied object, reported in Polinsky (b), is summarized in Table .. While recognizing a greater range of non-beneficiary roles in his sub-sample of  applicative languages, Peterson (: –) also identifies beneficiary as the role most commonly coded by the applied object (i.e. over %). Moreover, Peterson () explores possible correlations among different semantic roles of the applied object, although he () admits that his correlations must be taken with a great deal of caution because of the small size of his sample. For 


Table 13.2 Non-beneficiary roles of applied objects

Roles                                   Number of languages
Instrumental
Locative
Instrumental & Locative
No other roles (= only Beneficiary)
Total

instance, while there seems to be a statistically significant co-occurrence relationship between instrumental and locative roles, there is a negative co-occurrence relationship between instrumental and beneficiary roles—that is to say, the applied object is more likely to code instrumental role if it does not code beneficiary role than if it does.

Peterson (: ch. ) makes a further attempt to ascertain whether the applicative correlates with other structural properties, including word order, case alignment, and morphological complexity, among others. Once again, the size of his applicative sample is very small, and for that reason his findings must be regarded as tentative. For instance, while word order does not show any particular correlation with the presence or absence of applicatives, there are some interesting correlations between case alignment and applicatives. In particular, the presence of applicatives seems to correlate with ergative–absolutive alignment. This correlation becomes much stronger if the languages of Australia and Africa are removed from the sample.5

5 Peterson’s (: –) reasoning for this is that Australia and Africa are rich in languages with and without ergative–absolutive alignment, respectively, and also in languages without and with applicatives, respectively. Thus, the ubiquitous presence of ergative–absolutive alignment in Australian languages may create the impression that ergative–absolutive alignment is common in languages without applicatives, and the absence of ergative–absolutive alignment in African languages may give rise to the impression that ergative–absolutive alignment is uncommon in languages with applicatives (see §. for typological bias in language sampling).

One theoretical question that has emerged from research on applicatives concerns the grammatical status of the basic object and the applied object. In particular, are the basic and the applied object on a par with each other in terms of object (or P) status? Consider again the Kinyarwanda applicative sentence in (b), in which the basic and the applied object are both bare nominals, each of which can be promoted further to subject (i.e. S) in the passive. Gary and Keenan () take the view that (b) has two direct objects (i.e. two Ps). Perlmutter and Postal (), in contrast, argue that (b) does not have two direct objects, but rather one direct object and one indirect object (i.e. one P). While the question as to which of the two, the basic or the applied object, is really P needs to be answered on a language-by-language basis, the correct answer seems to lie somewhere in between the two opposing positions. For instance, Peterson () provides a cross-linguistic survey of the distribution of direct object (or P) properties in applicative constructions. His tentative conclusion is that when the applied object codes semantic roles other than beneficiary/recipient (i.e. instrumental, locative, comitative, circumstantial), it tends not to display all direct object properties, with the basic object claiming some direct object properties. If the coded semantic role is beneficiary/recipient, however, the tendency is that the applied object possesses direct object properties, with the basic object displaying direct object properties only in a few attested instances.

Lastly, Polinsky (b) identifies Africa (mainly Bantu languages), the western Pacific region (Austronesian languages), and North and Meso-America (Salishan, Mayan, and Uto-Aztecan languages) as three geographical areas where the applicative is commonly attested.

13.2.5 Causative

In common with the applicative, the causative is basically a valency-manipulating operation that adds an argument to the argument structure of the basic verb. However, it does so not by converting an existing adjunct into an argument but by introducing a new argument into the argument structure (see Song , a and b for a cross-linguistic survey of causative constructions; also Dixon ). This newly introduced argument is the causer argument. 
Thus, if the basic verb is a one-place verb, the corresponding causative verb will be a two-place verb (the sole argument of the basic verb and the newly added causer argument). If the basic verb is a two-place verb, the corresponding causative verb will be a three-place verb (the two arguments of the basic verb and the newly added causer argument). In so-called morphological causativization, the introduction of the causer argument is registered on the verb by a so-called causative affix—the output of morphological causativization is known as the morphological causative (verb). This is illustrated 


by the following pairs of Tuvan sentences. For instance, as opposed to the basic verb in (a), the argument structure of the causative verb in (b), with the causative suffix -ur, contains one additional argument, namely the causer argument, ašak. The same comment applies to the remaining basic–causative pairs in () mutatis mutandis. ()

Tuvan (Turkic; Altaic: Russia and Mongolia)
a. ool doŋ-gan
   boy freeze-PST
   ‘The boy froze.’
b. ašak ool-du doŋ-ur-gan
   old.man boy-ACC freeze-CAUS-PST
   ‘The old man made the boy freeze.’
c. ašak ool-du ette-en
   old.man boy-ACC hit-PST
   ‘The old man hit the boy.’
d. Bajïr ašak-ka ool-du ette-t-ken
   Bajïr old.man-DAT boy-ACC hit-CAUS-PST
   ‘Bajïr made the old man hit the boy.’
e. Bajïr ool-ga bižek-ti ber-gen
   Bajïr boy-DAT knife-ACC give-PST
   ‘Bajïr gave the knife to the boy.’
f. ašak Bajïr-dan ool-ga bižek-ti ber-gis-ken
   old.man Bajïr-ABL boy-DAT knife-ACC give-CAUS-PST
   ‘The old man made Bajïr give the knife to the boy.’
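The valency arithmetic these pairs illustrate can be sketched as follows. This is a hypothetical representation (the function name and argument labels are ours, not an analysis of Tuvan itself): causativization simply adds a causer to the basic verb's argument list.

```python
# Minimal sketch (hypothetical representation): morphological causativization
# adds a causer to the basic verb's argument structure, so an n-place basic
# verb yields an (n+1)-place causative verb, as in the Tuvan pairs above.

def causative_arguments(basic_arguments):
    """Argument structure of the causative verb: causer + basic arguments."""
    return ("causer",) + tuple(basic_arguments)

print(causative_arguments(("boy",)))                   # 1-place -> 2-place, cf. (a)/(b)
print(causative_arguments(("old.man", "boy")))         # 2-place -> 3-place, cf. (c)/(d)
print(causative_arguments(("Bajïr", "boy", "knife")))  # 3-place -> 4-place, cf. (e)/(f)
```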

Thus, the causative is a case of argument addition, with the status of the argument(s) of the basic verb remaining intact. This may indeed be true of one-place verbs; morphological causativization only increases the valency of the basic one-place verb from one to two arguments (b). Morphological causativization of two- or three-place verbs gives rise to three or four arguments, as in (d) or (f), respectively. However, morphological causativization of two- or three-place, as opposed to one-place, verbs may result in more arguments than many languages may be able to cope with—regardless of whether verbs are basic or causativized. Indeed, there is a great deal of cross-linguistic variation in morphological causativization of two- or three-place verbs (also see §.). For instance, it has long been known that languages tend to 


apply causative affixes to intransitive verbs more often than to transitive verbs, or to transitive verbs more often than to ditransitive verbs. To put it differently, transitive verbs are harder to causativize morphologically than intransitive verbs, and ditransitive verbs are harder to causativize morphologically than transitive verbs. There are languages that derive causative verbs from basic one-place verbs only. Languages such as Lamang (Biu-Mandara; Afro-Asiatic: Nigeria), Uradhi (Northern Pama-Nyungan; Pama-Nyungan: Australia), Urubu-Kaapor (Tupí-Guaraní; Tupian: Brazil), Moroccan Berber (Berber; Afro-Asiatic: Morocco), and Kayardild (Tangkic: Australia) refrain completely from adding causative affixes to transitive verbs. Languages such as Abkhaz (North-West Caucasian: Georgia) and Basque (isolate: France and Spain) may restrict their causative affixes to intransitive and transitive verbs, with ditransitive verbs failing to undergo morphological causativization. In other words, it is possible to formulate the following implicational statement:

() Scope of morphological causativization
   ditransitive > transitive > intransitive

Thus, if a language causativizes ditransitive verbs morphologically, it also causativizes transitive and intransitive verbs morphologically; if a language causativizes transitive verbs morphologically, it also causativizes intransitive verbs morphologically.

Geographically speaking, morphological causatives are very widespread. In Song’s (b) sample of  languages, only  (.%) lack morphological causatives, and languages without morphological causatives are found in all major geographical areas: Africa, Eurasia, South East Asia–Oceania, Australia–New Guinea, North America, and South America. 
In each major geographical area, however, languages without morphological causatives are too few or scattered to reveal clear geographical patterns, except for possible clusters of such languages in South East Asia and north-western Australia.
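The implicational statement in () lends itself to a simple mechanical check. The sketch below uses hypothetical language profiles and function names of our own devising (not Song's sample): a language obeys the hierarchy just in case the verb classes it causativizes morphologically form a continuous bottom segment of the hierarchy.

```python
# Sketch of a checker for the implicational statement in (), on
# hypothetical data: causativizing a higher class on the hierarchy
# implies causativizing every lower class as well.

HIERARCHY = ["intransitive", "transitive", "ditransitive"]  # low -> high

def obeys_scope_universal(causativizes):
    """True if the causativizable classes form a continuous bottom
    segment of HIERARCHY (no gaps below a causativizable class)."""
    flags = [cls in causativizes for cls in HIERARCHY]
    # Once a False appears going up the hierarchy, no True may follow.
    return all(flags[i] or not any(flags[i + 1:]) for i in range(len(flags)))

print(obeys_scope_universal({"intransitive"}))                                # True
print(obeys_scope_universal({"intransitive", "transitive"}))                  # True
print(obeys_scope_universal({"intransitive", "transitive", "ditransitive"}))  # True
print(obeys_scope_universal({"transitive", "ditransitive"}))                  # False: violates ()
```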

13.3 Change of argument structure without change of valency

In §.., the causative was described basically as a valency-manipulating operation that increases the valency of the basic verb by one argument. However, there are languages that introduce a new 


argument into the argument structure of the causative verb only if one of the arguments of the basic verb is reduced to an adjunct or omitted completely. In other words, morphological causativization of the basic verb in such languages results in no increase in valency; rather, the valency of the causative verb is kept equal to that of the basic verb—in spite of the introduction of the causer argument. In Babungo, for instance, when the causative suffix -s is added to two-place verbs, the P argument of the basic verb must be expressed as an adjunct, which can be omitted without any difficulty. Thus, the P argument in the non-causative sentence in (a), zɔ̃, can either appear as an adjunct, nə̀ zɔ̃, or be omitted in the corresponding causative sentence in (b).

()

Babungo (Bantoid; Niger-Congo: Cameroon)
a. ŋwə́ fèe zɔ̃
   he fear-PFV snake
   ‘He was afraid of a snake.’
b. mə̀ fèsə̀ ŋwə̀ (nə̀ zɔ̃)
   I fear-CAUS-PFV him (with snake)
   ‘I frightened him with a snake.’

In Finnish, it is the causee that appears as an adjunct when transitive verbs are morphologically causativized.6 ()

Finnish (Finnic; Uralic: Finland)
minä rakennutin talo-n muurarei-lla
I build-CAUS house-DO bricklayers-INST
‘I make the bricklayers build the house.’

There also exist some languages that maintain the canonical number of arguments by not coding either the causee or some argument other than the causer, when basic verbs are morphologically causativized. In Qafar, there are two causative suffixes, one exclusively for intransitive verbs and the other for transitive verbs. Many causativized transitive verbs, unlike causativized intransitive ones, do not have the A argument of basic verbs (i.e. the causee argument) specified, as in:

6 Comrie (, b) calls this phenomenon ‘extended demotion’, as his theory, based on the hierarchy of grammatical relations, predicts that the causee nominal in () should assume indirect object, not an oblique relation.




() Qafar (Lowland East Cushitic; Afro-Asiatic: Ethiopia, Eritrea, and Djibouti)
   ’oson ’garca gey-siis-ee-’ni
   they thief find-CAUS-they/PFV-PL
   ‘They caused the thief to be found/they caused someone to find the thief.’

So normally, in Qafar, the total number of arguments per causative verb, regardless of the transitivity of the basic verb, is no more than two.

Songhai (or Sonrai) has a causative suffix -ndi, which can be added to intransitive and transitive verbs. When it is suffixed to ditransitive verbs, either the causee argument or the indirect object of the basic verb has to be left out. This explains why the following sentence involving the three-place verb neere ‘to sell’ is ambiguous.

() Songhai (Nilo-Saharan: Mali)
   garba neere-ndi bari di musa se
   Garba sell-CAUS horse the Musa IO
   ‘Garba had Musa sell the horse’ or ‘Garba had the horse sold to Musa.’

In other words, there is a strict requirement in Songhai that there should be no more than three arguments per causative verb.

Moreover, there are languages that avoid exceeding the maximum number of arguments by drawing on detransitivizing strategies already available in the grammar. In Halkomelem, morphological causativization can take place only if the basic verb is first detransitivized via the antipassive suffix -əm. Thus, (a) is ungrammatical, since the causative suffix -st is added directly to the transitive verb stem, q’wəl-ət-. In contrast, (b) is grammatical, since the transitive verb is antipassivized (i.e. detransitivized) prior to morphological causativization.

() Halkomelem (Central Salish; Salishan: Canada)
   a. *ni cən q’wəl-ət-stəxw kwθə səplíl ʔə ɫə sɫéniʔ
      AUX SG bake-TR-CAUS DET bread OBL DET woman
      ‘I had the woman bake the bread.’
   b. ni cən q’wə́l-əm-stəxw θə sɫéniʔ ʔə kwθə səplíl
      AUX SG bake-ANTIP-CAUS DET woman OBL DET bread
      ‘I had the woman bake the bread.’

In Blackfoot (Algonquian; Algic: Canada and US), there are two causative suffixes, namely -ippi and -atti, which apply to both intransitive and 


transitive verbs. However, when these suffixes are added to them, transitive verbs must first be detransitivized via the suffix -a:ki. The net effect of this detransitivization is that the P argument of the basic verb is reduced to adjunct status and thus left out of the argument structure of the verb concerned. Note that in this language the verb internally records its arguments; elements not so recorded are by definition adjuncts or optional elements. After detransitivization of the two-place verb, only the erstwhile A argument is registered in the verb, not the erstwhile P argument. Only after that is the causative suffix added to the now detransitivized verb, whereby the causer nominal can also be registered in the verb as an argument. Thus, only the causer and causee nominals are recorded as arguments in the verb. In Blackfoot, therefore, the total number of arguments per causative verb never exceeds two, owing to the detransitivization. What this means is that the number of arguments in causatives is exactly the same as in basic transitive sentences.

Similarly, in Bandjalang (South-Eastern Pama-Nyungan; Pama-Nyungan: Australia), the causative suffix -ma can be added directly to intransitive verbs. However, when they undergo morphological causativization, transitive verbs must first be detransitivized via the antipassive suffix -li.

In Southern Tiwa, the causative verb registers only two arguments, that is, the causer and the causee nominals. When the causative suffix -’am is added to transitive verbs, the P argument of the basic verb is obligatorily incorporated into the verb complex (see §.. for noun incorporation).

()

Southern Tiwa (Kiowa-Tanoan: US)
a. I-’u’u-kur-’am-ban
   SG.SBJ:SG.OBJ-baby-hold-CAUS-PST
   ‘I made you hold the baby.’
b. *’U’ude i-kur-’am-ban
   baby SG.SBJ:SG.OBJ-hold-CAUS-PST
   ‘I made you hold the baby.’

The sentence without incorporation of the P argument of the basic verb, as in (b), is ungrammatical. In Southern Tiwa, arguments must be registered in the verb complex, although they may optionally appear as full nominals as well. Thus, ’u’ude ‘baby’ in (b) is intended to be an argument. But as the causative verb has already reached its full capacity for registering arguments (i.e. the causer and causee nominals), thereby being unable to record the full nominal ’u’ude, the 


sentence is ruled out as ungrammatical. In contrast, in (a) the same nominal is incorporated into the verb complex, thus effectively making the basic verb intransitive. Now, the sole argument of the detransitivized verb and the causer argument introduced by the causative suffix -’am can be registered in the verb without any difficulty.

What the foregoing data suggest is that, regardless of whether the verb is basic or morphologically causativized, languages may allow only a certain number of arguments per verb (i.e. two or three) (for a detailed discussion of argument density control, see Song : ch. ). The net effect of the different grammatical strategies pressed into service to maintain the canonical number of arguments is no change of valency in spite of the introduction of the causer argument, although there may be a change of argument structure between the basic verb and the corresponding causative verb: addition of the causer argument offset by demotion to adjunct status or omission of one of the arguments of the basic verb.

Another instance of a change of argument structure with no change of valency comes from noun incorporation (see §..). In some languages with noun incorporation, when one of the arguments is incorporated into the verb, the position vacated by that argument may be usurped by an adjunct, e.g. instrument, location, or even possessor (Mithun : –). In Tupinambá, the possessor can be promoted to P when the possessed noun is incorporated into the verb. Compare (a) and (b).

() Tupinambá (Tupí-Guaraní; Tupian: Brazil)
   a. s-oβá a-yos-éy
      his-face I-it-wash
      ‘I washed his face.’
   b. a-s-oβá-éy
      I-him-face-wash
      ‘I face-washed him.’

In (b), the possessor, as P, is represented on the verb by the prefix s-, whereas in (a), without noun incorporation, the P argument is s-oβá ‘his face’, and it is thus registered on the verb by the prefix yos-. In Yucatec, the argument position vacated by the incorporated P argument (i.e. 
če’ ‘tree’) is taken up by the locative phrase (i.e. in-kool ‘my cornfield’), the newly acquired argument status of which is indicated by its loss of the locative preposition (i.e. ičil). 


()

Yucatec (Mayan: Mexico)
a. k-in-č’ak-Ø-k če’ ičil in-kool
   INCMP-I-chop-it-IPFV tree in my-cornfield
   ‘I chop the tree in my cornfield.’
b. k-in-č’ak-če’-t-ik in-kool
   INCMP-I-chop-tree-TR-IPFV my-cornfield
   ‘I clear my cornfield.’
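The Yucatec pattern can be modelled schematically as follows. This is a toy sketch with hypothetical labels and function names, not a claim about Yucatec morphology: the P argument fuses with the verb stem, and a former locative adjunct moves into the vacated argument slot, so valency is preserved.

```python
# Toy model (hypothetical labels): Yucatec-style noun incorporation.
# The P argument is incorporated into the verb; the vacated argument
# slot is taken over by an erstwhile adjunct, which drops its preposition.

def incorporate_p(verb, arguments, adjuncts):
    """Fuse the last argument (P) into the verb; promote the first adjunct."""
    p = arguments.pop()                 # P leaves the argument structure ...
    verb = f"{verb}-{p}"                # ... and fuses with the verb stem
    if adjuncts:
        _prep, noun = adjuncts.pop(0)   # promoted adjunct loses its preposition
        arguments.append(noun)
    return verb, arguments

print(incorporate_p("chop", ["I", "tree"], [("in", "my-cornfield")]))
# -> ('chop-tree', ['I', 'my-cornfield'])
```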

In some languages with noun incorporation, a relatively general noun stem is incorporated into the verb, and at the same time an additional, more specific, external NP is used to ‘identif[y] the argument implied by the [incorporated noun]’ (Mithun : ). In Gunwinggu, for instance, a general noun stem dulg ‘tree’ is incorporated into the verb, with a more specific external nominal mangaralaljmayn ‘cashew nut’ occurring as an argument. ()

Gunwinggu (Marne; Gunwinyguan: Australia)
. . . bene-dulg-naŋ mangaralaljmayn
      they.two-tree-saw cashew.nut
‘ . . . They saw a cashew tree.’

Thus, noun incorporation does not necessarily involve argument reduction. When one of the arguments of the basic verb gives up its argument status as a consequence of being incorporated into the verb, its ‘vacated’ argument position may be taken up by an adjunct or some external element, as it were.
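The argument-density generalization running through this section can be summarized in a small sketch. The cap and labels below are hypothetical, in the spirit of the Babungo and Qafar facts rather than a description of any one language: causativization adds a causer, and anything beyond the language's cap on arguments per verb is demoted to adjunct status, leaving valency unchanged.

```python
# Illustrative sketch (hypothetical cap and labels): a language that allows
# at most two arguments per verb, basic or causativized. Adding a causer
# pushes excess arguments out of the argument structure into adjunct status,
# so the causative's valency matches that of a basic transitive verb.

MAX_ARGS = 2

def causativize(arguments):
    """Add a causer; demote arguments beyond the cap to adjunct status."""
    args = ["causer"] + list(arguments)
    adjuncts = []
    while len(args) > MAX_ARGS:
        adjuncts.append(args.pop())   # lowest-ranked argument demoted first
    return args, adjuncts

print(causativize(["S"]))        # one-place verb: causer added, nothing demoted
print(causativize(["A", "P"]))   # two-place verb: P ends up as an adjunct
```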

13.4 Concluding remarks

We have discussed how argument structure can be manipulated with or without a change of valency. Argument structure can be manipulated so that the total number of arguments can be reduced or increased (i.e. a change of valency). Argument reduction is illustrated by passives, antipassives, and noun incorporation, and argument addition by applicatives and causatives. In passives and antipassives, one argument of the basic verb is reduced to an adjunct or left unexpressed. In noun incorporation, one argument of the basic verb is incorporated into the verb itself to the effect that it loses argument status. Applicatives involve promotion of an adjunct (e.g. instrumental, locative) to argument status, 


while causatives add a new argument (i.e. the causer) to the argument structure of the basic verb.

Moreover, argument structure can be restructured without giving rise to a change of valency. This kind of argument-structure manipulation is attested in causatives and noun incorporation. In the case of causatives, one of the arguments of the basic verb is reduced to an adjunct or removed completely as a new argument (i.e. the causer) is introduced into the argument structure by causative morphology. In the case of noun incorporation, more or less the same happens in that the argument position vacated by the incorporated noun may be taken up by an adjunct. The outcome of this kind of valency manipulation is that the same number of arguments as that of the basic verb is strictly maintained.

Lastly, it is worth pointing to Wichmann (), who finds a statistically significant, albeit not particularly strong, correlation between argument-reducing operations such as passives and antipassives as well as a negative correlation between the argument-increasing operation of causatives and other (argument-decreasing) operations. Wichmann’s work is based on subsets of the data assembled in the Leipzig Valency Classes Project (Hartmann et al. ; Malchukov and Comrie ), which is designed to study coding patterns and alternations in thirty-five or thirty-six languages with respect to eighty verb meanings, ranging from BREAK to BE A HUNTER, e.g. how the argument(s) is(are) coded for each verb meaning or whether a causative can or cannot be formed on each verb meaning. Wichmann’s observations regarding the positive or negative correlations between the various valency-manipulating operations are not totally unexpected, but it goes without saying that they need to be tested further against a far greater number of languages.

Study questions

1. Consider the so-called Locative Alternation phenomenon in English, as exemplified in (1) and (2). In (1), the nominal the wall has a locative role, as coded with the locative preposition on. In (2), the same nominal functions as P, as indicated by its lack of coding and immediate postverbal position. The P status of the nominal in question is further illustrated by (3), in which it appears as S in the passive.

(1) The child sprayed paint on the wall.
(2) The child sprayed the wall with paint.
(3) The wall was sprayed with paint by the child.




With the locative nominal promoted to P, (2) looks very much like the applicative version of (1). However, linguists generally do not regard (2) as an instance of the applicative construction. Explain why you agree or do not agree with this. (Note that a similar alternation occurs between (1) and (2) for the instrumental nominal paint.)

2. Nomatsiguenga (Pre-Andine Arawakan; Arawakan: Peru) has a well-developed applicative construction, covering a wide range of semantic roles for the applied object. Identify the semantic roles of the applied object in the following examples.

() pablo i-pë-ne-ri ariberito tiapa singi
   Paul he-give-APPL-M Albert chicken corn
   ‘Paul gave the chickens corn for Albert.’

() pablo i-hoka-te-ta-be-ka-ri i-gotsirote ariberito
   Paul he-throw-APPL-EP-FRUST-REFL-M his-knife Albert
   ‘Paul threw his knife toward Albert.’

() pablo i-kenga-mo-ta-h-i-ri ariberito
   Paul he-narrate-APPL-EP-again-TENSE-M Albert
   ‘Paul narrated it again in Albert’s presence.’

() juan i-komota-ka-ke-ri pablo otsegoho
   John he-dam.stream-APPL-IND-TENSE-M Paul river-branch
   ‘John dammed the river branch with Paul.’

() pablo i-kisa-biri-k-e-ri juan
   Paul he-be.angry-APPL-IND-TENSE-M John
   ‘Paul was angry on account of John.’

() ni-ganta-si-t-ë-ri hompiki
   I-send-APPL-EP-TENSE-M pills
   ‘I sent him for pills.’

() ora pi-nets-an-t-ma-ri hitatsia negativo
   that you-look.at-APPL-EP-TENSE-FUT.REFL-M name negative
   ‘Look at it (the sun during an eclipse) with that which is called a negative.’

3. The pragmatic function of the passive is said to be to ‘topicalize’ or ‘foreground’ a particular nominal in the clause. For instance, the passive in (b)—corresponding to the active clause in (a)—draws attention to the P of the active clause by turning it into the sole argument (i.e. S) and by demoting the A of the active clause to an adjunct.

()

a. The woman kicked the burglar.
b. The burglar was kicked (by the woman).

As the reader will recall from §.., Siewierska () reports that in her sample of  languages, .% ( languages) do not have passives. Discuss some of the strategies—as mentioned in Siewierska ()—that




languages without passives may draw upon in order to ‘topicalize’ a non-subject nominal.

4. Polinsky (b) suggests that the dearth of applicatives in Eurasia may be related to the widespread presence in this macroarea of rich case-coding of nominals, as opposed to verbs (see Peterson :  for a similar but more general correlation between the applicative and the locus of case coding). Evaluate Polinsky’s suggestion with an eye to explaining why the applicative does not seem to be ‘compatible’ with rich case-coding of nominals.

5. Morphological causativization involves the addition of a causative affix to a basic verb, as we have seen in §... There are languages that rely on other unusual means of causativization that do not involve the addition of a causative affix. For instance, consider the following basic and causative verb pairs in Lahu (Burmese-Lolo; Sino-Tibetan: China, Thailand, and Myanmar) and discuss the different ways in which the causative verbs are produced from, or related to, the basic verbs.

Verb Bases               Causative Verbs
dɔ̀ ‘drink’               tɔ ‘give to drink’
dὲ ‘come to rest’        tε ‘put down’
mɔ̀ ‘see’                 mɔ ‘show’
câ ‘eat’                 cā ‘feed’
nɔˆ ‘be awake’           nɔˉ ‘awaken, rouse’
dû ‘dig’                 tū ‘bury (as a corpse)’
lὲ? ‘lick, eat’          lέ ‘feed’
vә̀? ‘wear’               fi­́ ‘clothe, dress someone’
và? ‘hide (oneself)’     fá ‘hide (something)’
tɔ̀? ‘catch fire’         tú ‘set fire to, kindle’
yi­̀? ‘sleep’              í ‘put to sleep’

Further reading

Keenan, E. L. and Dryer, M. S. (). ‘Passive in the World’s Languages’, in T. Shopen (ed.), Language Typology and Syntactic Description, nd edn. Cambridge: Cambridge University Press, –.
Kulikov, L. (). ‘Voice Typology’, in J. J. Song (ed.), The Oxford Handbook of Linguistic Typology. Oxford: Oxford University Press, –.
Malchukov, A. and Comrie, B. (eds.) (). Valency Classes in the World’s Languages, vol. : A Comparative Handbook. Berlin: Mouton de Gruyter.
Mithun, M. (). ‘The Evolution of Noun Incorporation’, Language : –.
Peterson, D. A. (). Applicative Constructions. Oxford: Oxford University Press.
Polinsky, M. (a). ‘Antipassive Constructions’, in M. S. Dryer and M. Haspelmath (eds.), The World Atlas of Language Structures Online. Leipzig: Max Planck




Institute for Evolutionary Anthropology. Available at http://wals.info/chapter/.
Polinsky, M. (b). ‘Applicative Constructions’, in M. S. Dryer and M. Haspelmath (eds.), The World Atlas of Language Structures Online. Leipzig: Max Planck Institute for Evolutionary Anthropology. Available at http://wals.info/chapter/.
Siewierska, A. (). Passive: A Comparative Linguistic Analysis. London: Croom Helm.
Siewierska, A. (). ‘Passive Constructions’, in M. S. Dryer and M. Haspelmath (eds.), The World Atlas of Language Structures Online. Leipzig: Max Planck Institute for Evolutionary Anthropology. Available at http://wals.info/chapter/.
Song, J. J. (). Causatives and Causation: A Universal-Typological Perspective. Harlow: Addison Wesley Longman.




14 Person marking

14.1 Introduction
14.2 Morphological form: variation and distribution
14.3 Paradigmatic structure: towards a typology
14.4 Concluding remarks



14.1 Introduction

Imagine an ordinary situation in which the person that is talking (i.e. the speaker) informs the person that s/he is talking to (i.e. the addressee) which of the two, the speaker or the addressee, should do X (i.e. I must do it vs You must do it). For this and many similar situations, languages must have a consistent means of referring to the speaker and the addressee unequivocally. Otherwise, it will be difficult, if not impossible, to indicate whether it is the speaker or the addressee that should be the one to do X; in the absence of linguistic means, the speaker may have to resort to a non-linguistic or kinesic means to refer to herself/himself or to the addressee, as typically in sign languages, e.g. by pointing a finger at herself/himself or the addressee.

Languages indeed draw on linguistic expressions specialized for encoding the speaker and the addressee, known as the speech act participants. The speech-act-participant expressions (e.g. I and you) are specialized in the sense that they must consistently refer to the discourse roles of the speech act participants and nothing else. In contrast, nominals such as Mummy or Sarah normally refer to non-speech-act-participant or third-party entities, although they can potentially refer to the speaker or the addressee, as in special registers such as motherese, e.g. a woman telling her child Mummy loves Sarah, where Mummy refers to the


speaker and Sarah to the addressee. There is, however, nothing in nominals such as Mummy or Sarah that indicates that they must refer to the speech act participants. This can be demonstrated easily as follows. Imagine, this time, that the child says Mummy loves Sarah to the mother. Now, Sarah refers to the speaker, and Mummy to the addressee. In other words, Mummy and Sarah do not consistently refer to the speaker or the addressee. In contrast, the speech-act-participant expressions I and you, as in I love you, must always refer to the discourse role of the speaker and that of the addressee, respectively. Thus, if the mother says I love you to the child, I refers to the speaker (= the mother), and you to the addressee (= the child); if the child says I love you to the mother, I still refers to the speaker, and you to the addressee, although this time the child is the speaker and the mother the addressee.

Traditionally, the speech-act-participant expressions (e.g. I and you in English) are grouped together with similarly specialized third-party referring expressions (e.g. she, he, and they in English) under a single grammatical category of personal pronouns (but see below). There is one important difference between these two types of specialized expression, however. The interpretation of the speech-act-participant expressions depends on who makes an utterance to whom. In other words, the speech-act-participant expressions are interpreted on the basis of the non-linguistic context of the utterance that contains them (i.e. who is assuming the role of the speaker or the addressee?). In contrast, the interpretation of specialized third-party expressions does not depend on the non-linguistic context of the utterance; they should instead be interpreted on the basis of the linguistic context of the utterance in which they appear, e.g. Sara loves her mother, where her refers to Sara, which appears in the same utterance as her does. 
Technically speaking, therefore, the speech-act-participant expressions are deictic while the third-party referring expressions are anaphoric. This difference notwithstanding, the specialized third-party referring expressions are subsumed under the same grammatical category as the speech-act-participant expressions, because, as will be shown in what follows, they together constitute what is known as a person-marking paradigm. When talking about these speech-act-participant and third-party referring expressions under one grammatical category, we refer to the speaker as the first person, the addressee as the second person, and the third party as the third person. This three-way distinction makes much sense, because there should be three different types of
person, i.e. the person who is talking (= the speaker), the person who is being talked to (= the addressee), and the rest (= the third party). These three person concepts comprise the grammatical category of person. The first-, second-, and third-person referring expressions, in turn, are known as person markers or, collectively, person marking.

One prominent characteristic of person markers is that they tend to be ‘light’. That is to say, person marking is not lengthy or heavy, e.g. consisting of multiple words.1 Otherwise, linguistic interaction could be rather cumbersome or time-consuming. Imagine a situation where the speaker has to refer to himself or herself by using a lengthy expression such as the person who is speaking, or to the addressee by using a similarly lengthy expression such as the person who is being addressed. These hypothetical person markers may perhaps work once or twice, but using them over and over again would be extremely tedious and inefficient. Thus, it makes sense that person markers need to be ‘light’ for the sake of efficient communication (see Zipf's (: ) classic dictum: ‘High frequency is the cause of small magnitude’).

As noted earlier, personal pronouns such as I, you, she, he, and they in English are typical examples of person marking. However, there are other types of person marking that make the same kind of person distinction. In many languages, nominals with certain grammatical properties agree with other elements in the sentence, typically verbs (see §.). In English, for instance, in addition to the personal pronouns, third-person singular subject nominals agree with verbs in the present tense, as in (a), where this agreement is signalled by the suffix -s on verbs. ()

English (Germanic; Indo-European: UK)
a. She/He/It loves the children.
b. I/We/You/They love the children.

The agreement marker -s indicates that the subject nominal is third person, as opposed to first or second person (and also that the sentence is in the present tense). In other words, what the agreement marker in question does is make a distinction between third person (= non-speech-act participants) and first or second person (= speech act

1 There are languages that may have lengthy person markers, but they are a very small minority. In Thai, for instance, there are special person markers that consist of multiple words, e.g. tȃajlfàa’laʔɔɔŋʼthúliiʼphrábàad’, literally meaning ‘the one who is holding the speaker under the dust of his foot’ and used only when addressing the king.
participants). Thus, agreement markers, together with personal pronouns, fall under the purview of person marking. For this reason, we will adopt the more generic or inclusive term ‘person markers’ in preference to the term ‘personal pronouns’. One other reason for choosing the generic term person markers is that in some languages, it is not clear whether what may look like personal pronouns are really pronouns. In such languages, the so-called pronouns tend to consist of an invariant generic pronominal root and person-marking affixes, the former historically derived from a lexical item meaning ‘person’, ‘body’, ‘self’, ‘to be’, or ‘to exist’. In Mbay, for instance, the emphatic form of the verb -ìī ‘to be’ combines with person prefixes to produce what may otherwise look like personal pronouns.

() Mbay (Bongo-Bagirmi; Central Sudanic: Chad)
a. J-ìī kòoń
   1PL-be only
   ‘It’s only us.’
b. M-ìī-ň àí
   1SG-be-VENTIVE NEG
   ‘I don’t have it.’ (Lit. I am not with it.)

Particularly noteworthy is the fact that the ventive suffix, which is said to apply to verbs only, is attached to the generic pronominal root in (b). This suggests that the person markers in Mbay are still very much verbal. To wit, while personal pronouns may be typical examples of person marking, there are other equally important types of person marker that need to be taken into consideration.

The universal need for person marking notwithstanding, there seems to be a wide range of cross-linguistic variation in person marking in the world’s languages (Cysouw ; Siewierska ). While there are three different person concepts in the person category, this does not mean that languages have only three person markers (i.e. one person marker for each person). Languages may actually have more or fewer than three person markers. For instance, Madurese (Malayo-Sumbawan; Austronesian: Indonesia) is said to have only two person markers, i.e. sengkoq ‘I/me’ and tang ‘my’,2 while Fijian (Oceanic; Austronesian: Fiji)

2 For the second and third persons, words meaning ‘metaphysical body/spirit’ and ‘sole/alone’ are used in conjunction with a definite marker (Siewierska : ).
is reported to have as many as  person markers (Siewierska : ). The majority of the world’s languages fall somewhere between these two ‘extreme’ examples in terms of the number of person markers, some with more or fewer person markers than others.

There are two good reasons for this cross-linguistic variation. First, many languages have more than one set of person markers, and these different sets of person markers may differ in terms of morphological form. Broadly speaking, there are two different formal types of person marker: independent and dependent. Person markers are independent if they are separate words that may be stressed. Otherwise, person markers are dependent in the sense that they are somehow morphologically dependent on other words and cannot be stressed, e.g. affixes and clitics. Siewierska (: ) makes the observation that in the majority of the languages surveyed in her study, dependent person markers have corresponding independent person markers, but the converse is definitely not the case. Spanish, for instance, provides an example of a language with both independent and dependent person markers. This language has independent person markers (e.g. yo vs tú vs él/ella) and also inflectional endings on verbs, i.e. agreement markers (e.g. hablo vs hablas vs habla), as in: ()

Spanish (Romance; Indo-European: Spain)
a. Yo hablo inglés
   1SG speak.1SG.PRES English
   ‘I speak English.’
b. Tú hablas inglés
   2SG.PLAIN speak.2SG.PLAIN.PRES English
   ‘You speak English.’
c. Él/Ella habla inglés
   3SG.M/3SG.F speak.3SG.PRES English
   ‘He/She speaks English.’
d. Ellos/Ellas hablan inglés
   3PL.M/3PL.F speak.3PL.PRES English
   ‘They speak English.’

Marúbo also has independent person markers (e.g. ‘wan for third person singular) as well as dependent person markers (e.g. an= for third person singular), but in this language the dependent markers are
clitics, which can attach either to the beginning or the end of the verb phrase, as in ().

() Marúbo (Panoan; Ge-Pano-Carib: Brazil)
a. ‘Wan-tun an=‘pani-Ø tu’raš-a-ka
   3SG-ERG 3SG=net-ABS tear-AUX-IMM.PST
   ‘He has torn the net.’
b. Ɨa-Ø ɨn=wi’ša-i-ki
   1SG-ABS 1SG=write-PRES
   ‘I am writing.’
c. Ɨa-n ‘matu-Ø ɨn=ʃu’tun-ai
   1SG-ERG 2PL-ABS 1SG=push-IMM.PST
   ‘I have pushed you.’

The second reason for the wide range of cross-linguistic variation in person marking is that the grammatical category of person typically interacts with number, and possibly also with other grammatical categories such as gender and case. The examples from Spanish and Marúbo, in () and (), respectively, demonstrate the interaction between person and number in person marking. For instance, the independent person marker yo (1SG) in (a) contrasts with nosotros/nosotras (1PL.M/1PL.F) in () in terms of number, that is, singular vs plural. (Note that the first person plural independent markers also bear a gender distinction.) A similar comment can also be made of the dependent person markers in Spanish. The first person singular form of the verb in (a), hablo, contrasts with the first person plural form of the verb, namely hablamos, as in ().

() Spanish (Romance; Indo-European: Spain)
Nosotros/Nosotras hablamos inglés
1PL.M/1PL.F speak.1PL.PRES English
‘We speak English.’

The presence of number distinctions in person marking was once thought to be a universal fact about the world’s languages (e.g. Greenberg b: ), and while this may indeed be the case in the majority of languages, there are languages that lack number distinctions altogether in person marking. For instance, Múra Pirahã has both independent and dependent (specifically clitic) person markers, as enumerated in ().
()

Múra Pirahã (Mura: Brazil) Independent ti First Person Second Person gíxai Third Person hiapióxio

Dependent ti gí/gíxa hi/xi/xís

In Múra Pirahã, person markers do not show number distinctions and number must instead be expressed only indirectly, that is, by conjoining number-neutral person markers, as in (). ()

Múra Pirahã (Mura: Brazil)
ti gíxai pío ahápií
1 2 also go
‘You and I will go (i.e. we will go).’
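The logic of number-neutral person markers can be sketched in a toy model (not from the textbook; the representation of referents as sets of discourse roles is an assumption made purely for illustration):

```python
# Toy sketch: in a language like Múra Pirahã, person markers are
# number-neutral, and plural reference arises compositionally, by
# conjoining markers. Referents are modelled as sets of discourse roles.

REFERENTS = {"ti": {"speaker"}, "gíxai": {"addressee"}}

def conjoin(*markers):
    """Union of the referent sets of conjoined number-neutral markers."""
    out = set()
    for marker in markers:
        out |= REFERENTS[marker]
    return out

# 'ti' alone refers just to the speaker:
assert conjoin("ti") == {"speaker"}

# 'ti gíxai' ("I and you") yields speaker plus addressee, i.e. the
# meaning a dedicated (inclusive) 'we' would express in other languages:
assert conjoin("ti", "gíxai") == {"speaker", "addressee"}
```

The sketch shows only that no marker in the lexicon itself encodes plurality; plural reference emerges from conjunction.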

Languages such as Múra Pirahã, however, are more the exception than the norm, and the absence of number distinctions in dependent person markers tends to be complemented, as it were, by the presence of number distinctions in independent person markers (Siewierska : ).

The way person, number, and other grammatical categories do or do not interact with each other is typically studied in the context of paradigmatic structure. A paradigm is a set of linguistic expressions that can potentially appear in the same syntactic or syntagmatic position in the structure of a language. For instance, the English subject person markers, i.e. I, we, you (singular), you (plural), he, she, it, and they, constitute one person paradigm; any one of these person markers can potentially occupy the subject position in a clause. The English non-subject or object person markers, i.e. me, us, you (singular), you (plural), him, her, it, and them, constitute another person paradigm; any one of these person markers can occupy the object position in a clause.

Languages may vary in terms of how person/number distinctions are made within person paradigms. In other words, languages may structure person paradigms differently. The extent of this variation can perhaps be best appreciated by looking at two extremely rare examples, both from English. For instance, in the English independent person markers the number distinction is neutralized for the second person (i.e. you for both singular and plural), whereas the number distinction is maintained for the first and third person (i.e. I vs we, he/she/it vs they). That this is a most unusual situation among the world’s languages is perhaps reinforced by the fact that certain non-standard varieties of
English make use of such forms as youse or you-all, with the effect that the number distinction is maintained for all three persons. Similarly, in the English dependent person marking, only the third person singular, albeit in the present tense, is indicated by the agreement suffix -s on the verb, whereas there is no explicit (i.e. zero) agreement for all the other persons, including the third person plural. This also is an extremely rare situation in that there are no other examples in Cysouw’s () sample of over  languages that display the same paradigmatic structure.

The foregoing two reasons for the cross-linguistic variation in person marking constitute the basis on which the rest of the chapter is organized. First, we will consider the different types of morphological form that person markers assume in the world’s languages. In addition, we will examine how independent and dependent person markers, individually and together, are distributed according to different types of case alignment (see Chapter ) as well as grammatical relations (see Chapter ). Second, we will survey some of the paradigmatic structures identified in Cysouw (). In Cysouw’s (: ) large-scale study, as many as  different paradigmatic structures are identified and described. In view of such a high level of variation, it is very important to understand the frequencies of the different paradigmatic structures and why their frequencies are the way they are. From this we will be able to draw interesting generalizations about person marking. In the interests of space, we will focus on person and number, and their interaction in paradigmatic structure. The reason why number is chosen over other grammatical categories is that number is far more frequently reflected in person marking in the world’s languages. Put differently, if there is something that person interacts with, it is number.
In contrast, gender, for obvious reasons, is restricted largely to third person.3 The speaker and the addressee can easily determine each other’s gender (hence there is no point in coding gender), whereas the gender of third-party entities may not always be obvious from the non-linguistic context of linguistic communication. Similar comments can be made of case in person marking. Person interacts with case in case-coding languages only. There is no point in talking about case in person marking in non-case-coding languages.

3 For instance, Siewierska (: ) notes that among the languages in her sample that have gender in their independent person markers, the overwhelming majority have gender in the third person, far fewer in the second person, and only three in the first person.

14.2 Morphological form: variation and distribution

Person markers may take different forms, whether independent or dependent. We have already mentioned the main morphological difference between independent and dependent person forms in the previous section—the presence or absence of morphological independence in conjunction with the ability to take primary stress. Unlike independent person markers, however, dependent person markers divide further into four types: weak, clitic, bound, and zero (Siewierska : ). We have already exemplified bound and clitic person markers in the previous section (e.g. () and (), respectively). These two types both have to attach themselves to other host elements, the main difference between them being that bound person markers must attach themselves phonologically to roots or stems, and clitic person markers to phrases or designated syntactic positions.

Weak person markers are morphologically in between independent and clitic person markers, as it were. They are less morphologically independent than independent person markers, and less morphologically dependent than clitic person markers. In other words, unlike clitic person markers, weak person markers do not attach themselves to host phrases, but unlike independent person markers, they may be subject to certain distributional constraints. For instance, Woleaian has weak in addition to independent person markers. The independent person markers (e.g. gaang, iir) may be used in equational clauses or in conjunction with the immediately following focus marker mele, but the weak person markers (e.g. i, re) cannot. ()

Woleaian (Oceanic; Austronesian: Micronesia)
a. Gaang (*i) Tony
   1SG (*1SG) Tony
   ‘I am Tony.’
b. Iir mele re mwali
   3PL FOC 3PL hid
   ‘They were the ones who hid.’

There are no universal defining characteristics of weak person markers, and they thus probably have to be distinguished from independent ones in a language-specific manner. Finally, zero person markers, in Siewierska’s (: –) sense, have ‘absolutely no phonological form corresponding to the person
interpretation’. For instance, in Korean, zero forms, in lieu of explicit person markers, are commonly attested in naturally occurring speech, as in ().

() Korean (isolate: Korea)
A: kiho-ka ssu-te-n khomphyuthe-ka eti ka-ss-e?
   Keeho-NOM use-RETRO-REL computer-NOM where go-PST-Q
   ‘Where is the computer that Keeho used?’
B: Ø Ø peli-ess-e
   Ø Ø throw.away-PST-IND
   ‘(He, I/We) threw (it) away.’

The subject zero form (Ø) could be interpreted to refer to the speaker or to Keeho (or to someone else for that matter), depending on the context of the utterance made by A and other (shared) background information (e.g. Keeho or the speaker has a tendency to throw things away), while the object zero form (Ø) is interpreted to refer to the computer in question (again on the basis of the context of the utterance made by A). Note that the linguistic interaction in () is not representative of any special register whatsoever, e.g. telegraphic or elliptical speech; it is as natural as it can get. To wit, the non-use of explicit person-marking expressions, whether pronominal or not, is the norm rather than the exception in naturally occurring speech in Korean.

Based on a variety sample of  languages, Siewierska () sets out to ascertain whether the different morphological types of person markers have any correlations with other structural properties. In the remainder of this section, we will examine two such correlations, one with case alignments and the other with grammatical relations.

14.2.1 Person marking and case alignment

First, let’s look at Siewierska’s (: ) data on the distribution of independent and overt dependent person markers (i.e. excluding zero person markers) in relation to different case alignment systems, as presented in Table 14.1. A general discussion of various theories attempting to account for the frequencies of the different alignment systems has already been provided in §., §., and §., which the reader may wish to refer to.

Table 14.1 Case alignment of independent and overt dependent person markers (rows: Neutral, Nom–acc, Erg–abs, Active–stative, Tripartite, Hierarchical, Split*; columns: number and percentage of sample languages, for independent forms and for dependent forms)

Note: Erg–abs = Ergative–absolutive; Nom–acc = Nominative–accusative; *any form of split alignment (e.g. nom–acc/erg–abs, active–stative/tripartite) other than that involving neutral and non-neutral, which have been included under the relevant non-neutral alignments

A number of interesting patterns emerge from Table 14.1. Neutral alignment is attested in independent person markers more than twice as frequently as in dependent person markers. In well over % of the sample languages, dependent person markers operate within nominative–accusative alignment, as opposed to just under % in the case of independent person markers. There is a clear preference (read: more than chance frequency) for nominative–accusative alignment in dependent markers. Put differently, not only languages with neutral alignment for independent person markers but also languages with non-neutral alignments for independent person markers favour nominative–accusative alignment over all other alignment systems for dependent person markers. For instance, languages with ergative–absolutive alignment for independent person markers and with nominative–accusative alignment for dependent person markers include Byansi (Bodic; Sino-Tibetan: India), Copainalá Zoque (Mixe-Zoque: Mexico), Hua (Eastern Highlands; Trans-New Guinea: Papua New Guinea), Ingush (Nakh; Nakh-Daghestanian: Russia), Tauya (Madang; Trans-New Guinea: Papua New Guinea), Una (Mek; Trans-New Guinea: Indonesia), and Australian languages including Djaru (Western Pama-Nyungan; Pama-Nyungan), Malakmalak (Northern Daly), Pintupi (Western Pama-Nyungan; Pama-Nyungan), and Warlpiri (Western Pama-Nyungan; Pama-Nyungan). Siewierska (: ) notes that the converse situation, namely nominative–accusative alignment for
independent person markers and ergative–absolutive alignment for dependent person markers, occurs only rarely, with certain grammatical constraints on the alignment systems. Such rare languages include Badjiri (Karnic; Pama-Nyungan: Australia), Mundurukú (Tupian: Brazil), and Sahaptin (Sahaptin; Penutian: US).

Probably the most noticeable difference between independent and dependent person markers in Table 14.1 comes from active–stative alignment, which is extremely rare in independent as opposed to dependent person markers (.% vs .%). Active–stative alignment for dependent person markers is actually slightly more common than ergative–absolutive alignment. This tendency is said to be especially strong in North America and South America, but it is also evident in New Guinea, South East Asia, and Oceania. Tripartite alignment does not seem to show any preference for either independent or dependent person markers. The two languages exhibiting tripartite alignment in independent person markers are Lower Umpqua (Athapaskan; Na-Dene: US) and Wangkumara (Central Pama-Nyungan; Pama-Nyungan: Australia). Lastly, hierarchical alignment is attested for dependent person markers only.

14.2.2 Person marking and grammatical relations

Siewierska (: –) also examines the distribution of independent and dependent person markers in relation to grammatical relations such as subject and object (see Chapter ). First, she makes the observation that in languages that make much use of dependent person markers, independent person markers often operate ‘in a restricted set of circumstances’, e.g. used only as single-word responses to questions, as in Acoma (Keresan: US) and Wari (Chapacura-Wanham: Brazil). Nonetheless, Siewierska does not find any cross-linguistic restrictions on what grammatical relations can or cannot be borne by independent person markers. In so far as dependent person markers are concerned, they are much more commonly attested with arguments than with adjuncts.
This makes sense because person markers, especially first and second person markers, have human referents, and human referents tend to take on prominent grammatical relations such as subject and object, rather than adjunct functions. The distribution of dependent person markers with respect to four grammatical relations is presented in Table 14.2 (Siewierska : ).

Table 14.2 Dependent person markers (as a group) and grammatical relations (columns: Subject, Object1, Object2, Oblique; rows: number and percentage of languages with dependent person markers for each relation)

In Table 14.2, Subject corresponds to A (i.e. transitive subject), Object1 to P and also to the argument of a ditransitive clause that shares the same person marking as P of a monotransitive clause (see §.), Object2 to the other object of a ditransitive clause, and Oblique to any argument that does not bear the subject or object relation. Over four-fifths of Siewierska’s sample languages use dependent person markers for Subject, and just over two-thirds for Object1. In comparison, only small numbers of languages employ dependent person marking for Object2 and Oblique. In other words, the frequencies of dependent person markers for these grammatical relations can be presented in the form of a hierarchy, as in ().

() Subject > Object1 > Object2 > Oblique

This hierarchy of grammatical relations is basically identical to what was discussed in §.. The hierarchy in () also makes predictions for individual languages to the effect that the availability of dependent person markers for a given grammatical relation on the hierarchy entails the availability of dependent markers for all grammatical relations higher on the hierarchy. For instance, if a language uses dependent person markers for Object2, then it will use dependent person markers for Subject and Object1, and so on. While this is generally the case with Siewierska’s sample languages, there do exist exceptions, the majority of which are languages with clitic or bound person markers for Object but no dependent person markers for Subject. ||Ani (Khoe; Khoe-Kwadi: Botswana), Barai (Koiarian; Trans-New Guinea: Papua New Guinea), Karo-Batak (North-West Sumatra-Barrier Islands; Austronesian: Indonesia), Nivkh (isolate: Russia), Panyjima (Western Pama-Nyungan; Pama-Nyungan: Australia), and Sema (Kuki-Chin; Sino-Tibetan: India) are such exceptional languages.

The four types of dependent person markers, differing in terms of morphological dependence, can also be compared with each other in terms of phonological substance. Weak person markers have the largest amount of phonological substance, as they are very much like
independent words, albeit with distributional restrictions. Clitic person markers attach themselves to host elements, which renders them phonologically dependent, as if they were affixes. As a consequence, clitic person markers tend to have less phonological substance than weak person markers. Bound person markers, which typically are affixes, have less phonological substance than clitic person markers. Zero person markers by definition have the least or, more specifically, no phonological substance.

Siewierska (: ) ascertains how these four types of dependent markers distribute themselves over the hierarchy of grammatical relations in (). Her investigation reveals that in % of her sample languages, the less phonological substance dependent person markers have, the higher the grammatical relations they bear are on the hierarchy. Exceptions include languages that have zero person markers for Object but not for Subject, e.g. Finnish (Finnic; Uralic: Finland), Kewa (Engan; Trans-New Guinea: Papua New Guinea), and Imbabura Quechua (Quechuan: Ecuador), languages that have bound person markers for Object but weak person markers for Subject, e.g. Kiribatese (Oceanic; Austronesian: Micronesia), Kusaiean (Oceanic; Austronesian: Micronesia), Tigak (Oceanic; Austronesian: Papua New Guinea), and Yapese (Austronesian: Micronesia), and finally one language that uses zero person markers for Object2 but not for Object1, i.e. Trumai (isolate: Brazil).

Siewierska (: –) explains that the reason why dependent person markers show this strong tendency to prefer grammatical relations high on the hierarchy is that the higher the grammatical relations that referents bear, the more cognitively accessible those referents are (Givón ; Ariel ). Cognitively more accessible referents do not need as much coding as cognitively less accessible referents do, because cognitive accessibility means that less mental energy is required to retrieve referents from short-term memory.
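The implicational logic of the hierarchy Subject > Object1 > Object2 > Oblique can be sketched as a simple check: a language conforms to the prediction if the relations it marks with dependent person markers form an unbroken top segment of the hierarchy. The following is a hedged sketch only; the language data in the assertions are invented for illustration, not drawn from Siewierska's sample.

```python
# Sketch of the implicational prediction: dependent person marking for a
# relation entails dependent marking for every relation ranked higher.
# A language conforms if its marked relations form an unbroken initial
# segment of the hierarchy. Example data below are hypothetical.

HIERARCHY = ["Subject", "Object1", "Object2", "Oblique"]

def conforms(marked):
    """True if every relation outranking a marked relation is also marked."""
    ranks = [HIERARCHY.index(r) for r in marked]
    return not ranks or set(ranks) == set(range(max(ranks) + 1))

assert conforms({"Subject", "Object1"})       # unbroken top segment: fine
assert conforms(set())                        # no dependent marking at all: fine
assert not conforms({"Object1"})              # object marking without subject
                                              # marking: the ||Ani-type exception
assert not conforms({"Subject", "Object2"})   # skips Object1: an exception
```

Note that the exceptional languages mentioned above (object markers but no subject markers) are exactly the configurations for which such a check returns False.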
Thus, it makes much sense that Subject tends to receive less coding (read: less phonological substance) than Object does, for instance. As the reader will recall from §.., the hierarchy of grammatical relations was interpreted as a hierarchy of markedness: the lower grammatical relations are on the hierarchy, the more marked they are. Thus, Subject is the least marked while Oblique is the most marked. Cognitive accessibility can also be understood in terms of markedness: the more cognitively accessible X is, the less marked X is. Differences in markedness have implications for coding. The more marked X is, the more coding X receives. Conversely, the less marked X is, the less coding
X receives. As the reader will also recall from §., Haspelmath () makes an attempt to replace the concept of markedness with frequency of use. In Haspelmath’s alternative theory, grammatical relations high on the hierarchy receive less coding than grammatical relations low on the hierarchy because the former occur more frequently than the latter (again, Zipf's (: ) classic dictum: ‘High frequency is the cause of small magnitude’). For instance, Subject occurs in all sentence types but Object occurs in a much smaller number of sentence types; accordingly, Subject receives less coding than Object. (See Chapter  for further discussion, including the criticism of frequency of use.)

14.3 Paradigmatic structure: towards a typology

As pointed out in §., English is a language that possesses a most unusual kind of person marking. In this language, independent person marking makes a three-way person distinction between first, second, and third person in the singular as well as the plural, as in:

() English (Germanic; Indo-European: UK)
                 Singular     Plural
First Person     I            we
Second Person    you          you
Third Person     he/she/it    they

Note, however, that the second person, unlike the first and third person, does not make a number distinction between singular and plural forms; the same morpheme you is used instead. This absence of a number distinction between the second person singular and plural—in conjunction with the presence of a number distinction in the remaining persons—is a rarity in the world’s languages indeed. Similarly, dependent marking in English behaves in a most unusual manner in that when the subject nominal is the third person singular, the suffix -s appears on the verb in the present tense, while the remaining person/number combinations trigger no such verbal agreement, as in:

() English (Germanic; Indo-European: UK)
                 Singular     Plural
First Person     -Ø           -Ø
Second Person    -Ø           -Ø
Third Person     -s           -Ø

The two sets of person markers in () and () illustrate how person marking morphology can be organized differently—even in one and the same language. This kind of organization is referred to generally as paradigmatic structure. As the reader recalls, a paradigm is a set of linguistic expressions that can potentially appear in the same syntactic or syntagmatic position in the structure of a language. The difference between the paradigmatic structure in () and that in () lies in the presence or absence of homophony among the person markers. Homophony refers to two or more person/number values combined into the reference of one and the same morpheme (Cysouw : ). In (), the second person singular and second person plural exhibit homophony. In (), only the third person singular is marked by a non-zero suffix, with the remaining person/number values zero-marked (i.e. homophonous). Moreover, the extent of homophony in () is much greater than in (): homophony in () involves five values (i.e. first person singular and plural, second person singular and plural, and third person plural), while that in () involves only two (i.e. second person singular and plural).

The fact that the two paradigmatic structures in English are very rarely found in the world’s languages leads further to an interesting question as to what kinds of paradigmatic structure are attested in the world’s languages, and also which paradigmatic structures are more or less common than others. This is the main topic of this section, based largely on the most comprehensive study of paradigmatic structures of person marking, Cysouw (; cf. Bhat ; Siewierska ). But first, there is some preliminary discussion that needs to be had.

In person marking, paradigmatic structure can in principle be examined in conjunction with other grammatical categories that the person category may interact with. The grammatical categories that may combine with person include number, gender, and case, among others.
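A paradigmatic structure in this sense can be thought of as a partition of the six person/number cells by form, with homophony corresponding to cells that share a form. The following sketch (names and representation are my own, not Cysouw's notation) computes the homophony sets for the two English paradigms above:

```python
# Sketch: measuring homophony in a person paradigm by grouping the six
# person/number cells that share a form. The two paradigms are the
# English independent markers and agreement suffixes discussed above.

from collections import defaultdict

def homophony_sets(paradigm):
    """Return the groups of two or more cells expressed by one form."""
    groups = defaultdict(set)
    for cell, form in paradigm.items():
        groups[form].add(cell)
    return {form: cells for form, cells in groups.items() if len(cells) > 1}

independent = {("1", "sg"): "I",   ("1", "pl"): "we",
               ("2", "sg"): "you", ("2", "pl"): "you",
               ("3", "sg"): "he/she/it", ("3", "pl"): "they"}

agreement = {("1", "sg"): "-0", ("1", "pl"): "-0",
             ("2", "sg"): "-0", ("2", "pl"): "-0",
             ("3", "sg"): "-s", ("3", "pl"): "-0"}

# Independent forms: a single homophony set of two cells (2sg = 2pl 'you').
assert homophony_sets(independent) == {"you": {("2", "sg"), ("2", "pl")}}

# Agreement suffixes: a single homophony set of five zero-marked cells.
assert len(homophony_sets(agreement)["-0"]) == 5
```

Under this representation, Cysouw-style comparison of paradigms amounts to comparing such partitions across languages, abstracting away from the phonological shape of the forms themselves.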
Typically, person marking is always investigated in conjunction with number, and to a lesser extent, with gender, for languages that make gender distinctions in person marking (e.g. English third person singular, he/she/it). In this section, only number will be investigated in conjunction with person in the interests of space, and also only two number values, namely singular and plural, will be included in the discussion. While it is commonly used in linguistics, the term ‘plural’ is found to be problematic—in the context of person marking—in at least three ways. First, in so far as the first and second person are concerned, the 


term does not seem to reflect the linguistic reality of person marking. The first person singular refers to one speaker. Thus, the first person plural should in turn refer to more than one speaker. It is possible for more than one person to speak together at the same time (e.g. a group of people chanting in unison at a concert or a religious or political meeting), but this situation is far from normal. Generally, no more than one person speaks at a given time. Rather, the first person plural marker is highly likely to refer to the speaker and one or more people with whom the speaker associates herself or himself. Indeed, there does not seem to be any language that has a grammaticalized morpheme to refer to mass speaking (Cysouw : ). A similar comment can also be made about the second person plural. While it is possible to speak to more than one addressee at the same time (e.g. a tour guide speaking to a group of tourists), a second person plural referring purely to a mass of simultaneous addressees does not seem to exist as a grammaticalized category in the world's languages either (Cysouw : ). Thus, the more likely interpretation of the second person plural marker may be the addressee and one or more persons with whom the addressee associates herself or himself. Second, plural marking morphology used in conjunction with nominals tends not to be used in conjunction with person markers. In fact, the use of nominal plurality morphology for person marking is said to be a rarity in the world's languages (Cysouw : ). Even in the rare languages where such extended use is attested, it tends to be confined to part of a paradigm or to be used only optionally. Finally, there is also an important variation on the first person plural: the distinction between inclusive and exclusive. In English, the first person plural we does not distinguish between the combination of the speaker and the addressee and that of the speaker and someone other than the addressee.
Thus, the person marker we in () can mean either the speaker and the addressee (a) or the speaker and someone other than the addressee (b).

()  English (Germanic; Indo-European: UK)
    We are going to be together forever.

()  English (Germanic; Indo-European: UK)
    a. We are going to be together forever, you and me.
    b. We are going to be together forever, Zonika and me.

The first person plural reference that includes the addressee (a) is known as inclusive and the first person plural reference that does not


include the addressee (b) as exclusive. This person-marking distinction is commonly attested in the Pacific, especially in Austronesian and non-Pama-Nyungan languages, but uncommon in Africa and Eurasia (Cysouw a, b). Useful as it is, the inclusive–exclusive distinction is confined to the first person and irrelevant to the other two persons. In other words, the distinction in question needs to be invoked in relation to only one of the three persons. Theoretically speaking, this may be rather costly or, in the words of Cysouw (: ), 'a waste of apparatus'. For the foregoing three reasons, Cysouw (: –) adopts the term 'group' in preference to the traditional term 'plural'. In other words, the first person plural is to be known as the first person group, which involves the speaker and either the addressee (inclusive) or someone other than the addressee (exclusive). The second person plural is to be called the second person group, which involves the addressee and someone other than the speaker. The third person plural involves people or entities other than the speaker and the addressee, that is, more than one third-party person or entity. The third person plural is thus the true plural equivalent of the third person singular, but for the sake of consistency the term third person group will be used instead. To wit, the term 'group' represents a change of perspective from number (i.e. one vs more than one) to kind (i.e. different combinations of the speaker, the addressee, and the rest). Traditionally, when person marking is examined in conjunction with number, six person/number values are recognized, namely: ()

Six-part person-marking paradigmatic structure

    First Person Singular     First Person Plural
    Second Person Singular    Second Person Plural
    Third Person Singular     Third Person Plural

The paradigmatic structure in () is illustrated by the English person markers in (), albeit with homophony in the second person. While this paradigmatic structure is common, it certainly is not the only common one. There are other common paradigmatic structures in the world’s languages. For instance, the inclusive–exclusive distinction is one of the properties attested commonly in person marking. Adopting the concept of group instead of plural, we can think of plurality as different 

OUP CORRECTED PROOF – FINAL, 20/11/2017, SPi

14 . 3 P A R A D I G M A T I C S T R U C T U R E : T O W A R D S A T Y P O L O G Y

combinations of person values. This way, we can think of seven group values. These seven group values and the three singular values (i.e. first person singular, second person singular, and third person singular) give us ten possible person/number values, as enumerated in (). Note that 1 = first person, 2 = second person, and 3 = third person.

()  Group:    1+1; 1+2; 1+3; 1+2+3; 2+2; 2+3; 3+3
    Singular: 1; 2; 3

For reasons already explained, we can rule out 1+1 (mass speaking) and 2+2 (present audience) as ungrammaticalized values, leaving eight values under scrutiny. The three singular values, i.e. 1, 2, and 3, do not need any explanation. Nor do the two group values 2+3 (the addressee and his/her associates) and 3+3 (the rest). The combination 1+2 is the first person group including the addressee (i.e. inclusive), while the combination 1+3 is the first person group excluding the addressee (i.e. exclusive). The combination 1+2+3 is the first person group including not only the addressee but also the rest. Since it involves the addressee, the combination 1+2+3 is also a type of first person inclusive. The combination 1+2 is referred to as minimal inclusive, and the combination 1+2+3 as augmented inclusive. Following Cysouw (: ), we can now construct a paradigmatic-structure model, as in:

()
                 Singular    Group
    Speaker      1           1+2      minimal inclusive   }
                             1+2+3    augmented inclusive } inclusive
                             1+3      exclusive
    Addressee    2           2+3      second person plural
    Other        3           3+3      third person plural
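The combinatorics behind this model can be made concrete with a short sketch. The code below is illustrative only: the tuple representation and the names are mine, not Cysouw's. Treating 1, 2, and 3 as participant kinds, the seven group values are the size-two multisets plus the full 1+2+3 combination; ruling out 1+1 and 2+2 leaves the eight values under scrutiny.

```python
# Enumerating the person/number values discussed above (illustrative sketch).
from itertools import combinations_with_replacement

PARTICIPANTS = (1, 2, 3)  # 1 = speaker, 2 = addressee, 3 = other

# The three singular values.
singular = [(p,) for p in PARTICIPANTS]

# The seven logically possible group values: size-two multisets
# plus the full speaker-addressee-other combination.
group = list(combinations_with_replacement(PARTICIPANTS, 2)) + [(1, 2, 3)]

# Mass speaking (1+1) and a grammaticalized 'present audience' (2+2)
# are unattested, leaving eight values in total.
UNGRAMMATICALIZED = {(1, 1), (2, 2)}
attested = singular + [g for g in group if g not in UNGRAMMATICALIZED]

def label(v):
    return "+".join(str(p) for p in v)

print("group values:   ", [label(v) for v in group])      # seven values
print("under scrutiny: ", [label(v) for v in attested])   # eight values
```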

Different paradigmatic structures arise from the different ways in which the eight person/number values are organized within as well as across languages. For a given paradigmatic structure, if two or more values are


linked with each other by appearing in a single block, this means that they are related to each other by way of homophony. There are three different kinds of homophony: singular homophony, vertical homophony, and horizontal homophony. Singular homophony involves singular values, namely 1, 2, and 3. Vertical homophony involves homophony between group values, namely 1+2, 1+2+3, 1+3, 2+3, and 3+3. Horizontal homophony is between singular and group values, e.g. homophony between 3 and 3+3. These three kinds of homophony are illustrated diagrammatically in: ()

    Singular homophony:    two singular values share a form (e.g. 2 = 3)
    Vertical homophony:    two group values share a form (e.g. 2+3 = 3+3)
    Horizontal homophony:  a singular and a group value share a form (e.g. 1 = 1+3)

Person/number values in a homophonous block may not be contiguous. In such a case, homophonous values are linked together by means of a narrow connecting corridor, as in the first person singular and third person singular imperfect suffix (i.e. -Ø) in Spanish: ()

    1 = 3                -Ø
    2                    -s
    1+2 = 1+2+3 = 1+3    -mos
    2+3                  -is
    3+3                  -n

In (), the homophony between the first person minimal inclusive, augmented inclusive, and exclusive is contiguous (hence the linkage via a single rectangular block), while that between the first person singular 


and the third person singular is non-contiguous (hence the linkage via a narrow corridor). Cysouw () identifies as many as fifty-eight different paradigmatic structures, involving singular and group person markers, in his variety sample. Out of these fifty-eight structures, eight are common, five are semi-common, and the remainder (forty-five) are rare. This kind of lopsidedness in frequency of occurrence points to some interesting generalizations to be uncovered and then explained. Cysouw (: ) defines the differences between common, semi-common, and rare in the following way. A paradigmatic structure is regarded as common if it is attested widely in the world's languages and is typical of at least two genealogical families. A paradigmatic structure is taken to be rare if it is attested in only one or two unrelated examples, or in a few closely related examples. A semi-common paradigmatic structure occurs in more than five genealogically and areally independent cases and is commonly attested in one genealogical family. We do not have space to look at each of the fifty-eight paradigmatic structures. We will thus focus on a selection of common, semi-common, and rare paradigmatic structures, and on the generalizations emerging from their frequencies of occurrence. Following Cysouw (), we will first present a brief summary of singular person marking and then examine the selected paradigmatic structures, in four groupings, from the perspective of group marking. As will be shown, the presence or absence of the inclusive/exclusive distinction is an important parameter in person marking, especially in terms of homophony. Thus, paradigmatic structures can be grouped together based on the presence or absence of the distinction in question: paradigmatic structures with the inclusive/exclusive distinction and paradigmatic structures without it.
Within each of these two types of paradigmatic structure, there is a further division between non-homophonous (or split) group marking and homophonous group marking. In other words, there are four groupings to be discussed:

()  Grouping A: no inclusive/exclusive with split group marking
    Grouping B: no inclusive/exclusive with homophonous group marking
    Grouping C: inclusive/exclusive with split group marking
    Grouping D: inclusive/exclusive with homophonous group marking
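The four-way grouping can be sketched in code if a paradigm is represented as a mapping from person/number values to forms. This is my own rough formalization, not Cysouw's procedure, and the dictionaries below use plain ASCII stand-ins for the actual phonological forms.

```python
def grouping(paradigm):
    """Assign a paradigm to Grouping A-D by (i) the inclusive/exclusive
    distinction and (ii) split vs homophonous group marking."""
    incl_excl = paradigm["1+2"] != paradigm["1+3"]
    # Group marking is 'split' when the first, second, and third person
    # group markers are all distinct from one another.
    first = {paradigm[v] for v in ("1+2", "1+2+3", "1+3")}
    second, third = paradigm["2+3"], paradigm["3+3"]
    split = second not in first and third not in first and second != third
    if not incl_excl:
        return "A" if split else "B"
    return "C" if split else "D"

# Latin present indicative: unified-we, split group marking.
latin = {"1": "-o", "2": "-s", "3": "-t",
         "1+2": "-mus", "1+2+3": "-mus", "1+3": "-mus",
         "2+3": "-tis", "3+3": "-unt"}
# Nez Perce independent pronouns: 'we' vs 'non-we' group homophony.
nez_perce = {"1": "'iin", "2": "'iim", "3": "'ipi",
             "1+2": "nuun", "1+2+3": "nuun", "1+3": "nuun",
             "2+3": "'ime", "3+3": "'ime"}
# Maranao pronouns: minimal/augmented inclusive, fully split.
maranao = {"1": "ako", "2": "ka", "3": "sekanian",
           "1+2": "ta", "1+2+3": "tano", "1+3": "kami",
           "2+3": "kano", "3+3": "siran"}
print(grouping(latin), grouping(nez_perce), grouping(maranao))  # A B C
```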


First of all, the most frequent paradigmatic structure of singular person marking is the three-way distinction between the first, second, and third person, so much so that singular homophony may be 'too rare a phenomenon to reach a noticeable frequency in a strict typological sample' (Cysouw : ). Moreover, the majority of paradigmatic structures with singular homophony also display vertical homophony. In particular, singular homophony is completely absent from paradigmatic structures with the inclusive/exclusive distinction. Rare though it may be, singular homophony does occur. Among the rare paradigmatic structures with singular homophony, the most common opposition is between the first person on the one hand and the homophonous second and third person on the other (i.e. 1 ≠ 2 = 3, or speaker vs non-speaker), as illustrated by the dependent marking (i.e. verbal agreement in the present tense) in Dutch: ()

Dutch (Germanic; Indo-European: Netherlands)
    a. ik           loop-Ø
       1SG.PRON     walk-1SG
       'I walk.'
    b. jij          loop-t
       2SG.PRON     walk-2/3SG
       'You walk.'
    c. hij/zij/het  loop-t
       3SG.PRON     walk-2/3SG
       'S/he walks.'

()
    1        -Ø
    2 = 3    -t

The Dutch examples in (), as schematized in (), are also emblematic of a strong cross-linguistic tendency involving singular homophony: singular homophony is almost always found in inflectional paradigms (i.e. bound morphemes or, generally, dependent person marking). Conversely, independent person marking prefers to have a 


three-way distinction between the first, second, and third person. The rarity of singular homophony, together with its strong tendency to co-occur with vertical homophony, makes it useful to survey paradigmatic structures in the world's languages from the perspective of group marking, as enumerated in ().
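Before turning to the four groupings, the three homophony notions can themselves be made precise under the same dictionary representation of a paradigm (value → form). The helper below is my own formalization, not Cysouw's; note that it labels conflation within the first person group values as unified-we rather than as vertical homophony, in line with the distinction drawn later in this section.

```python
from itertools import combinations

SINGULAR = {"1", "2", "3"}
FIRST_GROUP = {"1+2", "1+2+3", "1+3"}

def homophony(paradigm):
    """List homophonous value pairs in a paradigm, labelled by kind."""
    pairs = []
    for a, b in combinations(sorted(paradigm), 2):
        if paradigm[a] != paradigm[b]:
            continue  # different forms: no homophony for this pair
        if a in SINGULAR and b in SINGULAR:
            kind = "singular"
        elif a in SINGULAR or b in SINGULAR:
            kind = "horizontal"
        elif {a, b} <= FIRST_GROUP:
            kind = "unified-we"  # conflation within the first person group
        else:
            kind = "vertical"
        pairs.append((a, b, kind))
    return pairs

# The Spanish imperfect suffixes discussed above: -Ø links the
# non-contiguous first and third person singular cells.
spanish = {"1": "-Ø", "2": "-s", "3": "-Ø",
           "1+2": "-mos", "1+2+3": "-mos", "1+3": "-mos",
           "2+3": "-is", "3+3": "-n"}
for a, b, kind in homophony(spanish):
    print(a, "=", b, ":", kind)
```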

14.3.1 Grouping A: no inclusive/exclusive with split group marking

In this grouping, paradigmatic structures do not make a distinction between inclusive and exclusive, but they do make a distinction between the three persons in group (or non-singular) marking. In other words, 1+2, 1+2+3, and 1+3 are all expressed by one and the same morpheme, what Cysouw (: –) calls 'unified-we', and the unified first person group marker contrasts with the other two person groups. Cysouw identifies fifteen paradigmatic structures in his sample, four common and eleven rare. We will look at the four common structures and two of the rare ones. One of the four common paradigmatic structures in the present grouping is the traditional six-way structure, as in the Latin inflectional endings in the present indicative:

() Latin (Italic; Indo-European)

    1                    -o
    2                    -s
    3                    -t
    1+2 = 1+2+3 = 1+3    -mus
    2+3                  -tis
    3+3                  -unt

The paradigmatic structure exemplified in () is found in Uralic, Mande, and Nilotic languages, and in Turkish (Turkic; Altaic: Turkey). It is also widely attested in Papuan languages from New Guinea. While not as common as elsewhere, it is also found in the languages of the Americas, e.g. Ika (Arhuacic; Chibchan: Colombia) and Epena Pedee (Choco: Colombia). Another common paradigmatic structure is found in Sinhala (Indic; Indo-European: Sri Lanka), in North American languages, and in languages from New Guinea such as Sentani (isolate: Indonesia) and Asmat


(Asmat-Kamoro; Trans-New Guinea: Indonesia). An example comes from the patient prefixes in Chickasaw (Muskogean: US), as in: ()

Chickasaw (Muskogean: US)

    1                    sa-
    2                    chi-
    3 = 3+3              Ø-
    1+2 = 1+2+3 = 1+3    po-
    2+3                  hachchi-

The difference between the Latin-type () and the Chickasaw-type paradigmatic structure () is that, in the latter, there is horizontal homophony between the third person singular and group marking. The third common paradigmatic structure has even more horizontal homophony, that is, not just homophony between the third person singular and group but also between the second person singular and group, as illustrated by the independent person pronouns in Berik. ()

Berik (Tor; Tor-Orya: Indonesia)

    1                    ai
    2 = 2+3              aame
    3 = 3+3              je
    1+2 = 1+2+3 = 1+3    ne

Languages with the Berik-type paradigmatic structure include Chulupi (Matacoan: Paraguay), Coahuilteco (Coahuiltecan: Mexico), Georgian (Kartvelian: Georgia), Jéi (Morehead-Upper Maro; South-Central Papuan: Indonesia), Kuman (Chimbu; Trans-New Guinea: Papua New Guinea), Lango (Nilotic; Eastern Sudanic: Uganda), Mbay (Bongo-Bagirmi; Central Sudanic: Chad), Eastern Nilotic languages, Gé languages, and Siouan languages. The last common paradigmatic structure has even more horizontal homophony, that is, between the first person singular and group as well, as exemplified by the intransitive subject prefixes in Maricopa: 


() Maricopa (Yuman; Hokan: US)

    1 = 1+2 = 1+2+3 = 1+3    ʔ-
    2 = 2+3                  m-
    3 = 3+3                  Ø-

The paradigmatic structure in Maricopa is found in all the other Yuman languages. In fact, it is generally attested among the languages of the Americas. Outside these languages, it hardly seems to be in use for dependent person marking. In the case of independent person marking, however, the Maricopa paradigmatic structure is said to be fairly widespread. Languages with the Maricopa-type structure for independent person marking include Páez (Paezan: Colombia), Salt-Yui (Chimbu; Trans-New Guinea: Papua New Guinea), and South-East Asian languages. There are eleven rare paradigmatic structures in Grouping A. One such structure has already been mentioned, namely the independent person marking of English (), which is represented in a paradigmatic template, as in:

() English (Germanic; Indo-European: UK)

    1                    I
    2 = 2+3              you
    3                    he/she/it
    1+2 = 1+2+3 = 1+3    we
    3+3                  they

This rare paradigmatic structure is also found in Xokleng (Ge-Kaingang; Macro-Ge: Brazil), and in varieties of South American Spanish in which the second person plural vos has displaced the second person singular tú. Classical Ainu illustrates another rare paradigmatic structure, an exact opposite of () in that there is horizontal homophony between the first person singular and group as well as between the third person


singular and group, while there is a distinction between the second person singular and group, as in: ()

Classical Ainu (isolate: Japan)

    1 = 1+2 = 1+2+3 = 1+3    a-
    2                        e-
    2+3                      eci-
    3 = 3+3                  Ø-

14.3.2 Grouping B: no inclusive/exclusive with homophonous group marking

There are four semi-common and twenty rare paradigmatic structures in this grouping. These structures all involve vertical homophony, and we will look at two of the semi-common and two of the rare ones. One of the semi-common structures contains vertical homophony between the first person group and the second person group. In other words, there is an opposition between the speech act participants (i.e. speaker and addressee) and the rest. This structure is exemplified by independent person marking in the Athapaskan languages. For instance, Slave exhibits the paradigmatic structure in question:

()

Slave (Athapaskan; Na-Dene: Canada)

    1                          …
    2                          …
    3                          ʔedį
    1+2 = 1+2+3 = 1+3 = 2+3    naxį
    3+3                        ʔegedį

Languages that have the same paradigmatic structure include Awa (Gadsup-Auyana-Awa; Trans-New Guinea: Papua New Guinea), the Fehan dialect of Tetun (Central-Malayo-Polynesian; Austronesian: 


East Timor), the Tommo-so variety of Dogon (Volta-Congo; Niger-Congo: Mali), and Fongbe (Kwa; Niger-Congo: Benin). One of the two rare structures to be discussed in this section is a variant of the paradigmatic structure in (), exemplified by the past simple suffixes in Waskia:

() Waskia (Madang; Trans-New Guinea: Papua New Guinea)

    1 = 2                      -em
    3                          -am
    1+2 = 1+2+3 = 1+3 = 2+3    -man
    3+3                        -un

Note that in () the homophony between the first person group and the second person group is mirrored by the homophony between the first person singular and the second person singular. Another semi-common example of Grouping B involves vertical homophony between the second person group and the third person group, giving rise to an opposition between ‘we’ and ‘non-we’, as it were. Languages such as Kati (Ok; Trans-New Guinea: Indonesia), Mauritian Creole (Mauritius), Warekena (Inland Northern Arawakan; Arawakan: Brazil, Columbia, and Venezuela), and Wolof (Northern Atlantic; Niger-Congo: Gambia and Senegal) have this kind of paradigmatic structure, illustrated here by Nez Perce: () Nez Perce (Sahaptian; Penutian: US)

    1                    ’íin
    2                    ’íim
    3                    ’ipí
    1+2 = 1+2+3 = 1+3    núun
    2+3 = 3+3            ’imé

A rare paradigmatic structure in Grouping B is a variation of the structure in () in that not only is there vertical homophony between 


the second person group and the third person group, but there is also singular and horizontal homophony involving the second and third person. This rare structure is evident in Chukotko-Kamchatkan languages, illustrated by the intransitive suffixes in Koryak. ()

Koryak (Northern Chukotko-Kamchatkan; Chukotko-Kamchatkan: Russia)

    1                    …
    2 = 3 = 2+3 = 3+3    Ø-
    1+2 = 1+2+3 = 1+3    mәtt-

14.3.3 Grouping C: inclusive/exclusive with split group marking

Cysouw () identifies seven paradigmatic structures in this grouping: four common, one semi-common, and two rare. We will examine one of each. As noted earlier, singular homophony is unattested in this grouping. One relatively frequently attested structure in the world's languages is characterized by a three-way distinction within the first person group: minimal inclusive vs augmented inclusive vs exclusive. Maranao provides an example of this structure:

()

Maranao (Greater Central Philippine; Austronesian: Philippines)

    1        ako
    2        ka
    3        sekanian
    1+2      ta
    1+2+3    tano
    1+3      kami
    2+3      kano
    3+3      siran

This fully partitioned paradigmatic structure is attested very commonly in the Philippine languages, and also frequently in the non-Pama-Nyungan languages from Australia. In addition, it is also found in Africa, most of the 


examples coming from Niger-Congo and Chadic languages spoken in Cameroon and Nigeria. The structure in question does not seem to be common among North and South American languages. The only semi-common structure in Grouping C is illustrated by the subject suffixes in Kwakiutl:

() Kwakiutl (Northern Wakashan; Wakashan: Canada)

    1              -ɛn(L)
    2 = 2+3        -ɛs
    3 = 3+3        …
    1+2 = 1+2+3    -ɛnts
    1+3            -ɛnusx̥u

In the paradigmatic structure in (), there is an opposition in the first person group between inclusive and exclusive, and there is horizontal homophony in the second person as well as in the third person. This structure is also attested in Acehnese (Malayo-Sumbawan; Austronesian: Indonesia), Apalaí (Cariban: Brazil), Karo Batak (North-West Sumatra-Barrier Islands; Austronesian: Indonesia), Maxakalí (Macro-Ge: Brazil), Mekeo (Oceanic; Austronesian: Papua New Guinea), Palauan (Austronesian: Palau), and Svan (Kartvelian: Georgia). An Australian language called Warrwa exhibits the two rare paradigmatic structures in Grouping C. One of these rare structures, attested in the actor prefixes used in the non-future tense, is illustrated in:

() Warrwa (Nyulnyulan: Australia)

    1 = 1+3        ka-
    2 = 2+3        wa-
    3              Ø-
    1+2 = 1+2+3    ya-
    3+3            ku-


In (), there is horizontal homophony between the first person singular and exclusive as well as between the second person singular and group. There is, however, a distinction between inclusive and exclusive as well as between the third person singular and group. 14.3.4 Grouping D: inclusive/exclusive with homophonous group marking This grouping has only rare structures, twelve of them in total. Generally, there is a strong dispreference for homophony. Otherwise, we would expect common and/or semi-common structures to exist in this grouping. As in the case of Grouping C, singular homophony is also unattested in Grouping D. Two of the rare structures are: ()

Tiwi (Tiwian: Australia)

    1                    mәni-
    2                    mәn̪i-
    3                    Ø-
    1+2 = 1+2+3 = 2+3    mani-
    1+3                  mәwәni-
    3+3                  wәni-

The person markers in () are object prefixes, with the inclusive/exclusive distinction but with vertical homophony between the first person inclusive and the second person group. Northern Paiute is a language with another rare paradigmatic structure, with vertical homophony between the second and third person group object pronouns, as in:

()

Northern Paiute (Numic; Uto-Aztecan: US)

    1            i
    2            ɨ
    3            pi/u
    1+2          ta
    1+2+3        ti
    1+3          ni
    2+3 = 3+3    imi


Recall that in Grouping B, with no inclusive/exclusive distinction, vertical homophony between the second and third person group marking is relatively common (e.g. Nez Perce in ()). But the same homophony is rarely attested in Grouping D, with the inclusive/exclusive distinction.

14.3.5 Structural dependencies in paradigmatic structure

Intuitively speaking, the more distinctions a system has, the more variation it may have. This makes sense because more distinctions (e.g. twelve contrasting vowels) create more 'room' for variation to show up than fewer distinctions (e.g. three contrasting vowels) do. However, when we look at the actual frequency distribution of common, semi-common, and rare paradigmatic structures, as reported in Cysouw (: ), nothing could be further from what intuition tells us, as can be seen from Table 14.3.

Table 14.3 Frequencies of paradigmatic structures attested

                   No inclusive/exclusive     With inclusive/exclusive
                   Grouping A   Grouping B    Grouping C   Grouping D    Total
    Common              4            0             4            0           8
    Semi-common         0            4             1            0           5
    Rare               11           20             2           12          45
    Total              15           24             7           12          58
Recall that Groupings A and B lack the inclusive/exclusive distinction while Groupings C and D maintain the distinction. In other words, Groupings C and D have more distinctions than Groupings A and B. From Table 14.3, it is clear not only that there are more paradigmatic structures on the side of no inclusive/exclusive distinction, but also that far more rare paradigmatic structures are attested on the same side.4

4 Note that there are five more rare structures involving combinations of 1+2, 1+2+3, and 1+3 that have not been discussed in this section. In other words, there are five additional rare paradigmatic structures (e.g. 1+2 = 1+3 ≠ 1+2+3) to join Groupings C and D, as opposed to Groupings A and B. This additional information does not have a bearing on the point being made here, however.




Also, Grouping B has as many as four semi-common paradigmatic structures involving homophonous group marking, while Grouping D has only rare structures involving homophonous group marking. Thus, there is more variation, both quantitative and qualitative, in paradigmatic structures without the inclusive/exclusive distinction than in those with the distinction. Moreover, paradigms with a further distinction between minimal and augmented inclusive (e.g. Maranao in ()) show even less or, actually, little variation. Thus, the cross-linguistic reality is that 'the more categories [i.e. values] are distinguished in a paradigm, the less paradigmatic variation exists' (Cysouw : ). To put it differently, the more explicit person marking is, the less likely it is for a paradigm to conflate different person-marking values. Moreover, vertical and singular homophony are almost always attested in paradigms without the inclusive/exclusive distinction. In fact, vertical and singular homophony hardly turn up in paradigms with the inclusive/exclusive distinction. In addition, as already pointed out, singular homophony is almost always found in paradigms that display vertical homophony as well. Cysouw (: ) points out that singular homophony without vertical homophony is 'only attested incidentally in a few European languages'. Cysouw (: ) invokes the gradual notion of 'pure person' to capture the extent to which the speaker and the addressee are distinguished explicitly in person-marking paradigms, and ultimately to explain the structural dependencies discussed above.5 For instance, paradigms with the unified-we structure make no distinction whatsoever among the first person group values (i.e. 1+2 = 1+2+3 = 1+3). Thus, the reference to the addressee is not taken to be important enough to be made explicit, because a unified first person group marker (e.g. English we in ()) does not indicate whether the addressee is included in its reference or not.
In contrast, paradigms with the inclusive/exclusive distinction make a distinction between 1+2 & 1+2+3 on the one hand and 1+3 on the other (i.e. 1+2 = 1+2+3 ≠ 1+3). In such paradigms, the reference to the speaker and the

5 Needless to say, these structural dependencies will need to be tested further on the basis of a statistically robust random sample, as Cysouw’s sample is more of a variety kind (see §.).




addressee, and the rest) contrasts with the reference to the speaker without the addressee. Thus, not only the reference to the speaker but also the reference to the addressee is 'kept "pure"' (Cysouw : ). In the case of paradigms with the minimal/augmented inclusive distinction, the degree of explicitness in marking the reference to the speaker and addressee is even greater, as the reference to the speaker and addressee contrasts with the reference to the speech act participants and the rest grouped together. Moreover, Cysouw argues that the lack of a distinction between inclusive and exclusive in paradigms with the unified-we structure leads to the conflation of other person values. This is why vertical and singular homophony are almost always attested in paradigms without the inclusive/exclusive distinction, whereas they are hardly found in paradigms with the distinction. Cysouw (: ) pulls all these different structural dependencies together, proposing what he calls the Explicitness Hierarchy.

()  Explicitness Hierarchy
    singular homophony > vertical homophony > unified-we > inclusive/exclusive > minimal/augmented inclusive

The Explicitness Hierarchy in () indicates that the further to the right, the more explicitly the person category is marked. Lastly, Cysouw (: –) also makes an interesting observation as to the extent of horizontal homophony. This is captured by a hierarchy that comes in two versions, one for paradigmatic structures with the inclusive/exclusive distinction and the other for those without the distinction:

()  a. Horizontal Homophony Hierarchy I (with inclusive/exclusive)
       no homophony < third < second < exclusive
    b. Horizontal Homophony Hierarchy II (no inclusive/exclusive)
       no homophony < third < second < first

These horizontal homophony hierarchies should be interpreted in the following way. First, either there is no horizontal homophony or there is. If there is one instance of horizontal homophony, it will be between the third person singular and group marking. If there is one more instance of horizontal homophony, it will be between the second person singular and group marking, in addition to the horizontal homophony in the third person. If there is yet another instance of


horizontal homophony, it will be between the first person singular and either the first person exclusive (in the presence of the inclusive/exclusive distinction) or the unified-we (in the absence of the inclusive/exclusive distinction), in addition to the horizontal homophony in the second and third person. In Cysouw's sample, other kinds of horizontal homophony are almost unattested in the case of (a), and only rare permutations exist in the case of (b). The two horizontal homophony hierarchies can now be grafted on to the Explicitness Hierarchy, as in ().

()  Explicitness Hierarchy in conjunction with the Horizontal Homophony Hierarchy

    (with inclusive/exclusive)  no homophony < third < second < exclusive

    singular homophony > vertical homophony > unified-we > inclusive/exclusive > minimal/augmented inclusive

    (no inclusive/exclusive)    no homophony < third < second < first

Thus, when the first person group marking does not make a distinction between the inclusive and exclusive, horizontal homophony may also take place initially between the third person singular and group, then between the second person singular and group, and finally between the first person singular and group. Similarly, when the first person group marking makes a distinction between the inclusive and exclusive, horizontal homophony may also take place initially between the third person singular and group, then between the second person singular and group, and finally between the first person singular and the first person exclusive. Moreover, the overall hierarchy captures a number of implications. For there to be a distinction between the minimal and augmented inclusive, there should already be a distinction between the inclusive and exclusive; for there to be vertical homophony, there should be no distinction between the inclusive and exclusive, that is, unified-we in use; and for there to be singular homophony, there should already be vertical homophony. 
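These implicational dependencies lend themselves to a mechanical check. The sketch below is illustrative only: the boolean feature names are invented for this sketch (they are not Cysouw’s notation), and it simply restates the three implications as constraints over a paradigm description.

```python
# Illustrative sketch only: the three implicational dependencies that
# Cysouw's Explicitness Hierarchy captures, restated as checks over a
# paradigm description. The boolean feature names are invented here.

def violations(paradigm):
    """Return a list of the implications that `paradigm` violates.

    `paradigm` maps four invented feature names to booleans:
      'incl_excl'    - distinguishes inclusive vs exclusive
      'min_aug_incl' - distinguishes minimal vs augmented inclusive
      'vertical_hom' - shows vertical homophony
      'singular_hom' - shows singular homophony
    """
    problems = []
    # A minimal/augmented inclusive presupposes the inclusive/exclusive split.
    if paradigm['min_aug_incl'] and not paradigm['incl_excl']:
        problems.append('min/aug inclusive without incl/excl distinction')
    # Vertical homophony presupposes a unified-we (no incl/excl split).
    if paradigm['vertical_hom'] and paradigm['incl_excl']:
        problems.append('vertical homophony despite incl/excl distinction')
    # Singular homophony presupposes vertical homophony.
    if paradigm['singular_hom'] and not paradigm['vertical_hom']:
        problems.append('singular homophony without vertical homophony')
    return problems

# A unified-we paradigm with vertical homophony: consistent, no violations.
print(violations({'incl_excl': False, 'min_aug_incl': False,
                  'vertical_hom': True, 'singular_hom': False}))   # → []
# Singular homophony without vertical homophony: ruled out by the hierarchy.
print(violations({'incl_excl': False, 'min_aug_incl': False,
                  'vertical_hom': False, 'singular_hom': True}))
# → ['singular homophony without vertical homophony']
```

A paradigm description that passes all three checks is one the hierarchy predicts to be possible; each failed check corresponds to a structural dependency discussed above.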

14.4 Concluding remarks We have surveyed the different morphological forms that person markers assume in the world’s languages. These forms are either independent or dependent. Dependent person markers can be weak, clitics, bound, or zero. Moreover, we have surveyed some of the common, semi-common, and rare paradigmatic structures. In so doing, we have discussed a number of structural dependencies that emerge from a wide range of paradigmatic structures attested in the world’s languages. Cysouw () has made an attempt to explain these dependencies by invoking the concept of ‘pure person’. In particular, the Explicitness Hierarchy has been discussed, in conjunction with the Horizontal Homophony Hierarchy (two versions thereof), as a theoretical model that captures the various structural dependencies attested in the rich variety of person-marking paradigmatic structures.

Study questions

1. Choose two or more languages other than English that you have access to (e.g. grammatical descriptions in your university library or international students in your class) and ascertain what morphological forms of person marking (independent vs dependent; different types of dependent person marking, i.e. weak forms, clitics, bound forms, and zero forms) are used in these languages. Also fill out the paradigmatic structure depicted in () with the person markers that you have identified for each of the person-marking paradigms in use.

2. Consider the independent pronouns from Usarufa (Eastern Highlands; Trans-New Guinea: Papua New Guinea) and construct a paradigmatic-structure model (as depicted in ()) of these independent pronouns. Discuss what kind(s) of homophony exist(s) in Usarufa independent person marking.

Table 14.4 Independent pronouns in Usarufa

        Singular    Plural
1       ke          ke
2       e           vke
3       we          ye
Which of the four groupings in (), A, B, C, or D, does the independent person-marking system in Usarufa belong to?

3. Consider the following verbal inflections across different verb forms in Spanish (Romance; Indo-European: Spain) and discuss the similarities and differences between the various person/number paradigms evident in the verbal inflections, using the paradigmatic-structure model in (). The basic verb exemplified here is vivir ‘to live’.

Table 14.5 Verb paradigms in Spanish

        Present             Past                   Future                Imperfect
        Sg       Pl         Sg        Pl           Sg       Pl           Sg       Pl
1       vivo     vivimos    viví      vivimos      viviré   viviremos    vivía    vivíamos
2       vives    vivís      viviste   vivisteis    vivirás  viviréis     vivías   vivíais
3       vive     viven      vivió     vivieron     vivirá   vivirán      vivía    vivían

        Present Subjunctive    Imperfect Subjunctive    Conditional
        Sg       Pl            Sg        Pl             Sg        Pl
1       viva     vivamos       viviera   viviéramos     viviría   viviríamos
2       vivas    viváis        vivieras  vivierais      vivirías  viviríais
3       viva     vivan         viviera   vivieran       viviría   vivirían
4. Consider how number is coded in person marking in Chalcatongo Mixtec (Mixtecan; Oto-Manguean: Mexico) and Pipil (Aztecan; Uto-Aztecan: El Salvador).

Chalcatongo Mixtec: Number marking can be done in a number of ways, including the addition of the prefix ka- to the verb and the use of the independent plural word xináʔa. Sometimes, the prefix and the independent word can be used together. Number marking is optional. Pipil: The plural number suffix -t is obligatory with non-singular or group person markers. This suffix is used exclusively with person markers and does not mark the plurality of NPs.



On the basis of the foregoing, Siewierska (: –) concludes that it would be difficult to argue that number is part of the person-marking paradigm in Chalcatongo Mixtec, while it is reasonable to regard the number suffix as part of the person-marking paradigm in Pipil. Do you agree with her analysis? Explain why you agree or disagree.

Further reading

Bhat, D. N. S. (). Pronouns. Oxford: Oxford University Press.
Cysouw, M. (). The Paradigmatic Structure of Person Marking. Oxford: Oxford University Press.
Cysouw, M. (a). ‘Inclusive/Exclusive Distinction in Independent Pronouns’, in M. S. Dryer and M. Haspelmath (eds.), The World Atlas of Language Structures Online. Leipzig: Max Planck Institute for Evolutionary Anthropology. Available at http://wals.info/chapter/.
Cysouw, M. (b). ‘Inclusive/Exclusive Distinction in Verbal Inflection’, in M. S. Dryer and M. Haspelmath (eds.), The World Atlas of Language Structures Online. Leipzig: Max Planck Institute for Evolutionary Anthropology. Available at http://wals.info/chapter/.
Siewierska, A. (). Person. Cambridge: Cambridge University Press.
Siewierska, A. (). ‘Person Marking’, in J. J. Song (ed.), The Oxford Handbook of Linguistic Typology. Oxford: Oxford University Press, –.



15 Evidentiality marking

15.1 Introduction
15.2 Morphological form of evidentiality marking
15.3 Semantic parameters of evidentiality
15.4 Typology of evidentiality systems
15.5 Evidentiality and other grammatical categories
15.6 Concluding remarks



15.1 Introduction

In English, if someone utters (or writes) (), there is no indication in the sentence itself as to how s/he has acquired the information expressed therein.

() Joseph played football.

All that is expressed in () is that someone known as Joseph engaged in a team sport called football. Thus, we are unable to tell whether the speaker herself/himself saw or heard Joseph play football. Nor are we able to say whether someone gave the speaker the information in question or whether the speaker inferred it from other information or evidence. If and when asked to indicate how s/he has acquired the information, e.g. when challenged by the addressee (How do you know?) or when instructed as a witness in a courtroom (e.g. Can you tell the court how it is that you acquired this information?), the speaker can choose to reveal the source of the information by appending additional expressions to (), as in:

()

a. Joseph played football, I saw it.
b. Joseph played football, I heard it.
c. Joseph played football, as his football boots are gone, while his shoes are left behind in his locker.
d. Joseph played football, because that’s what he always does on Saturday mornings.
e. Joseph played football, his mother told me.

As illustrated by () and (), English does not mark the source of information grammatically. In other words, (), as it stands, is a fully grammatical and totally acceptable sentence. In contrast, about one quarter of the world’s languages—North and South American, Caucasian, and Tibeto-Burman languages in particular—are said to mark the source of information explicitly (Aikhenvald : , ); the speaker must indicate how s/he has acquired the information encoded in each utterance that s/he makes. Put differently, in such languages, sentences without an explicit expression of information source are ungrammatical and extremely unnatural. In Tariana, for instance, it is not sufficient to say something equivalent to (). In fact, Tariana’s equivalent to () would be an ungrammatical sentence. Depending on the source of information, the Tariana speaker must instead add one of the five different suffixes to the verb, as exemplified in (). ()

Tariana (Inland Northern Arawakan; Arawakan: Brazil)
a. Juse iɾida di-manika-ka
   José football SG.NF-play-REC.P.VIS
   ‘José has played football.’ (we saw it)
b. Juse iɾida di-manika-mahka
   José football SG.NF-play-REC.P.NONVIS
   ‘José has played football.’ (we heard it)
c. Juse iɾida di-manika-nihka
   José football SG.NF-play-REC.P.INFR
   ‘José has played football.’ (we infer it from visual evidence)
d. Juse iɾida di-manika-sika
   José football SG.NF-play-REC.P.ASSUM
   ‘José has played football.’ (we assume this on the basis of what we already know)
e. Juse iɾida di-manika-pidaka
   José football SG.NF-play-REC.P.REP
   ‘José has played football.’ (we were told)

These suffixes are known as evidentials or evidentiality markers. In (a), the suffix -ka indicates that the information expressed in the sentence is based on the speaker’s own visual perception. In (b), the suffix -mahka identifies the speaker’s own non-visual or auditory perception as the source of information. In (c), -nihka marks the speaker’s inference, based on physical evidence, as the source of information. In (d), the speaker’s general knowledge about José’s routines is identified as the source of information, as indicated by the presence of the suffix -sika in the verb. In (e), the speaker reveals, by means of the suffix -pidaka, that s/he acquired the information from a third party (i.e. hearsay evidence). Evidentiality marking, in languages such as Tariana, is as much a grammatical requirement as case or tense marking is in many languages of the world. It was in the beginning of the twentieth century—after over three hundred years of misanalysis—that evidentiality was properly recognized for what it is, i.e. a grammatical marker of information source (Aikhenvald : ). However, it was not until the mid-s that evidentiality began to attract serious attention and it has since become one of the most intensely researched grammatical properties (e.g. Chafe and Nichols ; Willett ; de Haan ; Rooryck a, b; Aikhenvald and Dixon ; Speas a, b, ; Brugman and Macaulay ). The present chapter concerns evidentiality marking, and is based primarily on the largest cross-linguistic study of evidentiality, namely Aikhenvald (). The data used in that study come from ‘over  languages from all parts of the world’ (Aikhenvald : ).1

1 Whether the -plus languages constitute any kind of sample is not clear, as Aikhenvald does not say how she selected them. In view of this, it may be safe to conclude that Aikhenvald’s study is based on a convenience or variety sample: that is, one investigating only languages with evidentiality marking, without any pre-sampling genealogical and areal considerations.

15.2 Morphological form of evidentiality marking

Evidentiality marking comes in a wide range of morphological expressions, including affixes, clitics, particles, and auxiliary verbs. The examples from Tariana in () contain (verbal) evidential suffixes, for instance. They also illustrate that evidentiality marking is often fused with tense marking. For instance, in (a), the visual evidentiality marker -ka carries the grammatical meaning of recent past as well.



Cherokee also makes use of evidential suffixes, again fused with tense marking, as illustrated in: ()

Cherokee (Southern Iroquoian; Iroquoian: US) a. wesa u-tlis-ʌʔi cat it-run-FIRST.PST ‘A cat ran.’ (I saw it running) b. u-gahnan-eʔi it-rain-NON.FIRST.PST ‘It rained.’ (I woke up, looked out, and saw puddles of water on the ground)

Evidentials in Cuzco Quechua are clitics or, more specifically, enclitics that attach to the first constituent in the sentence. The second-position clitic status of the evidential markers in Cuzco Quechua is illustrated in: ()

Cuzco Quechua (Quechuan: Peru) huk-si ka-sqa huk machucha-piwan payacha once-REP be-SD one old.man-with woman ‘Once there were an old man and an old woman.’

Evidential markers appear in the form of particles in Mparntwe Arrernte, as exemplified in: ()

Mparntwe Arrernte (Central Pama-Nyungan; Pama-Nyungan: Australia) Pmere arrule-rle kwele ne-ke; artwe nyente . . . camp long-ago REP be-PC man one ‘A long time ago, so they (the ancestors) say, there lives a man . . .’

Cora is another language with evidential particles, as illustrated in: ()

Cora (Corachol; Uto-Aztecan: Mexico) ayáa pú núʔu tú-huʔ-u-rɨh this SBJV QUOT DISTR-NARR-COMPL-do ‘This is, they say, what took place.’

Oksapmin expresses evidentiality (in the present case, auditory sensation) by means of an auxiliary verb hah ‘do’, as in: ()

Oksapmin (isolate: Papua New Guinea) barus apri-s hah-h plane come-SEQ do-IMM.PST ‘I hear the plane coming.’ 

Note that in () the auxiliary verb, in conjunction with a verb stem marked by a sequential marker, is used to indicate non-visual sensory evidence as the source of information.

15.3 Semantic parameters of evidentiality

As illustrated in (), the evidentiality system in Tariana operates with five different distinctions of information source: visual (a), non-visual sensory (b), inference (c), assumption (d), and hearsay (e). Few languages have been found to have five or more evidentiality distinctions as clearly as Tariana has (Aikhenvald : ). To these five distinctions, Aikhenvald adds one more distinction, namely quotative (Aikhenvald : ). These six evidentiality distinctions are found to be recurrent in her language sample.2 The evidentiality distinctions are listed in Table ., along with their brief definitions (Aikhenvald : –).

Table 15.1 Evidentiality distinctions

VISUAL               information acquired through seeing
NON-VISUAL SENSORY   information acquired through hearing, and typically extended to smell and taste, and sometimes also to touch
INFERENCE            based on visible or tangible evidence or result
ASSUMPTION           based on evidence other than visible results; this may include logical reasoning, assumption, or simply general knowledge
HEARSAY              reported information with no reference to those who reported it
QUOTATIVE            reported information with an overt reference to the quoted source
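As a purely illustrative aside (not part of Aikhenvald’s analysis), the five Tariana recent-past evidential suffixes exemplified in () can be arranged as a small lookup keyed by the distinctions just defined:

```python
# Illustrative aside: the five Tariana recent-past evidential suffixes
# from example (), keyed by the evidentiality distinctions of Table 15.1.
TARIANA_RECENT_PAST = {
    'visual':     '-ka',      # the speaker saw it
    'non-visual': '-mahka',   # the speaker heard it
    'inferred':   '-nihka',   # inference from visual evidence
    'assumed':    '-sika',    # based on general knowledge
    'reported':   '-pidaka',  # the speaker was told
}

def mark(verb_stem, source):
    """Attach the evidential suffix matching the information source."""
    return verb_stem + TARIANA_RECENT_PAST[source]

print(mark('di-manika', 'reported'))  # → di-manika-pidaka
```

The point of the tabulation is simply that each utterance must select exactly one of these values; there is no suffixless, evidentially neutral option.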

2 There may be further evidentiality distinctions that need to be recognized, but this awaits further research. For instance, Plungian (: –) proposes an evidentiality distinction based on the difference between the speaker’s personal participation in a given situation and the speaker’s personal perception of a situation.

15.3.1 Visual

Visual evidentials are used to mark visually based sources of information (i.e. something seen by the speaker). Typically, visual evidentials do not extend to non-visual sensory perceptions such as hearing, smell, and taste, when there is already a non-visual sensory evidential in place. If, however, a language does not make a separate non-visual sensory distinction, visual evidentials may extend to other non-visual sensory perceptions. For instance, in Wanka Quechua the visual evidential suffix -mi can refer not only to what the speaker has seen (a) but also to what s/he has heard (b) or tasted (c). Note that in this kind of extension the visual evidential is called a direct evidential, as it no longer refers to visual evidence only.

()

Wanka Quechua (Quechuan: Peru) a. ñawi-i-wan-mi lika-la-a eye--with-DIR.EV see-PST- ‘I saw (them) with my own eyes.’ b. ancha-p ancha-p-ña-m buulla-kta lula-n too.much-GEN too.much-GEN-now-DIR.EV noise-ACC make- kada tuta-m each night-DIR.EV ‘He really makes too much noise . . . every night.’ (I hear it) c. chay-chru lurin yaku-kuna-si llalla-ku-n-mi that-LOC Lurin water-PL-also be.salty-REF--DIR.EV ‘Even the water around Lurin is salty.’ (I taste it)

In contrast, Qiang has only one sensory evidential, namely visual. When the source of information is non-visual, e.g. hearing, visual evidential cannot be used. Inferred evidential must be used instead, as in: () Qiang (Qiangic; Sino-Tibetan: China) mi ʑbə ʑete-k person drum beat-INFR ‘Someone is playing drums.’ (it seems to me from hearing a noise that sounds like drums being played) Moreover, visual evidentials may extend even further to express full responsibility for the statement being made. In typical situations, visual evidentials mark information that the speaker has visual evidence for. Thus, the speaker is in a position to bear full responsibility for her/his statement as the veracity of her/his statement is based on what s/he sees. This sense of responsibility can be extended to situations where there is actually no evidence but the speaker still wants to take full responsibility for what s/he says (‘I didn’t see it but I vouch for it with 

my life that it did happen’). In Tariana, for instance, the speaker makes a statement as if s/he had witnessed the event encoded in that statement, although s/he did not actually see it take place: ()

Tariana (Inland Northern Arawakan; Arawakan: Brazil) ma:tʃite phya-yana nu-enipe-nuku bad.NCL.ANIM you-PEJ SG-children-TOP.NON.A/S pi-hña-naka SG.eat-PRES.VIS ‘You bad one you eat my children.’

15.3.2 Non-visual sensory

Non-visuals mark information acquired through non-visual sensory perceptions such as hearing, smelling, tasting, and feeling. Oksapmin, mentioned in §., expresses non-visual sensory sources of information by means of an auxiliary verb ‘do’. The sentence in (), repeated here, is an example of evidentiality based on hearing.

() Oksapmin (isolate: Papua New Guinea)
   barus apri-s hah-h
   plane come-SEQ do-IMM.PST
   ‘I hear the plane coming.’

The same auxiliary verb is used to mark other non-visual sensory perceptions, i.e. smell (a) and feeling (b), as the basis of information source, as illustrated in:

()

Oksapmin (isolate: Papua New Guinea) a. imaah gapgwe na-ha-m pig good.smell to.me-do-SEQ hah-h-mur do-IMM.PST.SG.STATEMENT ‘Some pork is roasting (to me).’ (I smell it) b. gin sur oh mara-s hah now needle it come.in-SEQ do-IMM.PST ‘Now I feel the needle going in.’

In Tucano, non-visual sensory evidential can also refer to something that the speaker tastes (a), or feels physically (b) or emotionally (c). 

() Tucano (Tucanoan: Colombia and Brazil)
   a. ba’â-sehé akâ+yɨʼdɨa-sa’
      eat-NOMN.INAN.PL salty+very-PRES.NONVIS.NON
      ‘The food is very salty.’ (I can sense it by taste)
   b. ãhú-pẽa bãdî-de dũ’dî-dã’ weé-sa-bã
      mosquito-PL we-TOP.NONA/S bite-SUB.M AUX-PRES.NONVIS-PL
      ‘Mosquitoes are biting us.’ (we can feel it)
   c. koô etâ-kã́ yɨʼî eʼkatí-asɨ
      she arrive-SUB I be.happy-REC.PST.NONVIS.NON
      ‘When she arrived, I felt happy.’

Non-visual sensory evidentials can also extend to accidental events or events that occur beyond the speaker’s control. In Tucano, the non-visual sensory evidential suffix is used to denote accidental or uncontrollable actions, as illustrated in:

() Tucano (Tucanoan: Colombia and Brazil)
   pūúgɨ-pɨ bɨdî-diha-’asɨ
   hammock-TOP.NONA/S fall-go.down-REC.P.NONVIS.NON
   ‘I fell out of a hammock.’ (without intending to: maybe while asleep)

15.3.3 Inference and assumption

Inferred evidentials are based on visible or tangible evidence as the source of information. For instance, in (c), repeated here, it is the visible evidence of the missing football boots and the shoes left in José’s locker that enables the speaker to infer that he has played football.

()

Tariana (Inland Northern Arawakan; Arawakan: Brazil) c. Juse iɾida di-manika-nihka José football SG.NF-play-REC.P.INFR ‘José has played football.’ (we infer it from visual evidence)

In the inferred evidential, visual evidence may also include someone else’s psychological and physical states, e.g. emotions, thoughts, fatigue, when they manifest themselves overtly (e.g. an angry, pensive, or tired look on one’s face), although the speaker may not have direct access to such internal experiences. For instance, in Wanka Quechua the inferred evidential is called for when others’ psychological or physical states are described, as in:

()

Wanka Quechua (Quechuan: Peru) pishipaa-shra-chr ka-ya-nki be.tired-ATTRIB.INFR be-IPFV- ‘(Sit here;) you must be tired.’

Inference can be made not only on the basis of visible evidence but also on the basis of reasoning, as illustrated in: ()

Nganasan (Samoyedic; Uralic: Russia) T’eliʔimidʼi-ʔə-ʔ baarbə-ƌuŋ huntə-ƌuŋ i-huaƌu brake-PERF-PL master-PL authority-PL be-INFR ‘They braked (following the master’s order); (one infers that) their master was an authority for them.’

There are also languages that make a finer distinction between inference and assumption, the former based on visible or physical evidence and the latter based on general knowledge. Tariana, cited in (), is one such language. The sentences in (c) and (d), in particular, demonstrate nicely the distinction between inference and assumption.

() Tariana (Inland Northern Arawakan; Arawakan: Brazil)
   c. Juse iɾida di-manika-nihka
      José football SG.NF-play-REC.P.INFR
      ‘José has played football.’ (we infer it from visual evidence)
   d. Juse iɾida di-manika-sika
      José football SG.NF-play-REC.P.ASSUM
      ‘José has played football.’ (we assume this on the basis of what we already know)

In (c), the speaker draws the inference from physical evidence, while in (d), the speaker draws the same inference from her/his general knowledge about what José does, for instance, on Saturday mornings. When we draw an inference, the strength of that inference cannot be greater than the veracity of something that we actually saw. This difference may explain why in some languages inferred evidentials may extend to the expression of probability, doubt, uncertainty, and lack of personal responsibility. In Wanka Quechua, for instance, the inferred evidential may be used to express likelihood or probability, as in:

() Wanka Quechua (Quechuan: Peru)
   aa tardi-man chraa-mu-n tardi-laa-chra
   yes late-GOAL arrive-TRANSLOC- late-yet-INFR
   ‘Yes, he will probably arrive later.’

15.3.4 Hearsay and quotative

While no languages are said to have two visual or two non-visual sensory evidentials, languages with two reported evidentiality distinctions are not uncommon (Aikhenvald : ). The two most frequently attested reported evidentiality distinctions are hearsay and quotative, and the difference between hearsay and quotative is commonly attested in North American Indian languages (Aikhenvald : ). Hearsay involves information that the speaker provides without specifying the exact source of the report, while quotative involves information containing an overt reference to the source of the quoted report (It is said that John Key will resign as Prime Minister on Monday vs Bill English said ‘John Key will resign as Prime Minister on Monday’). In Tariana, one and the same reported evidential can be used, as it does not make the finer distinction between hearsay and quotative. Thus, the sentence in (e), with reported evidential, is vague as to who actually provided the information in the first place.

()

Tariana (Inland Northern Arawakan; Arawakan: Brazil) e. Juse iɾida di-manika-pidaka José football SG.NF-play-REC.P.REP ‘José has played football.’ (we were told)

In contrast, Cora has distinct hearsay and quotative evidentiality markers, as illustrated in: () Cora (Corachol: Uto-Aztecan: Mexico) y-én peh yée wa-híhwa mwáa, here-top .SUBR QUOT COMPL-yell SG yáa pú nú’u hí tyí-r-aa-ta-hée PROCOMP SBJV REP SEQ DISTR-DISTR.SG-COMPL-PERF-tell ‘“From right up on top here, you will call out loud and clear”, that is what she called on him to do.’ In (), the quoted report itself is marked by the quotative evidential yée (so that it is clear who is the exact author of the quoted report), 

while the sentence containing the quoted report is marked by the hearsay evidential nú’u (i.e. () is taken from a story, the author of which is left unspecified). The hearsay evidentiality marker is used on its own when the speaker is reporting something without indicating the exact source of the report, as in: ()

Cora (Corachol; Uto-Aztecan: Mexico) ayáa pá nú’u tyú-hu’-u-rí’̵ h thus SBJV HEARSAY DISTR-NARR-COMPL-do ‘This is, it is said, what took place.’

Since hearsay is based on what the speaker is told by someone else, there is an understanding that the speaker may not be held responsible for what is being reported. When challenged, the speaker will always have the option of pointing out that the information came from someone else. In other words, there are overtones of lack of reliability or even lack of veracity in the information provided in the statement. In some languages, indeed, use of hearsay evidentials may imply that the speaker does not vouch for the veracity of information. This is clearly manifested in the following sentence in Mparntwe Arrernte. ()

Mparntwe Arrernte (Central Pama-Nyungan; Pama-Nyungan: Australia) the kwele re-nhe twe-ke SG.A REP SG-ACC hit/kill-PST-COMPL ‘I am reported to have killed him (I didn’t).’

In (), the speaker is calling into question an accusation made by others, and s/he is indicating this by using the reported evidential particle kwele. Similarly, in Korean the hearsay evidential is used to express the speaker’s negative attitude to, or dismissal of, what is reported, as in: ()

Korean (isolate: Korea) caki-hakkyo-ka ipen-ey wusungha-keyss-t-ay. self-school-NOM this.time-LOC win-FUT-INT-HEARSAY wus-ki-nta, wus-ki-e laugh-CAUS-PLAIN.IND laugh-CAUS-INTIMATE.IND ‘Their school is going to win this time, they say. They’re making me laugh, they’re making me laugh.’ 

15.3.5 Order of preference in evidentials

So far, we have implicitly assumed that the speaker may have only one source of information at a time. In reality, however, this may not always be the case. The speaker may instead happen to have more than one source of information simultaneously. For instance, when the speaker witnessed José playing football (see ()), s/he not only saw him playing football but also heard him playing football. In other words, the speaker has both visual and non-visual sensory evidence for talking about José’s playing football. In such cases of multiple sources of information, the speaker has to choose one evidentiality marker. Cross-linguistically, it is always the case that visual evidence takes priority over all other types of evidence, e.g. visual over non-visual sensory. When there is no visual evidence but there is evidence from all other information sources, non-visual sensory evidence is preferred over the other types of evidence. This kind of preference among multiple sources of information can be captured in a hierarchy of preferred evidentials. For instance, Aikhenvald (: ) reports that Tariana operates a hierarchy of preferred evidentials, as in:

() VISUAL < NON-VISUAL < INFERRED < REPORTED < ASSUMED

The hierarchy shows that a visual report is preferred to a non-visual report. For instance, the statement ‘I saw him play football’ would be stronger or more reliable than the statement ‘I heard him play football’ if we were to ascertain whether José played football or not. The first statement indicates that the speaker not only saw but also heard him play football, whereas the second statement indicates that the speaker heard him play football without seeing him do so. Moreover, first-hand evidence—subsuming both visual and non-visual sensory—is preferred to inference. To begin with, inference cannot be as strong as first-hand evidence.
Inference, which relies on visual evidence, is preferred to reported evidence, which in turn is preferred to assumed. Reported evidence is second-hand evidence; it is not something that the speaker has observed first-hand. The speaker is reporting something that someone else told her/him. The assumed evidential is used in the absence of any visual evidence, first-hand or second-hand, and must thus be based on general or previous knowledge (e.g. José must have played football 

as that’s what he does on Saturday mornings). To wit, the hierarchy of preferred evidentials may be interpreted in terms of the factual strength of information (i.e. visual evidence is factually the strongest while assumption is factually the least strong). This is presented schematically in:

() VISUAL < NON-VISUAL < INFERRED < REPORTED < ASSUMED
   (factually strongest on the left, factually least strong on the right)
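The preference logic described above can be sketched as a simple selection function. This is an illustration only (the source labels are informal, not glossing conventions): given the set of information sources available to the speaker, it picks the leftmost one on the Tariana hierarchy.

```python
# Illustrative sketch of the Tariana hierarchy of preferred evidentials:
# when the speaker has several information sources at once, the leftmost
# (factually strongest) one on the hierarchy is the one marked.
PREFERENCE = ['visual', 'non-visual', 'inferred', 'reported', 'assumed']

def preferred_evidential(sources):
    """Return the highest-priority information source available."""
    for kind in PREFERENCE:
        if kind in sources:
            return kind
    raise ValueError('no recognized information source')

# The speaker both saw and heard the event: visual takes priority.
print(preferred_evidential({'visual', 'non-visual'}))  # → visual
```

The ordered list makes the claim in the text concrete: evidential choice under multiple sources is deterministic, decided entirely by position on the hierarchy.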

15.4 Typology of evidentiality systems

Aikhenvald’s () typology of evidentiality systems is based on the number of evidentiality distinctions grouped together into an evidentiality system. There are basically four types: two-term system; three-term system; four-term system; and five-or-more-term system. Each of these types has different subtypes, depending on what evidentiality distinctions (Table .) are assembled into an evidentiality system. Each n-term system and its subtypes will be described in this section, together with information on exemplifying languages. (It is not possible to give statistical data because Aikhenvald () describes frequencies by using non-numerical quantifying expressions such as ‘numerous’, ‘many’, and ‘a few’.)

15.4.1 Two-term systems

Aikhenvald (: –) identifies as many as five two-term systems:

(i) First-hand and Non-first-hand;
(ii) Non-first-hand vs ‘Everything else’;
(iii) Reported vs ‘Everything else’;
(iv) Sensory and Reported;
(v) Auditory vs ‘Everything else’.

We will gloss over the last two systems because the typological status of (iv) is doubtful for it involves language obsolescence, and because (v) is found in only one (dying) language in her sample, namely Yuchi (isolate: US). The two-term system in (i) makes a distinction between first-hand and non-first-hand evidence. First-hand evidence is acquired through 

seeing, hearing, or other senses. Cherokee provides an example of this two-term system (also ()). () Cherokee (Southern Iroquoian; Iroquoian: US) a. uhyʌdla u-nolʌn-ʌʔi cold it-blow-FIRST.PST ‘A cold wind blew.’ (I felt the wind) b. u-wonis-eʔi he-speak-NONFIRST.PST ‘He spoke.’ (someone told me) The two-term system in (i) is attested in North and South American Indian languages, as well as in languages in Eurasia, e.g. Northeast Caucasian and Finno-Ugric. The two-term system in (ii) involves non-first-hand evidence, that is, evidence acquired through non-visual senses, hearsay, or inference. For instance, the Turkish non-first-hand evidential suffix marks hearsay (a), inference (b), and auditory perception (c). () Turkish (Turkic; Altaic: Turkey) a. bakan hasta-ymɪș minister sick-NONFIRST.COP ‘The minister is reportedly sick.’ (said by someone told about the sickness) b. uyu-muș-um sleep-NONFIRST.PST-SG ‘I have obviously slept.’ (said by someone who has just woken up) c. iyi çal-ɪyor-muș good play-INTRATERM.ASP-NONFIRST.COP ‘She is, as I hear, playing well.’ (said by someone listening to her play) The two-term system in (ii) is found in many other Turkic languages as well as in Kartvelian and Uralic languages. The two-term system in (iii) contrasts information acquired through hearsay and all other information. This two-term system is reported to be widespread in the world, especially in many Tibeto-Burman and South American languages, and in a number of North American languages. The system is also said to exist in a few African, Australian, and Western Austronesian languages. The two-term system in question is illustrated here by Estonian. 

()  Estonian (Finnic; Uralic: Estonia)
    a. Ta on aus mees
       he is honest man
       ‘He is an honest man.’
    b. Ta olevat aus mees
       he be.REP.PRES honest man
       ‘He is said to be an honest man.’

15.4.2 Three-term systems

Aikhenvald (: –) identifies five three-term systems:

(i) Direct (or Visual), Inferred, and Reported;
(ii) Visual, Non-visual sensory, and Inferred;
(iii) Visual, Non-visual sensory, and Reported;
(iv) Non-visual sensory, Inferred, and Reported;
(v) Reported, Quotative, and ‘Everything else’.

In the first three-term system, (i), direct can be based either on visual evidence or on sensory evidence more generally (usually visual or auditory). Wanka Quechua provides an example of this three-term system.

()  Wanka Quechua (Quechuan: Peru)
    a. Chay-chruu-mi achka wamla-pis walashr-pis alma-ku-lkaa-ña
       this-LOC-DIR.EV many girl-too boy-too bathe-REFL-IPFV.PL-NARR.PST
       ‘Many girls and boys were swimming.’ (I saw them)
    b. Daañu pawa-shra-si ka-ya-n-chr-ari
       field finish-PART-EVEN be-IPFV--INFR-EMPH
       ‘The field might be completely destroyed.’ (I infer)
    c. Ancha-p-shi wa’a-chi-nki wamla-a-ta
       too-much-GEN-REP cry-CAUS- girl--ACC
       ‘You make my daughter cry too much.’ (they tell me)

This three-term system is typically found in all Quechua languages. Other languages with this kind of three-term system are Amdo Tibetan (Bodic; Sino-Tibetan: China), Bora (Boran; Huitotoan: Colombia and Peru), Maidu (Maiduan; Penutian: US), Ponca (Dhegihan; Siouan: US),


Qiang (Qiangic; Sino-Tibetan: China), Shilluk (Nilotic; Eastern Sudanic: Sudan), and Shasta (Hokan: US). The three-term system in (ii) is said to be at work in Washo (isolate: US) and possibly in Siona (Tucanoan: Ecuador and Colombia) (Aikhenvald : ). The three-term system in (iii), with visual, non-visual sensory, and reported, is attested in Oksapmin, Maricopa (Yuman; Hokan: US), and Dulong (Nungish; Sino-Tibetan: China). Examples of the system come from Oksapmin: visual (a), non-visual sensory (b), and reported (c). Note that the visual evidential is formally unmarked (i.e. zero marking).

()  Oksapmin (isolate: Papua New Guinea)
    a. yot haan ihitsi nuhur waaihpaa
       two men they.two we went.down
       ‘Two other men and I went down.’ (I saw it)
    b. imaah gapgwe na-ha-m hah-h-mur
       pig good.smell to.me-do-SEQ do-IMM.PST.SG.STATEMENT
       ‘Some pork is roasting (to me).’ (I smell it)
    c. Haperaapnong mahan kuu gaamin tit pipaa-ri
       Haperap.to over.there woman husband.and.wife one went-REP
       ‘A husband and a wife went (reportedly) over there to Haperap.’

The three-term systems in (ii) and (iii) are said to be relatively uncommon (Aikhenvald : ). The three-term system in (iv) is built on the distinctions between non-visual sensory, inferred, and reported. Enets (Samoyedic; Uralic: Russia), Retuarã (Tucanoan: Colombia), and Nganasan are languages that have this evidentiality system. In Nganasan, the non-visual sensory evidentiality marker can cover not only auditory (a) but also olfactory (b) and tactile (c) sensations, as in:

()  Nganasan (Samoyedic; Uralic: Russia)
    a. Nogutəmunu-t’i miiʔa
       come.close-NONVIS.EV-DU here
       ‘The two of them are coming close.’ (one can hear them come)
    b. Ma-tənu hihiə koli ńeluaj-müńü-t’u
       house-LOC boiled fish feel/smell-NONVIS.EV-SG
       ‘There is a smell of boiled fish in the house.’


    c. . . . kobtua ŋatə-munu-tʼu nəndʼi-tiə maʔ
       . . . girl found-NONVIS.EV-SG stand-PART.PRES house
       ‘. . . a girl (who has left her house during a snowstorm and cannot see anything) felt (i.e. found by feeling) a standing house’.

The inferred evidentiality marker, illustrated earlier, is repeated here (a), along with the reported evidentiality marker (b).

()  Nganasan (Samoyedic; Uralic: Russia)
    a. T’eliʔimidʼi-ʔə-ʔ baarbə-ƌuŋ huntə-ƌuŋ i-huaƌu
       brake-PERF-PL master-PL authority-PL be-INFR
       ‘They braked (following the master’s order); (one infers that) their master was an authority for them.’
    b. Munə-ʔ: ‘Sünəƌiʔ Nənikü mənə kontu-nantu-baŋhu’
       say-IMP (name) I(-ACC) take.away-VOL-REP
       ‘Say (to your brother): “Sünəƌiʔ Nənikü wants to take me away, reported”.’

The remaining three-term evidentiality system, i.e. (v), makes a finer distinction between reportative and quotative, as opposed to ‘everything else’. In other words, this three-term system is somewhat akin to one of the two-term systems, namely (iii) in §.., although reported information is further assessed in terms of whether it comes from a specified or an unspecified source. This three-term system is attested in a few North American Indian languages, as illustrated in:

()  Comanche (Numic; Uto-Aztecan: US)
    a. hãã me-se sutɨ= patsi
       yes QUOT-CNTR that.one older.sister
       ‘The older sister said, “yes”.’
    b. sutɨ=-se ‘yes’ me-kɨ
       that.one-CNTR yes QUOT-NARR.PST
       ‘He (Coyote) said “yes”, it is said.’

In (a), the quotative particle me is used to mark a direct quotation (‘The older sister said’). In (b), the same quotative particle co-occurs with the narrative past particle kɨ, as a direct quotation is included in a story told in the narrative past (i.e. reported). In Comanche, the narrative past particle is used to mark information that the speaker acquired from folktales and other events that s/he learned about from other people.


15.4.3 Four-term systems

Three different four-term systems have been identified in Aikhenvald’s () study:

(i) Visual, Non-visual sensory, Inferred, Reported;
(ii) Direct (or Visual), Inferred, Assumed, Reported;
(iii) Direct, Inferred, Reported, Quotative.

One interesting thing about these four-term systems is that if there is only one sensory distinction in the evidentiality system, one of two situations obtains: the evidentiality system makes a finer distinction either between inference and assumption or between reported and quotative. Conversely, if there are two sensory distinctions, the evidentiality system makes a first-order distinction between inferred and reported. The four-term system in (i) is illustrated by Tucano:

()  Tucano (Tucanoan: Colombia and Brazil)
    a. diȃyɨ wa’î-re yaha-ámi
       dog fish-TOP.NON.A/S steal-REC.P.VIS.SGNF
       ‘The dog stole the fish.’ (I saw it)
    b. diȃyɨ wa’î-re yaha-ásĩ
       dog fish-TOP.NON.A/S steal-REC.P.NONVIS.SGNF
       ‘The dog stole the fish.’ (I heard the noise)
    c. diȃyɨ wa’î-re yaha-ápĩ
       dog fish-TOP.NON.A/S steal-REC.P.INFR.SGNF
       ‘The dog stole the fish.’ (I inferred it)
    d. diȃyɨ wa’î-re yaha-ápɨʼ
       dog fish-TOP.NON.A/S steal-REC.P.REP.SGNF
       ‘The dog stole the fish.’ (I have learnt it from someone else)

Eastern Pomo (Pomoan; Hokan: US), Hupa (Athapaskan; Na-Dene: US), Ladakhi (Bodic; Sino-Tibetan: India), and Shibacha Lisu (Burmese-Lolo; Sino-Tibetan: China) are also languages with this kind of four-term evidentiality system. The second four-term system, (ii), is attested in languages such as Shipibo-Konibo (Panoan: Peru), Tsafiki (Barbacoan: Ecuador), Yanomámi (Yanomam: Brazil), and Mamaindé, and is exemplified in:


()

Mamaindé (Nambikuaran: Brazil) a. wa3kon3-Ø-na2hẽ3-la2 work-SG-VIS.PST-PERF ‘He worked (yesterday; I saw him).’ b. wa3kon3-Ø-nũ2hẽ3-la2 work-SG-INFR.PST-PERF ‘He worked (yesterday; I inferred this based on visual evidence).’ kai3l-a2 c. ti3ka3l-a2 anteater-DEF ant-DEF yain-Ø-te2ju2hẽ3-la2 eat-SG-GENERALKNOWLEDGE.EV-PERF ‘The anteater habitually eats ants (I know this as general knowledge).’ d. wa3kon3-Ø-ta1hxai2-hẽ3-la2 work-SG-REP.PST-PERF ‘He worked (I was told).’

The last four-term evidentiality system comes with a finer distinction between reported and quotative, as illustrated in:

()  Cora (Corachol; Uto-Aztecan: Mexico)
    a. a’ac̃ú ku rí̵’ɨ na-a-rí̵’h
       somewhat DIR.EV well me-COMPL-do
       ‘It made me a little better.’
    b. ah pú-’i há’a=hi-(y)a’-a-káa-va-cɨ séin ɨ tyas̃ka
       then SBJV-SEQ be=NARR-away-outside-down-fall-PST INFR ART scorpion
       ‘Apparently the scorpion dropped down from there.’
    c. ayáa pá nú’u tyú-hu’-u-rí̵’h
       thus SBJV HEARSAY DISTR-NARR-COMPL-do
       ‘This is, it is said, what took place.’
    d. y-én peh yée wa-híhwa mwáa, yáa pú nú’u hí tyí-r-aa-ta-hée
       here-top .SUBR QUOT COMPL-yell SG PROCOMP SBJV REP SEQ DISTR-DISTR.SG-COMPL-PERF-tell
       ‘“From right up on top here, you will call out loud and clear”, that is what she called on him to do.’


Other languages with this type of four-term evidentiality system are Northern Embera (Choco: Colombia) and Southern Tepehuan (Tepiman; Uto-Aztecan: Mexico).

15.4.4 Languages with more than four evidentiality distinctions

Languages with five or more evidentiality distinctions do not seem to be abundant. In fact, Aikhenvald (: ) comments that few languages of this kind have been clearly shown to exist. One such language has already been illustrated in (), reproduced here.

()  Tariana (Inland Northern Arawakan; Arawakan: Brazil)
    a. Juse iɾida di-manika-ka
       José football SG.NF-play-REC.P.VIS
       ‘José has played football.’ (we saw it)
    b. Juse iɾida di-manika-mahka
       José football SG.NF-play-REC.P.NONVIS
       ‘José has played football.’ (we heard it)
    c. Juse iɾida di-manika-nihka
       José football SG.NF-play-REC.P.INFR
       ‘José has played football.’ (we infer it from visual evidence)
    d. Juse iɾida di-manika-sika
       José football SG.NF-play-REC.P.ASSUM
       ‘José has played football.’ (we assume this on the basis of what we already know)
    e. Juse iɾida di-manika-pidaka
       José football SG.NF-play-REC.P.REP
       ‘José has played football.’ (we were told)

Foe (Kutubuan; Trans-New Guinea: Papua New Guinea) is reported to have six evidentiality distinctions: participatory (i.e. speaker’s participation; see Plungian : –), visual, non-visual, deductive, visual evidence, and previous evidence. Other languages with five or more evidentiality distinctions are Desano (Tucanoan: Colombia and Brazil), Kashaya (Pomoan; Hokan: US), Nambikuára (Nambikuaran: Brazil), Tuyuca (Tucanoan: Colombia and Brazil), and Wintu (Wintuan; Penutian: US). 


15.4.5 Multiple evidentiality subsystems

The various evidentiality systems surveyed in the preceding sections each tend to constitute a single grammatical system, but more often than not languages may make use of more than one (sub)system. Evidence of the existence of multiple subsystems may come from the fact that evidentiality marking appears in different morphological forms. For instance, in Diyari the sensory evidential is a suffix (i.e. -ku) whereas the reported is a clitic (i.e. pin̪t̪i), as in:

()  Diyari (Central Pama-Nyungan; Pama-Nyungan: Australia)
    a. ŋapa t̪alar̩a wakar̩a-l̪a ŋana-yi-ku
       water rain.ABS come-FUT AUX-PRES-SENS.EV
       ‘It looks/feels/smells like rain will come.’
    b. pin̪t̪i n̪awa wakar̩a-yi
       REP SG.FUT.S come-PRES
       ‘They say he is coming.’

Other evidence may come from the fact that two evidentials co-occur in one and the same clause, as in:

()  Qiang (Qiangic; Sino-Tibetan: China)
    oh the: ʑbə ʑete-k-u
    oh SG drum beat-INFR-VIS
    ‘Oh, he WAS playing a drum.’

The situation expressed in () is one in which the speaker initially drew an inference, which s/he was able to confirm on the basis of what s/he saw subsequently: ‘I could hear someone playing a drum in the gym, and when I went over there, I saw John playing a drum’.

15.5 Evidentiality and other grammatical categories

There may be grammatical constraints on the distribution of evidentiality marking (also Aikhenvald  for an updated treatment). For instance, the number of evidentiality distinctions may differ from one clause type to another. There are certainly languages that maintain the same number of evidentiality distinctions in statements and questions, e.g. Nganasan (Samoyedic; Uralic: Russia), Qiang (Qiangic; Sino-Tibetan:


China), Quechua (Quechuan: Bolivia, Ecuador, and Peru), and Tsafiki (Barbacoan: Ecuador). As Aikhenvald (: ) reports, however, in the majority of her sample languages more evidentiality distinctions are maintained in statements than in any other clause type, e.g. questions and commands. In other words, there are certain constraints on evidentiality marking in questions or commands that do not apply in statements. Tucano (Tucanoan: Colombia and Brazil) operates with a four-term system for statements (visual, non-visual sensory, inferred, and reported), a three-term system for questions (visual, non-visual sensory, and inferred), and a two-term system for commands (reported vs ‘everything else’). Similarly, Tariana (Inland Northern Arawakan; Arawakan: Brazil) uses a five-term system for statements (visual, non-visual sensory, inferred, assumed, and reported), a three-term system for questions and apprehensive clauses (visual, non-visual sensory, and inferred), and two-term systems for commands (reported vs ‘everything else’) and purposive clauses (first-hand and non-first-hand).

Moreover, dependent clauses such as relative clauses, complement clauses, and other subordinate clauses never have more evidentiality distinctions than main clauses. Indeed, some languages allow no evidentiality distinctions in dependent clauses, as in Abkhaz (Northwest Caucasian: Georgia), Baniwa (Alto-Orinoco; Arawakan: Brazil and Venezuela), Eastern Pomo (Pomoan; Hokan: US), Fasu (Trans-New Guinea: Papua New Guinea), and Turkic languages. There is a grammatical reason why dependent clauses allow only a reduced number of evidentiality distinctions, if any at all. Evidentiality marking tends to be fused with tense/person marking, and dependent clauses show reduced or no tense/person marking due to their reduced clausehood. Not surprisingly, evidentiality marking, in turn, behaves similarly in dependent, as opposed to main, clauses.
Generally speaking, languages tend to have fewer evidentiality distinctions in a non-past tense than in a past tense. In point of fact, many languages—e.g. Eastern Pomo (Pomoan; Hokan: US), Qiang (Qiangic; Sino-Tibetan: China), Tariana (Inland Northern Arawakan; Arawakan: Brazil)—do not make evidentiality distinctions in the future. This makes sense: while one can be witness to events that took place in the past, one cannot be witness to events that have not yet taken place. This is probably also why evidential marking is likely to develop additional overtones when used in the future tense, since one cannot acquire direct or first-hand evidence of an event that has not yet happened. For instance,


in Shipibo-Konibo (Panoan: Peru) the direct evidential -ra with future tense expresses certainty rather than first-hand evidence. There is a further complication with evidentiality marking in the future tense. In Wanka Quechua, how a direct evidential in the future tense is interpreted depends on person. The direct evidential with third person expresses the speaker’s certainty about the event taking place, while the same evidential with first person indicates the speaker’s intention or determination to carry out some action. These different interpretations are illustrated in:

()  Wanka Quechua (Quechuan: Peru)
    a. kuti-mu-n’a-m
       return-TRANSLOC--FUT-DIR.EV
       ‘(When brother Luis arrived, he said to me) She will return.’
    b. agulpis-si ya’a ma’a-shrayki-m
       hitting-even I beat->.FUT-DIR.EV
       ‘I’ll even beat it [the truth] out of you.’

Moreover, different evidentiality distinctions may have different tense distinctions. For instance, in Nganasan (Samoyedic; Uralic: Russia), the non-visual and inferred evidentials do not make any tense distinction, in contrast to the reported evidential, which makes a distinction between future and non-future. In Tucano (Tucanoan: Colombia and Brazil), inferred and reported evidentials have recent past and remote past tense forms but no present tense forms. The other evidentials, i.e. visual and non-visual, in contrast, do have tense distinctions in the present, recent past, and remote past.

15.6 Concluding remarks

In a good number of languages, it is not grammatically sufficient to provide information without indicating how that information has been acquired. In these languages, the speaker must indicate the source of information by marking the information according to the way the source of information or knowledge is categorized in her/his language. This grammatical means of marking information source is known as evidential or evidentiality marking. Roughly speaking, one in four languages in the world is said to have evidentiality marking. While it is not uncommon, evidentiality marking is certainly not common in the


world’s languages. To some readers, therefore, it may come as a surprise that evidentiality has received so much attention from linguists over the last two or three decades. This is possibly because evidentiality is such an exotic grammatical category from the perspective of major European languages, including English, and also probably because it is a most direct reflection of the way the human mind categorizes information sources.

In this chapter, the most recurrent evidentiality distinctions in the world’s languages have each been discussed with supporting examples. Moreover, a typology of evidentiality systems, based on the number of evidentiality distinctions made, has been outlined, together with data from exemplifying languages. Finally, a brief discussion of grammatical constraints on the distribution of evidentiality marking has also been provided.

Study questions

1. Consider the following English sentences and see what evidential meaning is expressed in each of them. Also explain why English is thought to lack an evidentiality system despite the fact that it can freely express various evidential meanings.

(1) Apparently, the singer shot the deputy.
(2) The singer, it seems, shot the deputy.
(3) The singer, we assume, shot the sheriff.
(4) The singer reportedly shot the sheriff.
(5) The singer shot the sheriff, I heard it in my room.
(6) The singer, the deputy told me, shot the sheriff.

2. One of the meaning extensions of evidentiality is the expression of the speaker’s evaluation of new information as unexpected and/or surprising. DeLancey’s () work is instrumental in recognizing this aspect of the speaker’s evaluation of information as a full-fledged grammatical category in some languages, not just as an extension of evidentiality. This ‘new’ grammatical category, as exemplified in () below, is referred to in the literature as mirativity.

()  Lhasa Tibetan (Bodic; Sino-Tibetan: China)
    ṅar dṅul tog-tsam h̩dug
    me-OBL money some exist-TES
    ‘I have some money (quite to my surprise).’




DeLancey’s position is not entirely uncontroversial, however. In particular, Hill () challenges the concept of mirativity, with special reference to the sensory evidential h̩dug in Lhasa Tibetan, arguing that it does not exist as a grammatical category. DeLancey (), in turn, rejects Hill’s analysis, insisting on mirativity as a grammatical category. Evaluate these two opposing positions, decide which position you find more convincing, and explain why. To assist with your answer, see DeLancey (, ) and Hill ().

3. Choose two or more evidentiality-marking languages that you have access to (e.g. grammatical descriptions in your university library or international students in your class) and determine what type of evidentiality system (including semantic parameters of evidentiality) they employ. Also discuss grammatical constraints on the distribution of evidentiality marking in terms of clause types, tense, and aspect.

4. Find out how the languages investigated in Question  mark dreams and supernatural phenomena (e.g. evil spirits, shamanic powers, visions, and experiences) in terms of evidentiality, that is, evidential marking used in recounting dreams and describing supernatural phenomena.

Further reading

Aikhenvald, A. Y. (). ‘Evidentiality in Typological Perspective’, in A. Y. Aikhenvald and R. M. W. Dixon (eds.), Studies in Evidentiality. Amsterdam: John Benjamins, –.
Aikhenvald, A. Y. (). Evidentiality. Oxford: Oxford University Press.
Aikhenvald, A. Y. (). ‘Evidentials: Their Links with Other Grammatical Categories’, Linguistic Typology : –.
Brugman, C. and Macaulay, M. (). ‘Characterizing Evidentiality’, Linguistic Typology : –.
Jacobsen, Jr., W. H. (). ‘The Heterogeneity of Evidentials in Makah’, in W. Chafe and J. Nichols (eds.), Evidentiality: The Linguistic Coding of Epistemology. Norwood, NJ: Ablex, –.
Plungian, V. (). ‘Types of Verbal Evidentiality Marking: An Overview’, in G. Diewald and E. Smirnova (eds.), Linguistic Realization of Evidentiality in European Languages. Berlin: Mouton de Gruyter, –.
Willett, T. (). ‘A Cross-Linguistic Survey of the Grammaticalization of Evidentiality’, Studies in Language : –.




References

Abney, S. R. (). ‘The Noun Phrase in Its Sentential Aspect’, Ph.D. diss., MIT, Cambridge, MA.
Aikhenvald, A. Y. (). ‘Evidentiality in Typological Perspective’, in Aikhenvald and Dixon (: –).
Aikhenvald, A. Y. (). Evidentiality. Oxford: Oxford University Press.
Aikhenvald, A. Y. (). ‘Evidentials: Their Links with Other Grammatical Categories’, Linguistic Typology : –.
Aikhenvald, A. Y. and Dixon, R. M. W. (eds.) (). Studies in Evidentiality. Amsterdam: John Benjamins.
Anderson, L. B. (). ‘The “Perfect” as a Universal and Language-Particular Category’, in P. Hopper (ed.), Tense-Aspect: Between Semantics and Pragmatics. Amsterdam: John Benjamins, –.
Ariel, M. (). Accessing Noun Phrase Antecedents. London: Croom Helm.
Bakker, P. (). ‘Language Sampling’, in Song (: –).
Bell, A. (). ‘Language Samples’, in Greenberg et al. (: –).
Bhat, D. N. S. (). Pronouns. Oxford: Oxford University Press.
Bickel, B. (). ‘Introduction: Person and Evidence in Himalayan Languages’, Linguistics of the Tibeto-Burman Area : –.
Bickel, B. (). ‘Typology in the st Century: Major Current Developments’, Linguistic Typology : –.
Bickel, B. (a). ‘A Refined Sampling Procedure for Genealogical Control’, Sprachtypologie und Universalienforschung : –.
Bickel, B. (b). ‘On the Scope of the Referential Hierarchy in the Typology of Grammatical Relations’, in G. G. Corbett and M. Noonan (eds.), Case and Grammatical Relations: Studies in Honor of Bernard Comrie. Amsterdam: John Benjamins, –.
Bickel, B. (). ‘Grammatical Relations Typology’, in Song (: –).
Bickel, B. and Nichols, J. (). ‘Case Marking and Alignment’, in A. Malchukov and A. Spencer (eds.), The Oxford Handbook of Case. Oxford: Oxford University Press, –.
Blake, B. J. (). Relational Grammar. London: Routledge.
Blake, B. J. (). Case. nd edn. Cambridge: Cambridge University Press.
Blansitt, E. L. (). ‘Dechticaetiative and Dative’, in F. Plank (ed.), Objects. London: Academic Press, –.


Blansitt, E. L. (). ‘Datives and Allatives’, in M. Hammond, E. Moravcsik, and J. Wirth (eds.), Studies in Syntactic Typology. Amsterdam: John Benjamins, –.
Blevins, J. (). ‘The Syllable in Phonological Theory’, in J. Goldsmith (ed.), The Handbook of Phonological Theory. Oxford: Blackwell, –.
Blevins, J. (). Evolutionary Phonology: The Emergence of Sound Patterns. Cambridge: Cambridge University Press.
Boas, F. (). ‘Introduction’, in F. Boas, Handbook of American Indian Languages. Washington, DC: Bureau of American Ethnology/Government Printing Office, –.
Bopp, F. (). ‘Über J. Grimm’s deutsche Grammatik’, repr. in F. Bopp, Vocalismus. Berlin: Nicolaische Buchhandlung, –.
Breen, G. and Pensalfini, R. (). ‘Arrernte: A Language with No Syllable Onsets’, Linguistic Inquiry : –.
Brugman, C. and Macaulay, M. (). ‘Characterizing Evidentiality’, Linguistic Typology : –.
Bybee, J. L. (). Morphology: A Study of the Relation between Meaning and Form. Amsterdam: John Benjamins.
Bybee, J. L. (). ‘Markedness: Iconicity, Economy and Frequency’, in Song (: –).
Bybee, J. L., Perkins, R., and Pagliuca, W. (). The Evolution of Grammar: Tense, Aspect, and Modality in the Languages of the World. Chicago: University of Chicago Press.
Campbell, L. (). Review of J. H. Greenberg (), Language in the Americas, Language : –.
Campbell, L. (). Historical Linguistics: An Introduction. rd edn. Cambridge, MA: MIT Press.
Campbell, L., Kaufman, T., and Smith-Stark, T. C. (). ‘Meso-America as a Linguistic Area’, Language : –.
Cavalli-Sforza, L. L. (). Genes, Peoples, and Languages. Berkeley: University of California Press.
Chafe, W. and Nichols, J. (eds.) (). Evidentiality: The Linguistic Coding of Epistemology. Norwood, NJ: Ablex.
Chomsky, N. (). ‘Remarks on Nominalization’, in R. A. Jacobs and P. S. Rosenbaum (eds.), Readings in English Transformational Grammar. Waltham, MA: Ginn, –.
Chomsky, N. (). ‘On Cognitive Structures and Their Development: A Reply to Piaget’, in M. Piattelli-Palmarini (ed.), Language and Learning: The Debate between Jean Piaget and Noam Chomsky. Cambridge, MA: Harvard University Press, –.




Chomsky, N. and Lasnik, H. (). ‘The Theory of Principles and Parameters’, in J. Jacobs, A. von Stechow, W. Sternefeld, and T. Vennemann (eds.), Syntax: An International Handbook of Contemporary Research. Berlin: Walter de Gruyter, –.
Clark, L. (). Turkmen Reference Grammar. Wiesbaden: Harrassowitz Verlag.
Clements, G. N. (). ‘The Role of the Sonority Cycle in Core Syllabification’, in J. Kingston and M. E. Beckman (eds.), Papers in Laboratory Phonology I: Between the Grammar and Physics of Speech. Cambridge: Cambridge University Press, –.
Collinge, N. E. (). ‘History of Historical Linguistics’, in E. F. K. Koerner and R. E. Asher (eds.), Concise History of the Language Sciences: From the Sumerians to the Cognitivists. Oxford: Pergamon, –.
Comrie, B. (). ‘Causatives and Universal Grammar’, Transactions of the Philological Society (): –.
Comrie, B. (a). Aspect. Cambridge: Cambridge University Press.
Comrie, B. (b). ‘The Syntax of Causative Constructions: Cross-Language Similarities and Divergences’, in M. Shibatani (ed.), The Grammar of Causative Constructions. New York: Academic Press, –.
Comrie, B. (). ‘Ergativity’, in Lehmann (a: –).
Comrie, B. (). Language Universals and Linguistic Typology: Syntax and Morphology. nd edn. Oxford: Blackwell.
Comrie, B. (). ‘Syntactic Typology’, in R. Mairal and J. Gil (eds.), Linguistic Universals. Cambridge: Cambridge University Press, –.
Comrie, B. (). ‘Alignment of Case Marking of Full Noun Phrases’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Comrie, B., Dryer, M. S., Gil, D., and Haspelmath, M. (). ‘Introduction’, in Haspelmath et al. (: –).
Comrie, B. and Horie, K. (). ‘Complement Clauses versus Relative Clauses: Some Khmer Evidence’, in W. Abraham, T. Givón, and S. A. Thompson (eds.), Discourse Grammar and Typology. Amsterdam: John Benjamins, –.
Comrie, B. and Kuteva, T. (a). ‘Relativization on Subjects’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Comrie, B. and Kuteva, T. (b). ‘Relativization on Obliques’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Comrie, B. and Kuteva, T. (c). ‘Relativization Strategies’, in Dryer and Haspelmath (: http://wals.info/chapter/s).
Cooreman, A. (). ‘A Functional Typology of Antipassives’, in B. Fox and P. J. Hopper (eds.), Voice: Form and Function. Amsterdam: John Benjamins, –.




Corbett, G. G. (). Number. Cambridge: Cambridge University Press.
Corbett, G. G. (). Agreement. Cambridge: Cambridge University Press.
Cristofaro, S. (). ‘Grammatical Categories and Relations: Universality vs. Language-Specificity and Construction-Specificity’, Language and Linguistics Compass .: –.
Croft, W. (). Typology and Universals. Cambridge: Cambridge University Press.
Croft, W. (). ‘Modern Syntactic Typology’, in Shibatani and Bynon (a: –).
Croft, W. (). Radical Construction Grammar: Syntactic Theory in Typological Perspective. Oxford: Oxford University Press.
Croft, W. (). Typology and Universals. nd edn. Cambridge: Cambridge University Press.
Croft, W., Denning, K., and Kemmer, S. (). Studies in Typology and Diachrony: Papers Presented to Joseph H. Greenberg on His th Birthday. Amsterdam: John Benjamins.
Croft, W. and Poole, K. T. (). ‘Inferring Universals from Grammatical Variation: Multidimensional Scaling for Typological Analysis’, Theoretical Linguistics : –.
Cysouw, M. (). ‘Review of M. Haspelmath, Indefinite Pronouns’, Journal of Linguistics : –.
Cysouw, M. (). The Paradigmatic Structure of Person Marking. Oxford: Oxford University Press.
Cysouw, M. (). ‘Building Semantic Maps: The Case of Person Marking’, in M. Miestamo and B. Wälchli (eds.), New Challenges in Typology: Broadening the Horizons and Redefining the Foundations. Berlin: Mouton de Gruyter, –.
Cysouw, M. (). ‘Semantic Maps as Metrics on Meaning’, Linguistic Discovery : –.
Cysouw, M. (a). ‘Inclusive/Exclusive Distinction in Independent Pronouns’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Cysouw, M. (b). ‘Inclusive/Exclusive Distinction in Verbal Inflection’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Cysouw, M. and Wälchli, B. (). ‘Parallel Texts: Using Translational Equivalents in Linguistic Typology’, Sprachtypologie und Universalienforschung : –.
Dahl, Ö. (). Tense and Aspect Systems. Oxford: Basil Blackwell.
Dahl, Ö. (). ‘From Questionnaires to Parallel Corpora in Typology’, Sprachtypologie und Universalienforschung : –.
De Haan, F. (). ‘Evidentiality and Epistemic Modality: Setting Boundaries’, Southwest Journal of Linguistics : –.




DeLancey, S. (). ‘An Interpretation of Split Ergativity and Related Patterns’, Language : –.
DeLancey, S. (). ‘Mirativity: The Grammatical Marking of Unexpected Information’, Linguistic Typology : –.
DeLancey, S. (). ‘Still Mirative After All These Years’, Linguistic Typology : –.
Derbyshire, D. C. (). ‘Word Order Universals and the Existence of OVS Languages’, Linguistic Inquiry : –.
Derbyshire, D. C. and Pullum, G. K. (). ‘Object-initial Languages’, International Journal of American Linguistics : –.
Derbyshire, D. C. and Pullum, G. K. (). Handbook of Amazonian Languages. Berlin: Mouton de Gruyter.
Dixon, R. M. W. (). The Dyirbal Language of North Queensland. Cambridge: Cambridge University Press.
Dixon, R. M. W. (ed.) (). Grammatical Categories in Australian Languages. Atlantic Highlands, NJ: Humanities Press.
Dixon, R. M. W. (). ‘Ergativity’, Language : –.
Dixon, R. M. W. (). Ergativity. Cambridge: Cambridge University Press.
Dixon, R. M. W. (). The Rise and Fall of Languages. Cambridge: Cambridge University Press.
Dixon, R. M. W. (). ‘A Typology of Causatives: Form, Syntax and Meaning’, in Dixon and Aikhenvald (: –).
Dixon, R. M. W. and Aikhenvald, A. Y. (eds.) (). Changing Valency: Case Studies in Transitivity. Cambridge: Cambridge University Press.
Donohue, M. and Wichmann, S. (eds.) (). The Typology of Semantic Alignment. Oxford: Oxford University Press.
Downing, P. A. (). Numeral Classifier Systems: The Case of Japanese. Amsterdam: John Benjamins.
Dressler, W. (). ‘Eine textsyntaktische Regel der idg. Wortstellung’, Zeitschrift für Vergleichende Sprachforschung : –.
Dressler, W. (). ‘Notes on Textual Typology’, Wiener Linguistische Gazette : –.
Driver, H. E. and Chaney, R. P. (). ‘Cross-cultural Sampling and Galton’s Problem’, in R. Naroll and R. Cohen (eds.), A Handbook of Method in Cultural Anthropology. Garden City, NY: Natural History Press, –.
Dryer, M. S. (). ‘Primary Objects, Secondary Objects, and Antidative’, Language : –.
Dryer, M. S. (). ‘Object–Verb Order and Adjective–Noun Order: Dispelling a Myth’, Lingua : –.
Dryer, M. S. (). ‘Large Linguistic Areas and Language Sampling’, Studies in Language : –.
Dryer, M. S. (). ‘SVO Languages and the OV: VO Typology’, Journal of Linguistics : –.
Dryer, M. S. (). ‘The Greenbergian Word Order Correlations’, Language : –.
Dryer, M. S. (). ‘Frequency and Pragmatically Unmarked Word Order’, in P. Downing and M. Noonan (eds.), Word Order in Discourse. Amsterdam: John Benjamins, –.
Dryer, M. S. (). ‘Are Grammatical Relations Universal?’, in J. Bybee, J. Haiman, and S. A. Thompson (eds.), Essays on Language Function and Language Type Dedicated to T. Givón. Amsterdam: John Benjamins, –.
Dryer, M. S. (). ‘Why Statistical Universals are Better than Absolute Universals’, Chicago Linguistic Society : –.
Dryer, M. S. (). ‘Counting Genera vs. Counting Languages’, Linguistic Typology : –.
Dryer, M. S. (). ‘Word Order in Sino-Tibetan Languages from a Typological and Geographical Perspective’, in G. Thurgood and R. LaPolla (eds.), Sino-Tibetan Languages. London: Routledge, –.
Dryer, M. S. (). ‘Word Order’, in T. Shopen (ed.), Language Typology and Syntactic Description, vol. : Clause Structure. nd edn. Cambridge: Cambridge University Press, –.
Dryer, M. S. (). ‘Problems Testing Typological Correlations with the Online WALS’, Linguistic Typology : –.
Dryer, M. S. (a). ‘Order of Subject, Object and Verb’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Dryer, M. S. (b). ‘Order of Object and Verb’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Dryer, M. S. (c). ‘Order of Adposition and Noun Phrase’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Dryer, M. S. (d). ‘Relationship between the Order of Object and Verb and the Order of Adposition and Noun Phrase’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Dryer, M. S. (e). ‘Relationship between the Order of Object and Verb and the Order of Relative Clause and Noun’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Dryer, M. S. (f). ‘Determining Dominant Word Order’, in Dryer and Haspelmath (: http://wals.info/chapter/s).
Dryer, M. S. and Haspelmath, M. (eds.) (). The World Atlas of Language Structures Online. Leipzig: Max Planck Institute for Evolutionary Anthropology. http://wals.info.
Du Feu, V. (). Rapanui. London: Routledge.
Evans, N. (). Dying Words: Endangered Languages and What They Have to Tell Us. Malden, MA: Wiley-Blackwell.
Evans, N. and Levinson, S. C. (). ‘The Myth of Language Universals: Language Diversity and Its Importance for Cognitive Science’, Behavioral and Brain Sciences : –.
Everaert, M., Musgrave, S., and Dimitriadis, A. (eds.) (). The Use of Databases in Cross-Linguistic Studies. Berlin: Mouton de Gruyter.
Ferguson, C. A. (). ‘Assumptions about Nasals: A Sample Study in Phonological Universals’, in Greenberg (a: –).
Ferguson, C. A. and Chowdhury, M. (). ‘The Phonemes of Bengali’, Language : –.
Finck, F. N. (). Der deutsche Sprachbau als Ausdruck deutscher Weltanschauung. Acht Vorträge. Marburg: Elwert.
Fox, A. (). Linguistic Reconstruction: An Introduction to Theory and Method. Oxford: Oxford University Press.
Gabelentz, G. von der (). Die Sprachwissenschaft: Ihre Aufgaben, Methoden und bisherigen Ergebnisse. nd edn. Leipzig: Tauchnitz.
Gary, J. O. and Keenan, E. L. (). ‘On Collapsing Grammatical Relations in Universal Grammar’, in P. Cole and J. Sadock (eds.), Syntax and Semantics : Grammatical Relations. New York: Academic Press, –.
Gazdar, G., Pullum, G. K., and Sag, I. A. (). ‘Auxiliaries and Related Phenomena in a Restrictive Theory of Grammar’, Language : –.
Geeraerts, D. (). ‘Cognitive Restrictions on the Structure of Semantic Change’, in J. Fisiak (ed.), Historical Semantics. Berlin: Mouton de Gruyter, –.
Gerdts, D. B. (). ‘Incorporation’, in A. Spencer and A. Zwicky (eds.), The Handbook of Morphology. Oxford: Basil Blackwell, –.
Givón, T. (). On Understanding Grammar. New York: Academic Press.
Givón, T. (ed.) (). Topic Continuity in Discourse. Amsterdam: John Benjamins.
Goedemans, R. and van der Hulst, H. (). ‘Fixed Stress Locations’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Göksel, A. and Kerslake, C. (). Turkish: A Comprehensive Grammar. London: Routledge.
Gordon, M. K. (). ‘A Perceptually-Driven Account of Onset-Sensitive Stress’, Natural Language and Linguistic Theory : –.
Gordon, M. K. (). Phonological Typology. Oxford: Oxford University Press.
Graffi, G. (). ‘The Pioneers of Linguistic Typology: From Gabelentz to Greenberg’, in Song (: –).
Greenberg, J. H. (ed.) (a). Universals of Language. Cambridge, MA: MIT Press.
Greenberg, J. H. (b). ‘Some Universals of Grammar with Particular Reference to the Order of Meaningful Elements’, in Greenberg (a: –).
Greenberg, J. H. ( []). Language Universals, with Special Reference to Feature Hierarchies. The Hague: Mouton.
Greenberg, J. H. (). Language Typology: A Historical and Analytic Overview. The Hague: Mouton.
Greenberg, J. H. (a). ‘Typology and Cross-linguistic Generalizations’, in Greenberg et al. (: –).
Greenberg, J. H. (b). ‘Some Generalizations Concerning Initial and Final Consonant Clusters’, in J. H. Greenberg (ed.), Universals of Human Language, vol. : Phonology. Stanford, CA: Stanford University Press, –.
Greenberg, J. H. (). ‘On Being a Linguistic Anthropologist’, Annual Review of Anthropology : –.
Greenberg, J. H. (). Language in the Americas. Stanford, CA: Stanford University Press.
Greenberg, J. H., Ferguson, C. A., and Moravcsik, E. A. (eds.) (). Universals of Human Language, vol. : Method and Theory. Stanford, CA: Stanford University Press.
Grenoble, L. A. and Whaley, L. J. (). Saving Languages: An Introduction to Language Revitalization. Cambridge: Cambridge University Press.
Haiman, J. (). ‘Iconic and Economic Motivation’, Language : –.
Haiman, J. (). Natural Syntax: Iconicity and Erosion. Cambridge: Cambridge University Press.
Hajek, J. (). ‘Vowel Nasalization’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Hale, K. (). ‘The Adjoined Relative Clause in Australia’, in Dixon (: –).
Handschuh, C. (). A Typology of Marked-S Languages. Berlin: Language Science Press.
Harris, A. C. (). ‘Review of R. M. W. Dixon, Ergativity’, Language : –.
Hartmann, I., Haspelmath, M., and Taylor, B. (eds.) (). Valency Patterns Leipzig. Leipzig: Max Planck Institute for Evolutionary Anthropology. http://valpal.info.
Hashimoto, M. J. (). ‘The Altaicization of Northern Chinese’, in J. McCoy and T. Light (eds.), Contributions to Sino-Tibetan Studies. Leiden: E. J. Brill, –.
Haspelmath, M. (). ‘More on the Typology of Inchoative/Causative Verb Alternations’, in B. Comrie and M. Polinsky (eds.), Causatives and Transitivity. Amsterdam: John Benjamins, –.
Haspelmath, M. (). Indefinite Pronouns. Oxford: Oxford University Press.
Haspelmath, M. (). ‘The Geometry of Grammatical Meaning: Semantic Maps and Cross-linguistic Comparison’, in M. Tomasello (ed.), The New Psychology of Language: Cognitive and Functional Approaches to Language Structure. Mahwah, NJ: Erlbaum, –.
Haspelmath, M. (). ‘Argument Marking in Ditransitive Alignment Types’, Linguistic Discovery : –.
Haspelmath, M. (). ‘Against Markedness (and What to Replace It With)’, Journal of Linguistics : –.
Haspelmath, M. (). ‘Pre-established Categories Don’t Exist: Consequences for Language Description and Typology’, Linguistic Typology : –.
Haspelmath, M. (). ‘Frequency vs. Iconicity in Explaining Grammatical Asymmetries’, Cognitive Linguistics : –.
Haspelmath, M. (). ‘Comparative Concepts and Descriptive Categories in Crosslinguistic Studies’, Language : –.
Haspelmath, M. (). ‘On S, A, P, T, and R as Comparative Concepts for Alignment Typology’, Linguistic Typology : –.
Haspelmath, M. (). ‘Ditransitive Constructions: The Verb “Give”’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Haspelmath, M. (a). ‘Ditransitive Constructions’, Annual Review of Linguistics : –.
Haspelmath, M. (b). ‘Transitivity Prominence’, in Malchukov and Comrie (: –).
Haspelmath, M., Calude, A., Spagnol, M., Narrog, H., and Bamyaci, E. (). ‘Coding Causal–Noncausal Verb Alternations: A Form–Frequency Correspondence Explanation’, Journal of Linguistics : –.
Haspelmath, M., Dryer, M. S., Gil, D., and Comrie, B. (eds.) (). The World Atlas of Language Structures. Oxford: Oxford University Press.
Hawkins, J. A. (). ‘On Implicational and Distributional Universals of Word Order’, Journal of Linguistics : –.
Hawkins, J. A. (). Word Order Universals. New York: Academic Press.
Hawkins, J. A. (). ‘Seeking Motives for Change in Typological Variation’, in Croft et al. (: –).
Hawkins, J. A. (). A Performance Theory of Order and Constituency. Cambridge: Cambridge University Press.
Hawkins, J. A. (). ‘Symmetries and Asymmetries: Their Grammar, Typology and Parsing’, Theoretical Linguistics : –.
Hawkins, J. A. (). Efficiency and Complexity in Grammars. Oxford: Oxford University Press.
Hawkins, J. A. (). ‘An Asymmetry between VO and OV Languages: The Ordering of Obliques’, in G. G. Corbett and M. Noonan (eds.), Case and Grammatical Relations: Studies in Honor of Bernard Comrie. Amsterdam: John Benjamins, –.
Hawkins, J. A. (). Cross-Linguistic Variation and Efficiency. Oxford: Oxford University Press.
Hill, D. (). ‘“Mirativity” Does Not Exist: ḥdug in “Lhasa” Tibetan and Other Suspects’, Linguistic Typology : –.
Himmelmann, N. P. (). ‘Towards a Typology of Typologies’, Sprachtypologie und Universalienforschung : –.
Hirst, D. and Di Cristo, A. (). Intonation Systems: A Survey of Twenty Languages. Cambridge: Cambridge University Press.
Holman, E., Schulze, C., Stauffer, D., and Wichmann, S. (). ‘On the Relation between Structural Diversity and Geographical Distance among Languages: Observations and Computer Simulations’, Linguistic Typology : –.
Holmer, A. [n.d.]. ‘Relativization in Formosan Languages: A Greenbergian Nightmare’. Available at: http://conference.sol.lu.se/uploads/media/.Holmer_.pdf.
Hopper, P. J. and Thompson, S. A. (). ‘Transitivity in Grammar and Discourse’, Language : –.
Hopper, P. J. and Thompson, S. A. (eds.) (). Studies in Transitivity: Syntax and Semantics. New York: Academic Press.
Horn, W. (). Sprachkörper und Sprachfunktion. Berlin: Mayer & Müller.
Hunter, P. J. and Prideaux, G. D. (). ‘Empirical Constraints on the Verb-Particle Construction in English’, Journal of the Atlantic Provinces Linguistic Association : –.
Hyman, L. (). A Theory of Phonological Weight. Dordrecht: Foris Publications.
Hyman, L. (). ‘Where is Phonology in Typology?’, Linguistic Typology : –.
Hyman, L. (). ‘Universals in Phonology’, The Linguistic Review : –.
Hyman, L. (). ‘Are There Really No Syllables in Gokana? Or: What’s so Great about Being Universal?’, Phonology : –.
Jakobson, R. ( []). ‘Shifters, Verbal Categories, and the Russian Verb’, in Roman Jakobson: Selected Writings, vol. . The Hague: Mouton, –.
Jakobson, R. (). ‘Typological Studies and Their Contribution to Historical Comparative Linguistics’, in E. Sivertsen, C. H. Borgstrøm, A. Gallis, and A. Sommerfelt (eds.), Actes du huitième congrès international des linguistes / Proceedings of the Eighth International Congress of Linguists. Oslo: Oslo University Press, –.
Jakobson, R. ( []). Child Language, Aphasia and Phonological Universals. The Hague: Mouton.
Jespersen, O. (). Phonetische Grundfragen. Leipzig: Teubner.
Jun, S.-A. (ed.) (). Prosodic Typology: The Phonology of Intonation and Phrasing. Oxford: Oxford University Press.
Kawasaki, H. (). ‘An Acoustical Basis for Universal Constraints on Sound Sequences’, Ph.D. diss., University of California, Berkeley.
Kay, P. and Maffi, L. (). ‘Number of Basic Colour Categories’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Kayne, R. (). The Antisymmetry of Syntax. Cambridge, MA: MIT Press.
Keenan, E. L. (). ‘Towards a Universal Definition of “Subject”’, in C. N. Li (ed.), Subject and Topic. New York: Academic Press, –.
Keenan, E. L. (). ‘Language Variation and the Logical Structure of Universal Grammar’, in H. Seiler (ed.), Language Universals. Tübingen: Gunter Narr, –.
Keenan, E. L. (). ‘On Surface Form and Logical Form’, Studies in Linguistic Sciences : –.
Keenan, E. L. (). ‘Semantic Correlates of the Ergative/Absolutive Distinction’, Linguistics : –.
Keenan, E. L. and Comrie, B. (). ‘Noun Phrase Accessibility and Universal Grammar’, Linguistic Inquiry : –.
Keenan, E. L. and Comrie, B. (). ‘Data on the Noun Phrase Accessibility Hierarchy’, Language : –.
Keenan, E. L. and Dryer, M. S. (). ‘Passive in the World’s Languages’, in T. Shopen (ed.), Language Typology and Syntactic Description, vol. . nd edn. Cambridge: Cambridge University Press, –.
Kemmer, S. (). The Middle Voice. Amsterdam: John Benjamins.
Kittilä, S. (). ‘Transitivity Typology’, in Song (: –).
Labov, W. (). ‘The Boundaries of Words and Their Meanings’, in C.-J. Bailey and R. W. Shuy (eds.), New Ways of Analyzing Variation in English. Washington, DC: Georgetown University Press, –.
Lass, R. (). On Explaining Language Change. Cambridge: Cambridge University Press.
Lass, R. (). Historical Linguistics and Language Change. Cambridge: Cambridge University Press.
Lehmann, W. P. (). ‘A Structural Principle of Language and Its Implications’, Language : –.
Lehmann, W. P. (ed.) (a). Syntactic Typology: Studies in the Phenomenology of Language. Austin: University of Texas Press.
Lehmann, W. P. (b). ‘The Great Underlying Ground-Plans’, in Lehmann (a: –).
Lehmann, W. P. (c). ‘Conclusion: Toward an Understanding of the Profound Unity Underlying Languages’, in Lehmann (a: –).
Lewis, M. P. (). Ethnologue: Languages of the World. th edn. Dallas: SIL International, at www.ethnologue.com/ethno_docs/distribution.asp?by=size.
Lewis, M. P., Simons, G. F., and Fennig, C. D. (eds.) (). Ethnologue: Languages of the World. th edn. Dallas: SIL International.
Lewy, E. (). ‘Der Bau der europäischen Sprachen’, in Proceedings of the Royal Irish Academy, vol. , section C, no. . Dublin: Hodges, –.
Lichtenberk, F. (). A Grammar of Toqabaqita. Berlin: Mouton de Gruyter.
Lightfoot, D. W. (). The Language Lottery: Toward a Biology of Grammars. Cambridge, MA: MIT Press.
Lindblom, B. (). ‘Economy of Speech Gestures’, in P. F. MacNeilage (ed.), The Production of Speech. New York: Springer, –.
Lindblom, B. (). ‘Developmental Origins of Adult Phonology: The Interplay between Phonetic Emergents and Evolutionary Adaptations’, Phonetica : –.
Lindblom, B. and Maddieson, I. (). ‘Phonetic Universals in Consonant Systems’, in L. M. Hyman and C. N. Li (eds.), Language, Speech and Mind. London: Routledge, –.
Longacre, R. E. (). ‘Discourse Typology in Relation to Language Typology’, in S. Allén (ed.), Text Processing, Text Analysis and Generation, Text Typology and Attribution: Proceedings of Nobel Symposium . Stockholm: Almqvist & Wiksell International, –.
Lunsford, W. A. (). ‘An Overview of Linguistic Structures in Torwali, a Language of Northern Pakistan’, M.A. thesis, University of Texas at Arlington.
McCarthy, J. J. (). A Thematic Guide to Optimality Theory. Cambridge: Cambridge University Press.
Maddieson, I. (). Patterns of Sounds. Cambridge: Cambridge University Press.
Maddieson, I. (). ‘Phonetic Universals’, in W. Hardcastle and J. Laver (eds.), The Handbook of Phonetic Sciences. Oxford: Blackwell, –.
Maddieson, I. (). ‘Correlating Phonological Complexity: Data and Validation’, Linguistic Typology : –.
Maddieson, I. (). ‘Issues of Phonological Complexity: Statistical Analysis of the Relationship between Syllable Structures, Segment Inventories, and Tone Contrasts’, in M.-J. Solé, P. S. Beddor, and M. Ohala (eds.), Experimental Approaches to Phonology. Oxford: Oxford University Press, –.
Maddieson, I. (). ‘Typology of Phonological Systems’, in Song (: –).
Maddieson, I. (a). ‘Consonant Inventories’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Maddieson, I. (b). ‘Vowel Quality Inventories’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Maddieson, I. (c). ‘Consonant-Vowel Ratio’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Maddieson, I. (d). ‘Absence of Common Consonants’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Maddieson, I. (e). ‘Presence of Uncommon Consonants’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Maddieson, I. (f). ‘Syllable Structure’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Maddieson, I. (g). ‘Tone’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Malchukov, A. (). ‘Valency Classes and Alternations: Parameters of Variation’, in Malchukov and Comrie (: –).
Malchukov, A. and Comrie, B. (eds.) (). Valency Classes in the World’s Languages, vol. : A Comparative Handbook. Berlin: Mouton de Gruyter.
Malchukov, A., Haspelmath, M., and Comrie, B. (). ‘Ditransitive Constructions: A Typological Overview’, in A. Malchukov, M. Haspelmath, and B. Comrie (eds.), Studies in Ditransitive Constructions: A Comparative Handbook. Berlin: Mouton de Gruyter, –.
Mallinson, G. and Blake, B. J. (). Language Typology: Cross-linguistic Studies in Syntax. Amsterdam: North-Holland.
Manning, A. D. and Parker, F. (). ‘The SOV > . . . > OSV Frequency Hierarchy’, Language Sciences : –.
Masica, C. (). Defining a Linguistic Area: South Asia. Chicago: University of Chicago Press.
Maslova, E. (). ‘A Dynamic Approach to the Verification of Distributional Universals’, Linguistic Typology : –.
Massam, D. (). ‘Noun Incorporation: Essentials and Extensions’, Language and Linguistics Compass : –.
Matsumoto, Y. (). Noun-Modifying Constructions in Japanese: A Frame-Semantic Approach. Amsterdam: John Benjamins.
Meillet, A. (). ‘Introduction’, in A. Meillet and M. Cohen (eds.), Les langues du monde. Paris: Champion, –.
Michaelis, S. M., Maurer, P., Haspelmath, M., and Huber, M. (eds.) (). The Atlas and Survey of Pidgin and Creole Languages. Oxford: Oxford University Press.
Miner, K. L. (). ‘Noun Stripping and Loose Noun Incorporation’, International Journal of American Linguistics : –.
Miner, K. L. (). ‘A Note on Noun Stripping’, International Journal of American Linguistics : –.
Mithun, M. (). ‘The Evolution of Noun Incorporation’, Language : –.
Mithun, M. (). ‘On the Nature of Noun Incorporation’, Language : –.
Mithun, M. (). ‘Active/Agentive Case Marking and Its Motivations’, Language : –.
Mithun, M. (). ‘Is Basic Word Order Universal?’, in D. L. Payne (ed.), Pragmatics of Word Order Flexibility. Amsterdam: John Benjamins, –.
Moravcsik, E. A. (). ‘Agreement’, in J. H. Greenberg, C. A. Ferguson, and E. A. Moravcsik (eds.), Universals of Human Language, vol. . Stanford, CA: Stanford University Press, –.
Moravcsik, E. A. (). ‘Review of M. Shibatani and T. Bynon (eds.), Approaches to Language Typology’, Linguistic Typology : –.
Moravcsik, E. A. (). ‘Explaining Language Universals’, in Song (: –).
Moravcsik, E. A. (). Introducing Language Typology. Cambridge: Cambridge University Press.
Murdock, G. P. (). Ethnographic Atlas. Pittsburgh: University of Pittsburgh Press.
Næss, A. (). Prototypical Transitivity. Amsterdam: John Benjamins.
Nettle, D. and Romaine, S. (). Vanishing Voices: The Extinction of the World’s Languages. Oxford: Oxford University Press.
Newmeyer, F. J. (). ‘Linguistic Typology Requires Crosslinguistic Formal Categories’, Linguistic Typology : –.
Newmeyer, F. J. (). ‘On Comparative Concepts and Descriptive Categories: A Reply to Haspelmath’, Language : –.
Nichols, J. (). ‘Head-marking and Dependent-marking Grammar’, Language : –.
Nichols, J. (). Linguistic Diversity in Space and Time. Chicago: University of Chicago Press.
Nichols, J. (). ‘Diversity and Stability in Language’, in R. D. Janda and B. D. Joseph (eds.), The Handbook of Historical Linguistics. Oxford: Blackwell, –.
Nichols, J. and Bickel, B. (a). ‘Locus of Marking in the Clause’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Nichols, J. and Bickel, B. (b). ‘Locus of Marking in Possessive Noun Phrases’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Nichols, J. and Bickel, B. (c). ‘Locus of Marking: Whole-language Typology’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Ohala, J. (). ‘Alternatives to the Sonority Hierarchy for Explaining Segmental Sequential Constraints’, in Papers from the Parasession on the Syllable. Chicago: Chicago Linguistic Society, –.
Pagel, M. (). ‘The History, Rate and Pattern of World Linguistic Evolution’, in C. Knight, M. Studdert-Kennedy, and J. R. Hurford (eds.), The Evolutionary Emergence of Language: Social Function and the Origins of Linguistic Form. Cambridge: Cambridge University Press, –.
Palmer, F. R. (). Grammatical Roles and Relations. Cambridge: Cambridge University Press.
Parker, S. (ed.) (a). Sonority Controversy. Berlin: Mouton de Gruyter.
Parker, S. (b). ‘Sonority Distance vs. Sonority Dispersion: A Typological Survey’, in Parker (a: –).
Payne, D. L. (). ‘Review of J. A. Hawkins, Word Order Universals’, Language : –.
Perkins, R. D. (). ‘The Evolution of Culture and Grammar’, Ph.D. diss., SUNY, Buffalo.
Perkins, R. D. (). ‘The Covariation of Culture and Grammar’, in M. Hammond, E. A. Moravcsik, and J. R. Wirth (eds.), Studies in Syntactic Typology. Amsterdam: John Benjamins, –.
Perkins, R. D. (). ‘Statistical Techniques for Determining Language Sample Size’, Studies in Language : –.
Perkins, R. D. (). Deixis, Grammar, and Culture. Amsterdam: John Benjamins.
Perlmutter, D. M. and Postal, P. M. (). ‘Some Proposed Laws of Basic Clause Structure’, in D. M. Perlmutter (ed.), Studies in Relational Grammar . Chicago: University of Chicago Press, –.
Peterson, D. A. (). Applicative Constructions. Oxford: Oxford University Press.
Pike, K. (). Tagmemic and Matrix Linguistics Applied to Selected African Languages. Norman: SIL/University of Oklahoma.
Plank, F. (ed.) (). Ergativity: Towards a Theory of Grammatical Relations. London: Academic Press.
Plank, F. (). ‘Hypology, Typology: The Gabelentz Puzzle’, Folia Linguistica : –.
Plungian, V. (). ‘Types of Verbal Evidentiality Marking: An Overview’, in G. Diewald and E. Smirnova (eds.), Linguistic Realization of Evidentiality in European Languages. Berlin: Mouton de Gruyter, –.
Polinskaya, M. S. (). ‘Object Initiality: OSV’, Linguistics : –.
Polinsky, M. S. (a). ‘Antipassive Constructions’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Polinsky, M. S. (b). ‘Applicative Constructions’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Polyakov, V. N., Solovyev, V. D., Wichmann, S., and Belyaev, O. (). ‘Using WALS and Jazyki mira’, Linguistic Typology : –.
Pullum, G. K. (). ‘Desmond Derbyshire (–)’, in M. R. Wise, R. A. Dooley, and I. Murphy (eds.), Fifty Years in Brazil: A Sampler of SIL Work –. Dallas: SIL International, –.
Pullum, G. K. and Wilson, D. (). ‘Autonomous Syntax and the Analysis of Auxiliaries’, Language : –.
Quakenbush, J. S. (). ‘Word Order and Discourse Type: An Austronesian Example’, in D. L. Payne (ed.), Pragmatics of Word Order Flexibility. Amsterdam: John Benjamins, –.
Ramat, P. (). ‘Is a Holistic Typology Possible?’, Folia Linguistica : –.
Ramat, P. (). ‘Typological Comparison: Towards a Historical Perspective’, in Shibatani and Bynon (a: –).
Ramat, P. (). ‘The (Early) History of Linguistic Typology’, in Song (: –).
Rasinger, S. M. (). Quantitative Research in Linguistics: An Introduction. nd edn. London: Bloomsbury.
Rijkhoff, J. and Bakker, D. (). ‘Language Sampling’, Linguistic Typology : –.
Rijkhoff, J., Bakker, D., Hengeveld, K., and Kahrel, P. (). ‘A Method of Language Sampling’, Studies in Language : –.
Rooryck, J. (a). ‘Evidentiality, Part I’, GLOT International : –.
Rooryck, J. (b). ‘Evidentiality, Part II’, GLOT International : –.
Rosch, E. (a). ‘On the Internal Structure of Perceptual and Semantic Categories’, in T. E. Moore (ed.), Cognitive Development and the Acquisition of Language. New York: Academic Press, –.
Rosch, E. (b). ‘Natural Categories’, Cognitive Psychology : –.
Rosch, E. (a). ‘Cognitive Reference Points’, Cognitive Psychology : –.
Rosch, E. (b). ‘Cognitive Representations of Semantic Categories’, Journal of Experimental Psychology: General : –.
Ruhlen, M. (). A Guide to the Languages of the World. Stanford, CA: Language Universals Project.
Ruhlen, M. (). A Guide to the World’s Languages: Volume , Classification. Stanford, CA: Stanford University Press.
Sampson, G. (). Schools of Linguistics. Stanford, CA: Stanford University Press.
Sanders, G. A. (). A Functional Typology of Elliptical Coordinations. Bloomington: Indiana University Linguistics Club.
Sapir, E. (). Language: An Introduction to the Study of Speech. New York: Harcourt, Brace and World.
Saussure, F. de (). Cours de linguistique générale. nd edn. Paris: Payot.
Saville-Troike, M. (). The Ethnography of Communication. rd edn. Oxford: Blackwell.
Schachter, P. (). ‘Explaining Auxiliary Order’, in F. Heny and B. Richards (eds.), Linguistic Categories: Auxiliaries and Related Puzzles, II: The Scope, Order, and Distribution of English Auxiliary Verbs. Dordrecht: D. Reidel, –.
Schwartz, J.-L., Boë, L.-J., Vallée, N., and Abry, C. (). ‘Major Trends in Vowel System Inventories’, Journal of Phonetics : –.
Shibatani, M. and Bynon, T. (eds.) (a). Approaches to Language Typology. Oxford: Clarendon Press.
Shibatani, M. and Bynon, T. (b). ‘Approaches to Language Typology’, in Shibatani and Bynon (a: –).
Sievers, E. (). Grundzüge der Phonetik. Leipzig: Breitkopf & Härtel.
Siewierska, A. (). The Passive: A Comparative Linguistic Analysis. London: Croom Helm.
Siewierska, A. (). Word Order Rules. London: Croom Helm.
Siewierska, A. (). ‘Word Order Type and Alignment Type’, Sprachtypologie und Universalienforschung : –.
Siewierska, A. (). ‘Person Agreement and the Determination of Alignment’, Transactions of the Philological Society : –.
Siewierska, A. (). Person. Cambridge: Cambridge University Press.
Siewierska, A. (). ‘Passive Constructions’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Siewierska, A. and Bakker, D. (). ‘The Distribution of Subject and Object Agreement and Word Order Type’, Studies in Language : –.
Silverstein, M. (). ‘Hierarchy of Features and Ergativity’, in Dixon (: –).
Song, J. J. (). ‘On Tomlin, and Manning and Parker on Basic Word Order’, Language Sciences : –.
Song, J. J. (). Causatives and Causation: A Universal-Typological Perspective. London: Addison Wesley Longman.
Song, J. J. (). Linguistic Typology: Morphology and Syntax. Harlow: Pearson (Longman).
Song, J. J. (ed.) (). The Oxford Handbook of Linguistic Typology. Oxford: Oxford University Press.
Song, J. J. (). Word Order. Cambridge: Cambridge University Press.
Song, J. J. (a). ‘Periphrastic Causative Constructions’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Song, J. J. (b). ‘Nonperiphrastic Causative Constructions’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Speas, M. (a). ‘Evidentiality, Logophoricity and the Syntactic Representation of Pragmatic Features’, Lingua : –.
Speas, M. (b). ‘Evidential Paradigms, World Variables and Person Agreement Features’, Rivista di Linguistica : –.
Speas, P. (). ‘On the Syntax and Semantics of Evidentials’, Language and Linguistics Compass : –.
Sposato, A. (). ‘Word Order in Miao-Yao’, Linguistic Typology : –.
Stassen, L. (). Comparison and Universal Grammar. Oxford: Blackwell.
Stassen, L. (). Intransitive Predication. Oxford: Clarendon Press.
Stassen, L. (). ‘The Problem of Cross-Linguistic Identification’, in Song (: –).
Stassen, L. (). ‘Comparative Constructions’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Stolz, T. (). ‘A New Mediterraneanism: Word Iteration in an Areal Perspective: A Pilot Study’, Mediterranean Language Review : –.
Stolz, T. (). ‘Harry Potter Meets Le Petit Prince: On the Usefulness of Parallel Corpora in Crosslinguistic Investigations’, Sprachtypologie und Universalienforschung : –.
Stolz, T., Stroh, C., and Urdze, A. (). ‘Comitatives and Instrumentals’, in Dryer and Haspelmath (: http://wals.info/chapter/).
Sung, L.-M. [n.d.]. ‘Clausal Nominalization in Budai Rukai’. Available at: http://www.engl.polyu.edu.hk/research/nomz/files/SUN.Budai%Rukai.pdf.
Taylor, J. R. (). Cognitive Grammar. nd edn. Oxford: Oxford University Press.
Thomason, S. G. (). Language Contact. Edinburgh: Edinburgh University Press.
Thomason, S. G. and Kaufman, T. (). Language Contact, Creolization, and Genetic Linguistics. Berkeley: University of California Press.
Tiersma, P. (). ‘Local and General Markedness’, Language : –.
Tomić, O. M. (). Balkan Sprachbund Morpho-Syntactic Features. Dordrecht: Springer.
Tomlin, R. (). Basic Word Order: Functional Principles. London: Croom Helm.
Trubetzkoy [Trubeckoj], N. S. (). Principles of Phonology, trans. C. A. M. Baltaxe. Berkeley: University of California Press.
Trudgill, P. (). Sociolinguistic Typology: Social Determinants of Linguistic Complexity. Oxford: Oxford University Press.



OUP CORRECTED PROOF – FINAL, 20/11/2017, SPi


Tsunoda, T. (). ‘Remarks on Transitivity’, Journal of Linguistics : –. Tsunoda, T. (). ‘The Hierarchy of Two-Place Predicates: Its Limitations and Uses’, in Malchukov and Comrie (: –). Ultan, R. (). ‘Some General Characteristics of Interrogative Systems’, Working Papers on Language Universals : –. van der Auwera, J. (). ‘In Defence of Classical Semantic Maps’, Theoretical Linguistics : –. van der Auwera, J. and Malchukov, A. (). ‘A Semantic Map for Depictive Adjectivals: Secondary Predication and Adverbial Modification’, in N. P. Himmelmann and E. Schultze-Berndt (eds.), The Typology of Depictive Constructions. Oxford: Oxford University Press, –. van der Auwera, J. and Plungian, V. (). ‘Modality’s Semantic Map’, Linguistic Typology : –. Van Valin, R. D., Jr. and LaPolla, R. J. (). Syntax: Structure, Meaning and Function. Cambridge: Cambridge University Press. Vennemann, T. (). ‘Explanation in Syntax’, in J. P. Kimball (ed.), Syntax and Semantics . New York: Seminar Press, –. Vennemann, T. (). ‘Analogy in Generative Grammar: The Origin of Word Order’, in L. Heilmann (ed.), Proceedings of the Eleventh International Congress of Linguists. Bologna: Il Mulino, –. Voegelin, C. F. and Voegelin, F. M. (). Classification and Index of the World’s Languages. New York: Elsevier. Wälchli, B. (). Co-compounds and Natural Coordination. Oxford: Oxford University Press. Wälchli, B. (). ‘Typology of Light and Heavy “again”, or, the Eternal Return of the Same’, Studies in Language : –. Weinreich, U. (). Languages in Contact: Findings and Problems. The Hague: Mouton. Whaley, L. J. (). Introduction to Typology: The Unity and Diversity of Language. Thousand Oaks, CA: SAGE Publications. Wichmann, S. (). ‘Statistical Observations on Implicational (Verb) Hierarchies’, in Malchukov and Comrie (: –). Wierzbicka, A. (). 
‘Case Marking and Human Nature’, Australian Journal of Linguistics : –. Willett, T. (). ‘A Cross-Linguistic Survey of the Grammaticalization of Evidentiality’, Studies in Language : –. Winford, D. (). An Introduction to Contact Linguistics. Oxford: Blackwell. Witkowski, S. R. and Brown, C. H. (). ‘Marking-Reversals and Cultural Importance’, Language : –.







OUP CORRECTED PROOF – FINAL, 27/11/2017, SPi

Author index
Note: Q refers to questions.
Abney, S. R.  n.  Abry, C.  Aikhenvald, A. Y. , , ,  n. , , , , , , , , , ,  Anderson, L. B.  Ariel, M.  Aristotle  Bakker, D.  n. ,  n. , –, –, ,  Q. , , ,  Bakker, P. , , ,  Bamyaci, E. ,  Bell, A. , , ,  Belyaev, O.  n. ,  Q.  Bhat, D. N. S.  Bickel, B. xiii–xiv, ,  n. , , , , , , , ,  Q. , , ,  n. , , , , ,  Blake, B. J. xiv, , , , , , ,  n. , , , , ,  n. , , , , ,  Blansitt, E. L. ,  Blevins, J.  n. , , ,  Boas, F.  Boë, L.-J.  Bopp, F. ,  Breen, G.  Brown, C. H.  Q.  Brown, L.  Q.  Brugman, C.  Brugmann, K.  n.  Bybee, J. L. , , , , –,  Bynon, T. ,  Calude, A. ,  Campbell, L. ,  Q. , , ,  Cavalli-Sforza, L. L. , 

Chafe, W.  Chaney, R. P.  n.  Chomsky, N.  n. , , ,  n. , –,  Chowdhury, M.  Cinque, G. xiv,  n.  Clark, L.  Q.  Clements, G. N. ,  Collinge, N. E.  n.  Comrie, B. xiv,  n. , , , –, –, –,  n. , , , , ,  Q. ,  Q. , , , , , , , , , , , , , , , , , , , , , , , , ,  Q. , ,  n. ,  Cooreman, A.  Corbett, G. G. ,  n. ,  n. , , , – Cristofaro, S.  Croft, W. xiii, xiv, –, , , , , , , , , , , , –, ,  n. , , , ,  Cysouw, M. , , , –, , , , , , , , , , , , , ,  n. , –,  Dahl, Ö. , –, ,  de Haan, F.  DeLancey, S.  n. , , –, , , ,  Q. ,  Q.  Denning, K.  Derbyshire, D. C. ,  Di Cristo, A.  Dimitriadis, A.  Dixon, R. M. W. , ,  n. , ,  n. , , , ,  n. , , , , , , , , ,  Donohue, M. 


Downing, P. A.  Q.  Dressler, W.  Driver, H. E.  n.  Dryer, M. S. xiv,  n. , , –, , , , , , , ,  n. , , , , , –, , , , ,  n. , , –, , –,  Q. ,  Q. , –, , , , ,  n. , , –, –, , ,  n. , , , , , , , , , , , , , , , , ,  Q. ,  Q. , , , ,  n. , , , – Du Feu, V.  Q.  Evans, N. , , , , , ,  Everaert, M.  Fennig, C. D. , ,  n. , , , , , ,  Ferguson, C. A. , , – Finck, F. N. – Fox, A.  Gabelentz, G. von der –, ,  Q.  Gary, J. O.  Gazdar, G.  Geeraerts, D.  Gerdts, D. B.  Gil, D. ,  Q. ,  Q. , , , ,  Girard, G. – Givón. T. ,  Goedemans, R. , – Göksel, A.  Q.  Gordon, M. K. , ,  Graffi, G.  Greenberg, J. H. xiv, , , , , , ,  n. , , , , , –, , , , , , , , , , , , , –,  n. , , ,  Q. ,  Q. , , , , –, , , , , , , , , , , , , ,  Grenoble, L. A. ,  Q. 

Haiman, J. xiv, , ,  n. ,  Hajek, J. , , ,  n. ,  Hale, K. , ,  n.  Handschuh, C. ,  Hartmann, I.  Hashimoto, M. J. ,  Haspelmath, M. xiv, , –, ,  n. , –,  Q. ,  Q. , ,  Q. ,  Q. , , , , , –, , , , , , ,  Q. ,  Q. , ,  n. ,  n. , , , , ,  n. , , , , –, –, , , –, , ,  Hawkins, J. A. xiv, , , , ,  n. , , , , , –,  n. ,  n. , , , –, , , , ,  n. , , –, , –, , , , –, , ,  Q. , , , –,  Hengeveld, K.  n. , –, –, ,  Q. ,  Hill, D.  Q.  Himmelmann, N. P.  Hirst, D.  Holman, E.  Holmer, A.  n.  Hopper, P. J. , , , , , , , , ,  Horie, K. – Horn, W.  Huber, M.  Humboldt, W. von  Hunter, P. J.  Hyman, L. , ,  n. , , ,  n. , – Jakobson, R. –, , ,  Jesperson, O.  Jun, S.-A.  Kahrel, P.  n. , –, –, ,  Q. ,  Kaufman, T.  Q. , ,  Kawasaki, H. 




Kay, P.  Kayne, R.  Q.  Keenan, E. L. –, , –, , , , , , ,  n. , , , , , , , , ,  Q. ,  Q. ,  n. , ,  Kemmer, S. ,  Kerslake, C.  Q.  Kittilä, S.  Kuteva, T. ,  Labov, W.  LaPolla, R. J. ,  Lasnik, H.  Lass, R.  Laughren, M. –,  n.  Lehmann, W. P. –, , , –, , ,  Levinson, S. C. , , , , ,  Lewis, M. P. , ,  n. ,  n. , , , , , ,  Lewy, E.  Lichtenberk, F.  Lightfoot, D. W.  Lindblom, B. ,  Longacre, R. E.  Lunsford, W. A. 

Michaelis, S. M.  Miner, K. L.  n.  Mithun, M. , , , ,  n. , , , , ,  Moravcsik, E. A. xiii, xiv, , , , ,  Murdock, G. P. ,  n.  Musgrave, S.  Næss, A. , , , ,  Narrog, H. ,  Nettle, D. , ,  Newmeyer, F. J. ,  Nichols, J. xiv, ,  n. ,  n. , , , , , , , , , , –, , ,  n. , –,  n. , ,  Ohala, J. , , 

Macaulay, M.  McCarthy, J. J.  Maddieson, I. , , , , –, , –, –,  n. , , –, ,  Q.  Maffi, L.  Malchukov, A. ,  n. , , , , , ,  Q. , , ,  Mallinson, G. , , , , , , , , ,  n. , ,  Manning, A. D.  Masica, C.  Maslova, E. –, –, ,  Massam, D.  Matsumoto, Y.  Maurer, P.  Meillet, A. 

Pagel, M.  Pagliuca, W. ,  Palmer, F. R.  Parker, F.  Parker, S. , ,  Payne, D. L.  Pensalfini, R.  Perkins, R. D. , , –, , , , ,  Perlmutter, D. M.  Peterson, D. A. , , –,  n. , ,  Q.  Pike, K.  Plank, F. ,  Plungian, V. , ,  n. ,  n. ,  Polinskaya, M. S.  Polinsky, M. S. , , , , –, , ,  Q.  Polyakov, V. N.  n. ,  Q.  Poole, K. T. –,  Postal, P. M.  Prideaux, G. D.  Pullum, G. K. , ,  Quakenbush, J. S. 




Ramat, P. –,  Q.  Rijkhoff, J.  n. ,  n. , –, –, ,  Q. ,  Romaine, S. , ,  Rooryck, J.  Rosch, E. ,  Ruhlen, M. , ,  n. , , ,  n. , , , , , ,  Sag, I. A.  Sampson, G. , ,  Sanders, G. A.  Sapir, E.  Saussure, F. de  Saville-Troike, M.  Schachter, P.  Schlegel, A. von  Schulze, C.  Schwartz, J.-L.  Shibatani, M. ,  Sievers, E.  Siewierska, A. xiv, , , , , , –,  n. , , ,  n. , ,  n. , , , , –, , , , –, , , , , –, ,  Q. , –, ,  n. , –, –, , ,  Q.  Silverstein, M. –,  Simons, G. F. , ,  n. , , , , , ,  Smith-Stark, T. C.  Q. ,  Solovyev, V. D.  n. ,  Q.  Song, J. J. xiii, , , , , , , , , ,  Q. , , ,  Spagnol, M. ,  Speas, M.  Speas, P.  Sposato, A. ,  n.  Stassen, L. , , , , , , ,  Q. ,  Q. , ,  n. ,  Q. , ,  Q. ,  Q. ,  Q. ,  Stauffer, D.  Stolz, T. 

Stroh, C.  Sung, L.-M.  n.  Taylor, B.  Taylor, J. R. –, , ,  Thomason, S. G.  Thompson, S. A. , , , , , , , , ,  Tiersma, P.  n.  Tomić, O. M.  Q.  Tomlin, R. , , , , , , , , , –, –,  Trubetzkoy [Trubeckoj], N. S. ,  Trudgill, P. , ,  Tsunoda, T. ,  n.  Ultan, R.  Urdze, A.  Vallée, N.  van der Auwera, J. , , , ,  n. , ,  van der Hulst, H. , – Van Valin, R. D., Jr. ,  Velupillai, V. xiii Vennemann, T. –, , , , ,  Voegelin, C. F.  n.  Voegelin, F. M.  n.  Wälchli, B. ,  Weinreich, U.  Whaley, L. J. ,  Q. , ,  Wichmann, S. ,  n. ,  Q. ,  n. , ,  Wierzbicka, A.  n. ,  Willett, T.  Wilson, D.  Winford, D.  Q. ,  Witkowski, S. R.  Q.  Wittgenstein, L.  Yeon, J.  Q.  Yip, M.  Zipf, G. K. , , , , , , 




Language index
Note: Page numbers in bold refer to tables; page numbers in italic refer to figures; Q refers to questions.
Abipon  Abkhaz , , , , ,  Abun  Acehnese –,  Acoma ,  African languages , , , , , , –, , , , , , , , –, , , , , ,  n. , , , , –,  Afro-Asiatic languages  Ainu  n.  Ainu, Classical – Aleut  Algonquian languages ,  Amdo Tibetan  American Indian languages , ,  Amerind languages , ,  Amharic  Amis , , , ,  Andoke  ||Ani  Apalaí  A-Pucikwar  Arabic ,  Arabic, Classical ,  Arabic, Libyan  Aramaic  Arandic languages  Araona  Arawak  Asian languages , ,  Asmat – Athapaskan languages  Australia–New Guinea languages , , , , , , , , , , 

Australian languages  n. , , , , ,  n. ,  Austronesian languages  Q. , , ,  n. , , , , ,  Avar , , –, , ,  Awa  Babungo  Badjiri  Bai , , , ,  Bandjalang ,  Banggarla  Baniwa  Bantu languages  Banyun  Barai ,  Basque , , ,  n. , , ,  Bats ,  Bayso ,  n. , , ,  Belhare  n.  Bella Coola  n.  Bench (Gimira)  Bengali  Berik  Bété  Bezhta  n. , – Bilaan ,  Blackfoot –, – Bora  Budai Rukai  n.  Bunun  n.  Burushaski  Byansi  Cahuilla  Canela Kraho 


Cantonese see Chinese, Cantonese Catalan  Caucasian languages , , , , ,  Cayuga – Central American languages ,  Central Arrernte ,  Chadic languages , – Chaha  Chalcatongo Mixtec  Q.  Chamorro , ,  Cherokee ,  Cheyenne  n.  Chibchan  n.  Chickasaw  Chinese  Q. , –, , , , , , , ,  Cantonese ,  Hakka ,  Mandarin ,  n. , , ,  Chintang  n.  Chipewyan  Chrau ,  Chukchi , –, , , ,  Chukotko-Kamchatkan languages  Chulupi  Chuvash  Coahuilteco  Cocho  Comanche  Coos – Copainalá Zoque  Cora ,  n. , , –,  Cuzco Quechua  Czech  Dahalo  Dakota ,  Danish  Desano  Diyari ,  Djaru  Djingili  Dogon – Drehu 

Dulong  Dutch ,  Dyirbal –, , , –, , –, –, , ,  East Asian languages ,  Eastern Arrernte  Eastern Pomo , –, –, ,  Emai  n.  Enets  English –, , , ,  Q. , , , , , , –,  Q. , , , , , , , , , –,  n. ,  Q. , ,  Q. , , , –,  Q. , –,  n. , , –, , , , , –, –,  Q. ,  Q. ,  Q. , , , –, –, –, , , –, ,  Q.  British English  Epena Pedee  Estonian , , – Eurasian languages , , , , , , , , , , , , , , , , , ,  Q. , ,  European languages , , , , , , , , –, –, , , ,  Evenki  n. , – Ewe  Eyak  Fasu  Fijian , , , , – Finnish , –, , , , ,  Finno-Ugric  Foe  Fongbe – French , , , , , ,  n. ,  n. , –, ,  Gé languages  Georgian , , , –, , , , 




German , , ,  n. ,  n. , , , , , , , , ,  Germanic languages ,  Gokana  n.  Greek , , ,  Greek, Classical  Greek dialects, Asia Minor  Sílli Greek  Greenlandic Inuktitut – Guajajara ,  n.  Guaraní , – Gujarati  Q.  Gunwinggu  Gur  Hakha Lai ,  Hakka see Chinese, Hakka Halkomelem  Harar Oromo  Hausa – Hawaiian  Hayu  n.  Hebrew  Hebrew, Modern  Hindi , , , –, , , ,  Hmong-Mien see Miao-Yao languages Hua  Huichol ,  Hungarian , , , ,  Hupa  n. ,  Igbo ,  Ika  Imbabura Quechua , , ,  Indo-European , , , , , ,  Indo-Hittite languages  Indo-Iranian languages  Indo-Pacific languages  Indonesian –, , – Ingush  Iowa-Oto  Iranian languages , –

Irish  Isekiri  Italian ,  n.  Itelmen  Jakaltek  Jamul Tiipay  Japanese , , –, –,  Q. , , ,  Q. , –, –, ,  Javanese  Jéi  Ju|0 hoan  Jyarong ,  Kabardian ,  Kalkatungu , – Kannada – Karitiâna ,  Karo-Batak  Karok  n.  Kartvelian languages  Kashaya  Kashmiri  Kati  Kayardild ,  Ket ,  Kewa  Khanty  Kharia  Khasi , , ,  Khmer ,  Khoisan languages  Kinyarwanda , –, , – Kiribatese  Kiribati –,  Kobon – Kolami – Kolana  Kongo  Korean –, ,  Q. , –, , , , , , –,  Q. , –, , ,  Koryak  Kosraean , , 




Koyra Chiini  n.  Krongo – Kuman  Kunjen  Kusaiean see Kosraean Kutenai ,  Kwakiutl  Ladakhi  Lahu ,  Q.  Lak  Lakhota ,  Lamang  Lango , ,  Latin , ,  Q. , , , , , , , , ,  Latin, classical  Latvian , – Lavukaleve  Laz  Lezgian  Lhasa Tibetan  Q.  Lithuanian , –,  Lower Umpqua  Luganda ,  Madurese  Maidu  Malagasy , , , ,  Malakmalak  Malay  Malayalam – Mam  Mamaindé – Mandarin see Chinese, Mandarin Mande  Mandinka  n.  Mangarayi  Mapudungan  Maranao ,  Marathi  Margi  Q.  Maricopa , –,  Marúbo – Mauritian Creole 

Maxakalí  Mayan languages , ,  Mba – Mbabaram  Mbay ,  Mekeo  Meso-American languages  Q. , ,  Miao-Yao languages ,  n.  Middle Atlas Berber  Mien  Minangkabau  Mixtec  n.  Mohawk  Mongolian ,  Mon-Khmer languages  Moroccan Berber  Motuna  Q.  Mparntwe Arrernte ,  Mundurukú  Múra Pirahã – Murle  Nadëb  Nambikuára  Nanai  Nenets  Nepali ,  New Guinea , , , , , , , , , , , –; see also Australia–New Guinea New World  Newari  Nez Perce , , ,  Nganasan , –, ,  Ngandi – Ngarluma  Q.  Nhanda ,  Nias  Niger-Congo languages , , ,  Niger-Kordofanian  Nilotic languages ,  Niuean , , , ,  Q.  Nivkh (Gilyak) ,  Nocte 




Nomatsiguenga  Q.  North American languages , , –, , , , , , , –, , , , , , , , ,  Q. , , , , , , , , , , ,  North Frisian  Northeast Caucasian  Northern Embera  Northern Pauite  Northern Tepehuan ,  n. 

Pomoan languages  Ponca  Portuguese  Potawatomi  Proto-Indo-European  Proto-Pamir ,  Qafar – Q’eqchi’ –,  Qiang , –, – Quechua , – Quiché , , – Quileute 

Oceanic languages , , , , , , , , ,  Q. , , , , ,  Oirata  Ojibwa  n.  Oksapmin –, ,  Oñati Basque  Oneida  Onondaga  Oroch  Paamese  Pacific languages , , , , , ,  Páez  Palauan  n. , , , ,  Palikur  Pama-Nyungan languages  Panare  Panyjima  Papuan languages ,  Päri  Paumarí ,  Persian  Philippine languages ,  Pintupi  Pintupi-Luritja –,  Pipil  Q.  Pirahã , , , ,  Pitta-Pitta – Plains Cree , –,  n. , ,  Polish , , , –

Rapanui  Q.  Rembarnga  Retuarã  Romance languages ,  Romanian ,  Roncalese Basque  Rotokas , , ,  Roviana  Rukai  Rushan – Russian , , , , , ,  Sahaptin –,  Sahu  Sakana Basque  Salishan languages  Salt-Yui  Sanskrit ,  Q.  Sanuma  Sema  Semelai  Semitic  n. , ,  n. , , ,  Seneca  Sentani  Serbo-Croatian  Shasta – Shibacha Lisu  Shilluk – Shipibo-Konibo ,  Shona 




Sinhala ,  Sino-Tibetan  Siona  Siouan languages  Slave , –,  Slovenian – Songhai (or Sonrai)  South American languages , , –, , , , , , , –, , , , , , , , ,  Q. , , , , , , ,  South-Asian languages ,  South-East Asian languages , , , , , , , , , , , , , , , , , , , ,  Southern Tepehuan  Southern Tiwa , , – Spanish , , , –, , ,  n. , , –,  Q. , , , , , ,  Q.  Spanish, South American  Spoken Eastern Armenian  Sumerian  Svan  Swahili , , –, , –, ,  Q. , –, ,  Q. ,  Taba  Tagalog  n.  Taiap  Tamil – Tamil, Classical  Tarahumara  Tariana –, , , , , , , ,  Tashlhiyt  Tauya  Tawala  Tenejapa Tzeltal  Q.  Tetelcingo Nahuatl  Tetun  Thai , , , ,  n.  Tibetan 

Tibeto-Burman languages  n. , , ,  Ticuna  Tigak  Tiwi  Tlapanec  Tlingit  Toba Batak  Tolai  Tongan  Tonkawa  Torwali ,  Trukese – Trumai ,  Tsafiki , – Tsimshian  Tsou  n.  Tubu  Tucano –, , ,  Tukang Besi  Tungus , ,  Tupinambá  Turkana ,  Turkic languages ,  Turkish –, ,  Q. , , , , , , , , ,  Turkmen  Q.  Tuvan – Tuyuca  Tuyuka  Tzotzil –, ,  Tzutujil  Ulcha  Una  Uradhi  Uralic languages ,  Urdu ,  Urhobo  Urubu-Kaapor ,  Usarufa  Q.  Ute ,  Uto-Aztecan languages  Vietnamese 




Wambaya  Wangkumara –,  Wanka Quechua , –, ,  Warao  Waray  Warekena  Wari  Warlpiri –, –,  n. ,  Warrwa – Washo  Waskia  Waurá  Welsh  n. , , , – Weri  Wichita  Winnebago  Wintu  Woleaian  Wolof , 

Xokleng ,  !Xóõ ,  n.  Yagua ,  n.  Yalarnnga , –, , – Yamphu  n.  Yanomámi  Yapese  Yareba  Yidiny ,  Yimas  Yindjibarndi ,  Yoruba ,  Yucatec – Yuchi  Yuman languages  Yurok  n.  Zuni 



OUP CORRECTED PROOF – FINAL, 29/11/2017, SPi

Subject index
Note: Page numbers in bold refer to tables; page numbers in italic refer to figures; Q refers to questions.
absolutive case , , , , , –, , –, , ; see also ergative–absolutive alignment Accessibility Hierarchy –, , , ,  Q.  accusative alignment ,  Q. ,  accusative case –, , , , , , , , –, ; see also nominative–accusative alignment active , , –, ,  Q.  active–stative alignment , , –, –, , –,  n. , , , –, , , , , , , , ,  active–stative P-alignment –,  actor –, – adjectives , ,  Q. , , , –, , –, –, , ,  adjuncts –, , , , –, , , –, , , , , , –,  adpositions –, –, , , , , , ,  n. , –, , –,  adverbs  affirmation ,  affixes  Q. , , –,  n. , , ,  applicative affixes –,  causative affixes –, –,  Q.  evidential affixes – person-marking affixes , , , 

agency (or agentivity) ,  Q. , , , , – Agency Hierarchy , –,  n.  agentive–patientive alignment see active– stative alignment agents , –, –, , , –, , , , , , , –, ,  agent phrases –,  case marking , –, –,  agglutinative languages ,  n.  agreement , , –, ,  n. , –, , –,  case  n. , , –, , ,  person agreement  n. , , , , , –, ,  verbal agreement , –, , –, , , –, –, , , –, , , , ,  Q. , , , –, , , ,  alignment , , –; see also accusative alignment, case alignment analogous languages  analogy ,  anaphoric –,  animacy , , –,  Q. , , ,  n. , , , –, , , , , ,  Q. ,  Animacy Hierarchy , –,  n.  Animated First Principle –, – anticausatives (or non-causatives) – antipassives , , , , –, , , , , 


antipassivization , , , , ,  aorist tense – applicatives –, –, ,  Q. ,  Q. ,  Q.  approximants ,  areal bias , , –, , , , , , ,  Q. , – areal diffusion , , , –, , , –, –,  areal distance ,  areal distribution , , , , , –, , , , , –, , , , –,  Q.  areal stratification –, ,  n.  areal typology , , , , –, , –,  areal word order typology – argument addition –, , –, , – argument density control – argument marking , , , –, –, –, –, –, , –,  Q. , – Argument Precedence  n.  argument reduction , –, ,  argument roles –, , , , , , –, , –, ,  argument structure , , , , – articles , ,  articulatory difficulty ,  n. , ,  aspect –, , , , , , , –,  Q. , –, ,  Q. ,  Q.  atelic verbs ,  Atlas of Pidgin and Creole Language Structures Online (APiCS Online)  attention flow –, ,  Q.  AUTOTYP  auxiliary verbs  Q. , , –, , , , –, 

basic colour categories/terms  basic word order , –, , , , , , –, –, –, , –,  benefactives ,  beneficiary ,  n. , , ,  n. , , –,  bias –, ,  bibliographical bias ,  cultural bias –,  typological bias , , , ,  see also areal bias, genealogical bias bilabials  borrowing , ,  Branching Direction Theory (BDT) –,  Q. ,  Q.  case see absolutive case, accusative case, dative case, ergative case, genitive case, nominative case, partitive case case alignment ,  n. , , , –, –, ,  and applicatives ,  n.  and person marking , –,  and word order , –,  see also active–stative alignment, double oblique alignment, ergative– absolutive alignment, hierarchical alignment, horizontal alignment, indirective alignment, neutral alignment, nominative–accusative alignment, P-alignment, S-alignment, secundative alignment, split-ergative alignment, tripartite alignment case marking , , ,  Q. , , –,  n. , , –, –, –, , , , , , , , , –, –, , –, ,  attention flow and viewpoint –, ,  Q.  discriminatory view of –,  n. , , –, ,  indexing view of , –,  and word order , –, 




categories , , –, , ,  Q. ; see also descriptive categories, grammatical categories, lexical categories, prototypical categories categorization –, , , ,  Q.  causatives , –, , ,  n. , –, –, –, –,  Q.  causees  n. , –, – morphological (or lexical) causatives  n. , –, –, ,  Q.  syntactic causatives  n.  classification  areal classification – genealogical classification , , –, , , ,  morphological , ,  Q. ; see also language classification, typological classification classifiers ,  n.  clicks ,  clitics , , ,  n. , , –, , , , ,  Q. , , ,  coding –, –, –, , , , , , –, , , , , ,  Q. , –; see also formal coding, number coding, zero coding cognitive accessibility  colour terms see basic colour categories/ terms comitatives –, , , –,  comparative concepts –, ,  n. , –, –,  Q.  conceptual–semantic concepts ,  formal concepts ,  primitive comparative concepts  comparative constructions , , , , ,  Q. , , ,  Q. ,  Q. ,  Q. ,  Q. , , –, , –, ,  Comparative Method , , , , , 

complement clauses –,  complementizers  Q.  completive aspect ,  conceptual categories , ,  conceptual space see semantic maps consonant alternations – consonant inventories , , –, , –, –, – Size Principle , – consonant–vowel ratios –,  Q.  consonants , , , –, –, , , ,  Q.  consonant clusters  Q. , –, ,  Q. ,  Q.  constituent order , , , , ,  Constituent Recognition Domain (CRD) –, , ,  constituent structure , ,  Q. , –, , , –,  n. , , , ,  constraint ranking , – constraints , , , , –,  Q. , , ; see also grammatical constraints contact sociocultural contact ,  see also language contact control constructions ,  coreference – counter-iconic situations – cross-linguistic comparability –, –, , ,  n. , , , , , ,  cross-linguistic identification , –, –,  cultures –,  n. , –,  Darwinism  data , –, –, –,  corpus data  primary data  secondary data  data collection , , , –




grammars , , , ,  Q. , –, , , , , ,  Q. , –,  Q.  levels of measurement and coding – online typological databases , –, , , ,  n. ,  texts , , –, ,  Q.  working with native speakers , –, –, ,  Q. ,  Q.  data matrix –,  Q.  databases  Q. , , ,  n. , –, , , ,  n. ,  dative case ,  Q. , , ,  n. , , , , , –,  Q. ,  ‘dative’ functions –,  n. ,  Q.  dative subject  definiteness and indefiniteness  Q. ,  n.  deictics , ,  demonstratives  n.  dependent marking –,  n. , , , ,  derivational categories –, , , – descriptive categories –, –,  Q. ,  Q.  determiners ,  n.  detransitivization , , , – diachrony ,  Q.  dialects , , , –,  differential subject/object marking , – direct object –, , , –, –, –, –,  Q. ,  Q. , –,  discourse –, , , ,  discourse roles , – discourse salience , –, – distance areal ,  cultural 

genealogical , , , , ,  in semantic maps – distribution see areal distribution, typological distribution ditransitive constructions , , , –, , , , ,  Q. , , –, , , , , – diversity , , , , –, , ,  Q. , ,  genealogical diversity , –, , –,  structural diversity , –, –, –, –, , , –,  typological diversity ,  diversity value – documentation see language documentation dormant languages  n.  double oblique alignment (or accusative focus alignment) –, , ,  DP hypothesis  n.  Early Immediate Constituents Theory (EIC) –, –, , ,  Q. ,  Q. ,  Q.  economy –, –, ,  n. , , –, , – egocentrism , , , ,  enclitics , ,  endangered languages  Q. , ,  Enlightenment see Rationalism epistemic possibility ,  n.  ergative case , , , , , , , , , , , , , , ,  ergative–absolutive alignment , –, , –,  n. , –, –,  n. , , , , –, , , ,  Q. , , –, , , –, –, , ,  n. , – marked absolutive languages  ergativity 



OUP CORRECTED PROOF – FINAL, 29/11/2017, SPi

SUBJECT INDEX

ethnopsychology
events
evidentiality
  semantic parameters
evidentiality distinctions
  assumed evidential
  direct evidential
  first-hand and non-first-hand evidence
  hearsay
  inferred evidential
  non-visual sensory
  order of preference
  participatory
  quotative
  visual
evidentiality marking
  grammatical constraints on
  morphological form
  and person
  and tense
evidentiality systems
evolution see language evolution
exclusive (person marking)
existence dependency
experiencer
explanation
Explicitness Hierarchy
extended demotion
external or non-linguistic factors
extinct languages
family resemblance
features
Feller–Arley process
filler
filler-gap dependency
first-language acquisition
flagging
fluid-S
focus (focal)
foregrounding
formal coding
  differential formal coding
formal linguistics
frequency
  in distribution
fricatives
Function Contiguity Hypothesis
functional linguistics
Fundamental Principle of Placement (FPP)
fusional languages
future tense
games
gap
garden paths (in parsing)
gender
genealogical bias



genealogical classification and groups; see also genera
genealogical distance
genealogical diversity
genealogical relatedness
  remote genealogical relatedness
genera
generative grammar
genitive case
genitives
  relativization
  word order
glides
grammars
grammatical categories
  universal grammatical categories
  see also comparative concepts, descriptive categories
grammatical constraints
grammatical contexts
grammatical relations
  and person marking
grammatical relations hierarchies
grammaticalization
group (person)
habitual aspect
Head-Dependent Theory (HDT)
head-marking
Heavy NP Shift
hierarchical alignment (or direct-inverse system)
hierarchical P-alignment
hierarchies; see also Accessibility Hierarchy, Agency Hierarchy, Animacy Hierarchy, Explicitness Hierarchy, grammatical relations hierarchies, Horizontal Homophony Hierarchies, markedness hierarchies, Nominal Hierarchy, person hierarchy, Person/Animacy Hierarchy, semantic roles hierarchy, Sonority Hierarchy, Transitivity Hierarchy
historical factors
historical linguistics
homophony
  horizontal homophony
  singular homophony
  vertical homophony
horizontal alignment
Horizontal Homophony Hierarchies
iconicity
idiolects
immediate matrix disambiguation
imperatives
imperfective aspect



impersonal passives
implicational hierarchies; see also Accessibility Hierarchy, Animacy Hierarchy, grammatical relations hierarchies, Nominal Hierarchy
implicational typology
implicational universals
inclusive (person marking)
  augmented inclusive
  minimal inclusive
incorporating languages
indexing
indicative
indirect object
indirective alignment
infinitives
inflection
inflectional languages
inflectional morphology
inner form (of language)
inpositions
instrument
instrumentals
intensifiers
intonational typology
intransitive constructions
inverse coding
irrealis
irregular morphology
isolates
isolating languages
Jazyki mira ‘Languages of the World’
judicantis
kin terms
kinesis
language
  as a concept
  vs dialect
language acquisition
language change
language classification
language contact
  remote language contact
language diversity see diversity
language documentation
language evolution
language families
language family trees
language phyla (or stocks)
language populations
  birth-and-death process
  type-shift process
language samples/sampling
  and bias
  convenience (or opportunity) samples
  independence of cases



language samples/sampling (cont.)
  probability samples
  proportional representation in
  sampling procedures
  stratification
  variety samples
language typology see linguistic typology
language universals
  absolute language universals
  distributional universals
  exceptionless universals
  non-implicational universals
  restricted or conditional
  unrestricted or unconditional
  see also implicational universals, universal preferences
language use
languages, number of
large-scale typology
laws of language development
least effort (principle of)
left-branching
Leipzig Valency Classes Project
Leningrad Typology Group
lexical categories
lexical content (alignment)
Lexical Domain (LD)
lexical properties and dependencies
lexicon
linear order
linguistic area see Sprachbund
linguistic form
linguistic prehistory
linguistic typology
  compared with generative grammar
  description of
  historical overview of
liquids
living languages
Locative Alternation
locatives
macroareas
manner
markedness
markedness hierarchies
markedness reversal
marking constituency
massively parallel texts
Maximize On-line Processing (MaOP)
mechanistic physics
Minimal Structural Domain
Minimalist Program (MP)
Minimize Domains (MiD)
Minimize Forms (MiF)
mirativity
modality
  epistemic possibility
  participant-external modality
  participant-internal modality
monotransitive constructions
mood
morphemes see morphological functions
morphological causativization
morphological distinctions
morphological forms
  evidentiality marking



morphological forms (cont.)
  person marking
morphological functions
morphological typology
morphology
  inflectional morphology
  regular/irregular morphology
  verb morphology
morphosyntax
mother-node-constructing categories (MNCCs)
Multidimensional Scaling (MDS)
narrative past
nasal consonants
native languages
native speakers
negation
negatives
Neogrammarians
neutral alignment
  P-alignment
  S-alignment
‘new’ languages
Nominal Hierarchy
nominalization
nominative case
nominative–accusative alignment
  marked nominative languages
non-absolute language universals see universal preferences
non-causatives
non-phrasal categories
noun (N) + adjective (A)
noun (N) + article (Art)
noun (N) + genitive (G)
noun (N) + postposition (Po)
noun (N) + relative clause (Rel)
noun incorporation
noun-modifying construction
noun phrase ellipsis
noun stripping
nouns
  and case alignment types
number
  agreement
  dual
  see also person/number marking
number marking/coding
O (Object)
object patterners
object-initial languages; see also OSV, OVS
objects
  applied object
  basic object
  object of comparison
  primary objects
  secondary objects
  see also direct object, indirect object
oblique



oblique case
obstruents
obviative
operands
operators
Optimality Theory
OSV (Object-Subject-Verb); see also OV word order
OV (Object-Verb) word order
  and adpositions
  and case alignment
  and N(oun)A(djective) order
  and N(oun)Art(icle) order
  and N(oun)G(enitive) order
  and N(oun)Rel(ative) order
  and V(erb)Aux(iliary) order
OV-VO typology
OVS (Object-Verb-Subject); see also OV word order
P-alignment
paradigms
  person-marking paradigms
participants (transitivity)
participials
particles
partitive case
passives
passivization
past tense
patient
  case marking/coding
perfect
perfective aspect
performance; see also processing
Performance–Grammar Correspondence Hypothesis (PGCH)
person
Person/Animacy Hierarchy
person hierarchy
person markers/marking
  bound person markers
  clitic person markers
  dependent person markers
  first person
  and gender
  independent person markers
  and morphological form



person markers/marking (cont.)
  paradigmatic structures
  pure person
  second person
  third person
  weak person markers
  zero person markers
person/number marking
  dual
  first person plural or group
  first person singular
  group
  homophony
  inclusive/exclusive distinction
  plural
  second person plural or group
  second person singular
  split group marking
  third person plural or group
  third person singular
person/tense marking
personal passives
personal pronouns
pharyngeals
PHOIBLE Online
phonemes
phonological substance
phonological typology; see also prosodic typology, segmental typology, syllabic typology
phonology
phrasal categories
Phrasal Combination Domain (PCD)
phrase structure
plosives
polysynthetic languages
possession
  external possession
possessor
  external possessor
  predicative possessor
postpositions
  and word order
poverty of stimulus
pragmatic functions
pragmatic neutrality
Prague School of Linguistics
predicates
predication



prefixes
  case marking
  person marking
preposition (Pr) + noun (N)
prepositions
  and word order
present tense
preterite
Principle of Cross-Category Harmony (PCCH)
Principle of Early Immediate Constituents (PEIC)
Principle of Natural Serialization (PNS)
Principle of Uniformitarianism
Principles and Parameters Theory
processing
  Early Immediate Constituents Theory
  ease of
  efficiency and speed in
  processing principles and domains
  word order and
  see also Maximize Online Processing, Principle of Early Immediate Constituents
proclitics
pronouns
  indefinite pronouns
  relative pronouns
  see also personal pronouns
prosodic typology
prototype theory
  prototype in grammar
  prototypes
  prototypical categories
proximate
punctuality
question particles
questionnaires
Rationalism
realis
recipient
reciprocal
redundancy
reference tracking
reflexives
regularity
  morphological regularity
relative clauses
  adjoined clauses
  nominalized non-finite clauses
  noun-modifying construction
  word order
relativization
rhythmic typology



right-branching
Romanticism
rule-based phonology
S (Subject)
S-alignment
  distribution of
  types
  variation on
samples/sampling see language samples/sampling
secundative alignment
segmental typology
semantic concepts and definitions
Semantic Map Connectivity Hypothesis
semantic maps
semantic roles/relations
semantic roles hierarchy
semantics
  and case marking
sense dependency
sentence patterns
sonority
Sonority Hierarchy
Sonority Sequencing Principle
sound changes
sounds
SOV (Subject-Object-Verb)
  and adpositions
  and N(oun)A(djective) order
  and N(oun)Art(icle) order
  and N(oun)G(enitive) order
  and N(oun)Rel(ative) order
  and V(erb)Aux(iliary) order
  see also OV word order
specific selectional restrictions
speech act participants
speech levels
split-ergative alignment
split-objectivity
split-S
Sprachbund (or linguistic area)
stability
stationary distribution
statistical language universals see universal preferences
stops
stratification
stress
stress systems
StressTyp
structural complexity
structural dependencies
structural linguistics
structural properties
structural types
structural variation
structuralism
subject-initial order; see also SOV, SVO



subjects , , , , , –, , , , , , ,  n. , , –, –, –, –, –, –, –, , –, ,  Q. ,  Q. , , , –, –,  Subjects Front Principle ,  subjunctive  Q.  suffixes  n. , , , , , , , , ,  Q.  case marking , –, , , , –, –, , –, –, –, , , , , ,  causative , , –,  locative – passive and antipassive , –, , –, ,  person marking –, ,  see also evidentiality distinctions Summer Institute of Linguistics (SIL) Language Data  surface form – Surrey Morphology Group Databases – SVO (Subject-Verb-Object) , –, –, , , –, –, –, , , , , , , , , –, –, –, , ,  and adpositions , , ,  and N(oun)A(djective) order  and N(oun)G(enitive) order  and N(oun)Rel(ative) order , , ,  see also VO word order syllabic typology , – syllables ,  n.  and sonority –,  stressed syllables – syllable structure , –, , ,  symmetrical dependencies  synchrony 

syntactic structures , , , , , , , , –,  Syntactic Structures of the World’s Languages  syntax , , , ,  telic verbs ,  temporal distinctions , ,  Q.  tense –, , ,  Q. , , – agreement , , , , ,  and aspect –, –,  Q.  and evidentiality marking –, –,  non-future tense ,  tense/person marking , , ,  tetrachoric tables – theme , , –,  Theme First Principle –, –,  time depth –, , , , , ,  tone , –, ,  Q.  topicality or discourse salience , –, –, ,  transitivity , –, ,  Q. , –, , , –, ,  affectedness of P , –, , ,  Q. , , ,  basic transitive clauses , , , , , , –, , , –, , –,  individuation of P , , , –,  kinesis , –,  Q. , ,  volitionality , –, ,  Transitivity Hierarchy  n.  transitivity prominence ,  n.  transpositive languages  tripartite alignment  n. , ,  P-alignment , –, ,  S-alignment , –, ,  n. , , , –, , 



typological analysis; see also comparative concepts, cross-linguistic comparability, cross-linguistic identification, descriptive categories
typological asymmetry
  differential formal coding
  grammatical behaviour
  left–right asymmetry
  see also economy, frequency, iconicity, markedness
typological classifications
Typological Database System Project
typological distribution
  case alignment types
  evidentiality marking
  person marking
  word order
typology
UCLA Phonological Segment Inventory Database
unity
Universal Grammar
universal preferences
universal principles
universals see language universals
UPSID database
V (Verb)
valency; see also antipassives, antipassivization, applicatives, causatives, noun incorporation, passives, passivization
variables
  categorical (or nominal) variables
  interval variables
  ordinal variables
  ratio variables
variation
  cross-linguistic variation
  language-internal variation
  see also structural variation
verb (V) + auxiliary verb (Aux)
verb morphology
verb particles
verb patterners
verb position and adpositions
verb-final (V-final) order; see also OSV, OV word order, SOV
verb-initial (V-initial) order; see also VO word order, VOS, VSO
  and adpositions
verb-medial (V-medial) order; see also SVO
Verb–Object Bonding Principle
verbal inflections



verbs , , , , , , , , , , , , , , –, – case marking , , , , –, –, , –,  n. , –, –, –, –, , –, –, –, , –, , –, –,  Q. ,  person marking , –, , –, ,  Q.  viewpoint –, ,  Q.  VO (Verb-Object) word order  n. , –, , –, , , –, ,  Q.  and adpositions  n. , , –, , , , , , –,  n. , –, – and case alignment , – and N(oun)A(djective) order , , –,  and N(oun)Art(icle) order –,  and N(oun)G(enitive) order , , ,  Q. ,  Q.  and N(oun)Rel(ative) order –, –, , , , , , , –, , , –, ,  n. , , ,  and V(erb)Aux(iliary) order , –,  voicing , –, ,  n. , , , ,  volitionality , –, ,  VOS (Verb-Object-Subject) , –, , , , , –, , , , ,  n. , –, ,  and N(oun)A(djective) order  and N(oun)Rel(ative) order ,  see also VO word order vowel alternations – vowels , , , , , –, , –, ,  Q. ,  consonant–vowel ratios –,  Q. 

oral and nasal vowels –, –,  n. , –, , – vowel harmony , ,  vowel inventories , –, , –, ,  VSO (Verb-Subject-Object) , –, , ,  n. , , , , , , , , , ,  n. , –, , ; see also VO word order and adpositions , , , ,  and N(oun)A(djective) order ,  and N(oun)G(enitive) order  and N(oun)Rel(ative) order , ,  and V(erb)Aux(iliary) order – see also VO word order WALS Online ,  Q. , ,  wh in situ  wh-movement –,  WH-questions  word classes  word order , ,  Q. , , , –,  abstract and surface word orders –, , –,  and case alignment , –,  flexible or free word order , – no dominant word order  n. , , ,  see also basic word order, linear order, O(bject), object-initial languages, OSV, OV word order, OV-VO typology, OVS, S(ubject), SOV, subject-initial order, SVO, V(erb), verb-final order, verb-initial order, verb-medial order, VO word order, VSO word order correlations or co-occurrences , , , –, –, , , , –, –, , –, 



word order typology; see also areal word order typology
words
World Atlas of Language Structures (WALS)
X-bar Theory
yes–no question rule
zero coding
zero point

