The Oxford Handbook of Morphological Theory (Oxford Handbooks) 9780199668984, 0199668981

This volume is the first handbook devoted entirely to the multitude of frameworks adopted in the field of morphology, in


Table of contents:
Cover
The Oxford Handbook of Morphological Theory
Copyright
Dedication
Contents
Acknowledgements
List of Abbreviations
The Contributors
Chapter 1: Introduction: Theory and theories in morphology
1.1 Welcome
1.1.1 About the volume
1.2 Morphological theories
1.2.1 What is the goal of morphological theory?
1.2.2 Where is morphology?
1.2.3 Basic units and processes
1.2.4 Morphology and syntax
1.2.5 Morphology and semantics
1.2.6 Morphology and phonology
1.2.7 Morphology and the lexicon
1.2.8 Taxonomies of theories
1.3 The structure of the handbook
1.3.1 Part I: Issues in morphology
1.3.2 Part II: Morphological theories
1.3.3 Part III: Morphological theory and other fields
1.4 Conclusion
PART I: ISSUES IN MORPHOLOGY
Chapter 2: A short history of morphological theory
2.1 Antecedents of generative morphology
2.1.1 Edward Sapir
2.1.2 Leonard Bloomfield
2.1.3 Classical American structuralism
2.2 Morphology in classical generative grammar
2.2.1 Early Transformational Grammar
2.2.2 The Aspects theory
2.3 The rediscovery of morphology
2.3.1 Lexicalism in syntactic theory
2.3.2 Generative Morphology comes into its own
2.4 Conclusion: Varieties of morphological theory
Acknowledgements
Chapter 3: Theoretical issues in word formation
3.1 Introduction: What is the goal of a theory of word formation?
3.2 Theory and structure
3.3 Components: The place of word formation in the grammar
3.3.1 The interface with syntax
3.3.2 The interface with phonology
3.4 Morphological theory and lexical semantics
3.4.1 Derivation
3.4.2 Compounding
3.5 Other theoretical issues
3.5.1 Headedness
3.5.2 Productivity and blocking
3.5.3 Affix ordering
3.5.4 Bracketing paradoxes
3.5.5 Derivational paradigms
3.6 Conclusion
Chapter 4: Theoretical issues in inflection
4.1 What is inflectional morphology?
4.2 What are the basic units in terms of which a language's inflectional morphology is defined?
4.3 What sorts of structures does a language's inflectional morphology define?
4.3.1 Amorphousness
4.3.1.1 First reason: a word form’s morphology may underdetermine its content
4.3.1.2 Second reason: a word form’s morphology may overrepresent its content
4.3.1.3 Third reason: a word form’s morphology may misrepresent its content
4.3.2 Paradigms
4.4 What is the relation between concatenative and nonconcatenative inflectional morphology?
4.5 How is the relation between a word form's morphosyntactic properties and its inflectional exponents defined?
4.6 What distinguishes inflectional morphology from other kinds of morphology?
4.7 Conclusion: Is inflectional morphology autonomous, that is, defined separately from syntax?
4.8 Further Reading
PART II: MORPHOLOGICAL THEORIES
Chapter 5: Structuralism
5.1 Background
5.1.1 Structuralism and morphology
5.2 The architecture of the grammar
5.2.1 Where is morphology: autonomous component or not?
5.3 Basic issues
5.3.1 The representation of morphological processes
5.3.2 The basic units of morphological analysis
5.4 The sub-parts of morphology
5.4.1 The treatment of word-formation
5.4.2 The treatment of inflection
5.5 Interfaces
5.5.1 Morphology–lexicon interface
5.5.2 Morphology–phonology interface
5.5.3 Morphology–syntax interface
5.5.4 Morphology–semantics interface
5.6 Other factors
5.6.1 Variation within and across languages
5.6.2 Language change
5.6.3 Frequency and productivity
5.6.4 The role and relevance of experimental methods
5.7 Concluding remarks
5.8 Further reading
Chapter 6: Early generative grammar
6.1 Morphology in the earliest stages of generative grammar
6.2 The introduction of the lexicon and constraints on transformations
6.3 The lexicalist hypothesis
6.4 Elaborations of the lexicalist hypothesis
6.5 Transformationalist approaches
6.6 General theoretical issues
6.7 Evaluation and legacy
Chapter 7: Later generative grammar and beyond: Lexicalism
7.1 Introduction
7.2 Historical background
7.2.1 Early generativism
7.2.2 The Lexicalist Hypothesis
7.2.3 Early lexicalist models (Halle ; Jackendoff )
7.2.4 Derivation in the lexicalist framework
7.2.5 Inflection in the lexicalist framework
7.2.6 Further developments of lexicalist models
7.3 Units and concepts of lexicalist morphology
7.3.1 The lexicon
7.3.2 Words
7.3.3 Minimal lexical units
7.3.4 Conditions on morphological rules
7.3.5 Head
7.4 Conclusion: Lexicalism Today
Acknowledgements
Chapter 8: Distributed morphology
8.1 Introduction
8.2 Background
8.2.1 Rejecting Lexicalism
8.2.2 The model of DM
8.3 Morphology in DM
8.3.1 Morpheme-based morphology
8.3.2 Item-and-arrangement
8.3.3 Underspecification
8.3.4 Rules
8.3.5 DM as a realizational model
8.4 The interface with phonology in DM
8.5 Major morphological issues in DM
8.5.1 Derivation vs. inflection
8.5.2 Productivity
8.5.3 Blocking
8.5.4 Functional vs. lexical
8.5.5 Allomorphy
8.6 Further research
Acknowledgements
Chapter 9: Minimalism in morphological theories
9.1 Main Features of minimalist theories
9.1.1 Language as an optimal solution to the externalization of thought
9.1.2 Third factor explanations
9.1.3 Impoverishing UG
9.1.4 The lexicon in orthodox Minimalism
9.2 Reduction of CS principles: Its application to morphology
9.2.1 Deriving lexical restrictions from Merge
9.2.1.1 Deriving grammatical categories
9.2.1.2 Treating restrictions as interface conditions
9.2.1.3 Deriving Aktionsart and argument structure
9.2.2 Removing principles from CS
9.2.2.1 Feature percolation is derived from Merge
9.2.2.2 Deriving (what is left of) the No Phrase Constraint
9.3 Spell-out as a way to restrict the output of CS
9.3.1 Idiosyncratic restrictions of exponents
9.3.2 Rules of spell-out (1): linearization
9.3.3 Rules of spell-out (2): Phrasal spell-out and its consequences
9.4 Inflection: Removing Agree from CS
9.5 Features: How different are heads that get merged?
9.5.1 A feature-free CS
9.5.2 Features as part of CS
9.6 Some concluding remarks
Acknowledgements
Chapter 10: Optimality theory and prosodic morphology
10.1 Introduction
10.2 Morphological issues raised by prosodic morphology
10.3 Optimizing the 'concatenative ideal' and deviations from it
10.3.1 Constraint types in OT
10.3.2 Segmental dependence
10.3.2.1 Segmental dependence as correspondence
10.3.2.2 Segmental dependence without construction-specific correspondence
10.3.3 Fixed shape as the emergence of the unmarked
10.3.3.1 Morpheme-specific templates
10.3.3.2 Generalized Prosodic Hierarchy-based templates
10.3.3.3 Generalized morpheme-based templates
10.3.3.4 Violation of additivity: nicknames, abbreviations, and other templatic truncations
10.3.3.5 Partial reduplication in theories without Base-RED correspondence
10.3.4 Violations of proper precedence and contiguity
10.3.4.1 Semitic root-and-pattern morphology
10.3.4.2 Infixation
10.4 Evaluation
10.4.1 Contributions of OT to the analysis of prosodic morphology
10.4.2 Unresolved issues
10.5 Concluding remarks
Further Reading
Chapter 11: Morphology in lexical-functional grammar and head-driven phrase structure grammar
11.1 Background
11.1.1 Overview of LFG
11.1.2 Overview of HPSG
11.2 Basic issues
11.2.1 The representation of morphological processes
11.2.1.1 In LFG
11.2.1.2 In HPSG
11.3 The subparts of morphology
11.3.1 Word formation
11.3.1.1 In LFG
11.3.1.2 In HPSG
11.3.2 Inflection
11.3.2.1 In LFG
11.3.2.1.1 Multiple exponence
11.3.2.1.2 Constructive morphology
11.3.2.2 In HPSG
11.3.2.2.1 Paradigms
11.3.2.2.3 Variable morph ordering
11.3.2.2.4 Floating affixes and HPSG domains
11.4 Interface with syntax
11.4.1 In LFG
11.4.2 In HPSG
11.5 Further reading and references
Acknowledgements
Chapter 12: Natural morphology
12.1 The growth of natural morphology
12.1.1 The functionalist nature of NM
12.1.2 The cognitive roots of NM
12.1.3 Naturalness at the different levels of linguistic analysis
12.2 The semiotic basis of NM
12.2.1 Iconicity and semiotic parameters of naturalness
12.2.2 Cognitive endowment and universal parameters of naturalness
12.2.3 Conflicting levels of adequacy
12.3 The strategy of NM
12.3.1 Scales of transparency
12.3.2 Morphotactic transparency and naturalness
12.3.3 The dynamic dimension in morphology and natural language change
12.4 System-dependent naturalness
12.4.1 System adequacy and markedness reduction
12.4.2 The role of paradigms in markedness reduction
12.4.3 Contrasting system adequacy and diagrammaticity
12.5 Conclusion and outlook
Acknowledgments
Chapter 13: Word and paradigm morphology
13.1 Introduction
13.1.1 The locus of morphological variation
13.1.2 Models and classification
13.2 Words and paradigms
13.2.1 Two models of grammatical description
13.2.2 In defence of WP
13.2.3 Periphrastic expression
13.2.4 Parts and wholes
13.2.5 Gestalt exponence
13.3 The 'item and pattern' model
13.3.1 Morphological organization
13.3.2 Morphological information
13.4 Concluding observations
Chapter 14: Paradigm function morphology
14.1 Background
14.2 Basic features
14.2.1 PFM1
14.2.2 PFM2
14.2.3 Deviations from canonical inflection in PFM2
14.2.3.1 Defectiveness
14.2.3.2 Syncretism
14.2.3.3 Inflection classes
14.2.3.4 Deponency
14.3 An illustrative example of the architecture of PFM2: Bena-Bena verb inflection
14.4 Beyond inflection
14.5 Interfaces
14.6 Future prospects
Chapter 15: Network morphology
15.1 Introduction
15.2 The network morphology framework
15.2.1 Generalized referral
15.3 Case studies
15.3.1 The normal case default and the exceptional case default
15.3.2 Morphological complexity in Nuer
15.3.3 Diachrony
15.4 Conclusion
15.5 Resources
Acknowledgements
Chapter 16: Word grammar morphology
16.1 Background
16.2 Basic issues
16.3 The subparts of morphology
16.4 Interfaces
16.5 Other properties to be accounted for
16.6 Other factors
16.7 Evaluation
16.8 Further reading
Chapter 17: Morphology in cognitive grammar
17.1 Fundamentals
17.2 Morphemes
17.3 Morphological structure
17.4 Unification
17.5 Conclusion
Chapter 18: Construction morphology
18.1 Introduction
18.2 Basic tenets
18.2.1 Sign-based
18.2.2 Word-based
18.2.3 Usage-based
18.2.4 Summing up: the basic architecture
18.3 Theoretical tools
18.3.1 Default inheritance
18.3.2 Connectivity and its functions
18.3.3 Unification
18.3.4 Non-compositionality and headedness
18.3.5 Constraints
18.4 Morphological processes
18.4.1 Word formation
18.4.1.1 Derivation
18.4.1.2 Compounding
18.4.2 Multi-word expressions
18.4.3 Inflection
18.5 Special Issues
18.5.1 Language change
18.5.2 Productivity
18.6 Evaluation
18.7 Further Reading
Acknowledgements
Chapter 19: Relational morphology in the parallel architecture
19.1 Introduction
19.2 The place of morphology in the parallel architecture
19.3 Productive and nonproductive schemas
19.3.1 Schemas vs. rules
19.3.2 Lexical redundancy rules
19.3.3 Productivity
19.3.4 Two functions of schemas
19.3.5 Are nonproductive schemas necessary?
19.4 Formalizing lexical relations
19.4.1 Inheritance with impoverished entries and full entries
19.4.2 Inheritance without inherent directionality
19.4.3 A-morphousness
19.4.4 Sister relations
19.4.5 Sister relations among schemas
19.5 Summary and conclusions
Acknowledgements
Chapter 20: Canonical Typology
20.1 Introduction
20.2 Establishing a canon
20.2.1 Identification of the domain
20.2.2 Identification of parameters of variation
20.2.3 Identification of canonical values
20.2.4 Extrapolation of the sample space
20.3 Application to morphology
20.3.1 Inflectional morphology
20.3.2 Derivational morphology
20.4 Future directions of canonical typology
20.5 Conclusion
20.6 Further Reading
Acknowledgements
PART III: MORPHOLOGICAL THEORY AND OTHER FIELDS
Chapter 21: Morphological theory and typology
21.1 Introduction
21.2 The notion of "word" and its problems
21.2.1 Is “wordform” a typologically valid concept?
21.2.2 Inflection vs. derivation and the notion of “lexeme”
21.3 The relation between meaning and form in morphology
21.4 Syntagmatic dimensions of morphological typology
21.5 Paradigmatic dimensions of morphological typology
21.6 Conclusions
Acknowledgements
Chapter 22: Morphological theory and creole languages
22.1 Goals and main issues
22.2 Introducing creoles and creole morphology
22.2.1 The genesis of creole languages
22.2.2 The bias against creole word structure
22.3 Full reduplication
22.3.1 Survey
22.3.1.1 Meaning and iconicity
22.3.1.2 Morphological form and behaviour
22.3.2 Analysing full reduplication
22.4 Derivation
22.4.1 Survey
22.4.1.1 Superstrate affixes
22.4.1.2 Morphologized free words
22.4.1.3 Affixes of substrate origin
22.4.1.4 Semantic opacity
22.4.2 Analysing derivation
22.5 Inflection
22.5.1 Origins
22.5.2 Form–meaning relation
22.5.3 Analysing creole inflection
22.6 Conclusion
22.7 Further Reading
Acknowledgements
Chapter 23: Morphological theory and diachronic change
23.1 Introduction
23.2 Word-formation change
23.3 Reanalysis
23.3.1 Affix-telescoping
23.3.2 Resegmentation
23.3.3 Grammaticalization and affixoids
23.3.4 Implications for morphological theory
23.4 Productivity
23.4.1 Morphological change as change in productivity
23.4.2 Factors and explanations
23.5 Conclusion
Acknowledgements
Chapter 24: Morphological theory and synchronic variation
24.1 Variation: Assumptions and premises
24.2 Variation in morphological processes
24.2.1 Variation in inflection
24.2.1.1 Paradigmatic leveling
24.2.1.2 Heteroclisis
24.2.1.3 Inflectional variation and language contact
24.2.2 Variation in derivation
24.2.2.1 Producing feminine professional nouns
24.2.2.2 Derivational variation and language contact
24.2.3 Variation in compounding
24.2.3.1 Fluctuating between endocentric and exocentric compounds
24.2.3.2 Compound variation and language contact
24.3 Summary
Acknowledgements
Chapter 25: Morphological theory and first language acquisition
25.1 Introduction
25.2 Linguistic theory and language acquisition data
25.3 Morphologically complex words: Processing and theory
25.4 Omission of regular past tense inflection in English child language
25.5 Overregularizing irregular verbs and nouns in English child language
25.6 What do other languages show?
25.7 Atypical language acquisition
25.8 Concluding remarks
Chapter 26: Morphological theory and second language acquisition
26.1 Inflectional Morphology: Why are some morphemes difficult?
26.1.1 Early studies of morpheme acquisition
26.1.2 Generative approaches to morpheme acquisition
26.1.2.1 Representational deficit approaches
26.1.2.2 The locus of the deficit revisited
26.1.3 Variation at the interfaces
26.1.4 Language tags on morphemes
26.1.5 The phonological interface
26.1.5.1 Right-edge clusters
26.1.6 Production versus comprehension
26.2 From inflectional morphology to derivational morphology
26.3 Compound words
26.4 Next steps
Chapter 27: Morphological theory and psycholinguistics
27.1 Overview
27.2 Brief introduction to psycholinguistics
27.3 Psycholinguistic approaches to morpheme representation
27.4 Relevant data and phenomena
27.4.1 Overview
27.4.2 Whole-word frequency and morpheme frequency effects
27.4.3 Morphological priming
27.4.4 Transposed-letter effect
27.4.5 Morpheme position effect
27.4.6 Summary and implications of empirical findings
27.5 Agenda for future works
27.6 Further Reading
Chapter 28: Morphological theory and neurolinguistics
28.1 What is neurolinguistics?
28.2 Neurolinguistic approaches to morphological processing
28.2.1 Morphological processing models
28.2.2 Comprehension of morphology
28.2.2.1 Electrophysiological studies on morphological violations
28.2.2.2 Neuropsychological studies on morphological violations
28.2.2.3 Electrophysiological studies on the comprehension of derived words
28.2.2.4 Electrophysiological studies on compound comprehension
28.2.2.5 Neuropsychological studies on compound comprehension
28.2.3 Production of morphology
28.3 Future directions
Chapter 29: Morphological theory and computational linguistics
29.1 Introduction
29.2 Theoretical background
29.2.1 The balance between rules and lexicon
29.3 Finite state automata and regular languages
29.3.1 Building an Italian word processor with FSAs
29.3.2 Finite State Transducers
29.3.3 Discussion
29.4 Hierarchical lexica
29.4.1 Discussion
29.5 Machine learning of morphology
29.5.1 Supervised learning
29.5.1.1 Memory-based learning
29.5.1.2 Stochastic modelling
29.5.1.3 Rule induction
29.5.1.4 Connectionist approaches
29.5.1.5 Discussion
29.5.2 Unsupervised learning
29.5.2.1 Minimum Description Length
29.5.2.2 Features and classes
29.5.2.3 Adaptive word coding
29.5.2.4 Discussion
29.6 Summary and concluding remarks
Chapter 30: Morphological theory and sign languages
30.1 Introduction
30.2 Exploitation of phonology in morphology
30.2.1 Sign phonology basics
30.2.2 Phonological parameters and meaning
30.2.2.1 The non-manuals
30.2.2.2 Ion-morphs
30.2.2.3 Iconicity
30.3 Theoretical issues in morphology
30.3.1 New issue: Complexity vs. simplicity
30.3.2 New issue: reactive effort
30.3.3 Familiar issues: roots and lexical categories
30.4 Morphological processes
30.4.1 Horizontal temporal morphology
30.4.1.1 Affixation
30.4.1.2 Compounding
30.4.1.3 Reduplication
30.4.2 Vertical morphology
30.4.2.1 Incorporation
30.4.2.2 Blends
30.5 Morphosyntax
30.6 Conclusion
Acknowledgements
References
Language index
Index of names
General index


OUP CORRECTED PROOF – FINAL, 27/11/2018, SPi

   

THE OXFORD HANDBOOK OF MORPHOLOGICAL THEORY


OXFORD HANDBOOKS IN LINGUISTICS Recently published

THE OXFORD HANDBOOK OF AFRICAN AMERICAN LANGUAGE Edited by Sonja Lanehart

THE OXFORD HANDBOOK OF INFLECTION Edited by Matthew Baerman

THE OXFORD HANDBOOK OF HISTORICAL PHONOLOGY Edited by Patrick Honeybone and Joseph Salmons

THE OXFORD HANDBOOK OF LEXICOGRAPHY Edited by Philip Durkin

THE OXFORD HANDBOOK OF NAMES AND NAMING Edited by Carole Hough

THE OXFORD HANDBOOK OF DEVELOPMENTAL LINGUISTICS Edited by Jeffrey Lidz, William Snyder, and Joe Pater

THE OXFORD HANDBOOK OF INFORMATION STRUCTURE Edited by Caroline Féry and Shinichiro Ishihara

THE OXFORD HANDBOOK OF MODALITY AND MOOD Edited by Jan Nuyts and Johan van der Auwera

THE OXFORD HANDBOOK OF PRAGMATICS Edited by Yan Huang

THE OXFORD HANDBOOK OF UNIVERSAL GRAMMAR Edited by Ian Roberts

THE OXFORD HANDBOOK OF ERGATIVITY Edited by Jessica Coon, Diane Massam, and Lisa deMena Travis

THE OXFORD HANDBOOK OF POLYSYNTHESIS Edited by Michael Fortescue, Marianne Mithun, and Nicholas Evans

THE OXFORD HANDBOOK OF EVIDENTIALITY Edited by Alexandra Y. Aikhenvald

THE OXFORD HANDBOOK OF PERSIAN LINGUISTICS Edited by Anousha Sedighi and Pouneh Shabani-Jadidi

THE OXFORD HANDBOOK OF ELLIPSIS Edited by Jeroen van Craenenbroeck and Tanja Temmerman

THE OXFORD HANDBOOK OF LYING Edited by Jörg Meibauer

THE OXFORD HANDBOOK OF TABOO WORDS AND LANGUAGE Edited by Keith Allan

THE OXFORD HANDBOOK OF MORPHOLOGICAL THEORY Edited by Jenny Audring and Francesca Masini

For a complete list of Oxford Handbooks in Linguistics please see pp –


   

THE OXFORD HANDBOOK OF MORPHOLOGICAL THEORY

Edited by
JENNY AUDRING and FRANCESCA MASINI



Great Clarendon Street, Oxford,  , United Kingdom Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries © editorial matter and organization Jenny Audring and Francesca Masini  © the chapters their several authors  The moral rights of the authors have been asserted First Edition published in  Impression:  All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above You must not circulate this work in any other form and you must impose this same condition on any acquirer Published in the United States of America by Oxford University Press  Madison Avenue, New York, NY , United States of America British Library Cataloguing in Publication Data Data available Library of Congress Control Number:  ISBN –––– Printed and bound by CPI Group (UK) Ltd, Croydon,   Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.


To Geert Booij


Contents

Acknowledgements
List of Abbreviations
The Contributors

1. Introduction: Theory and theories in morphology
   Jenny Audring and Francesca Masini

PART I. ISSUES IN MORPHOLOGY

2. A short history of morphological theory
   Stephen R. Anderson

3. Theoretical issues in word formation
   Rochelle Lieber

4. Theoretical issues in inflection
   Gregory Stump

PART II. MORPHOLOGICAL THEORIES

5. Structuralism
   Thomas Stewart

6. Early Generative Grammar
   Pius ten Hacken

7. Later Generative Grammar and beyond: Lexicalism
   Fabio Montermini

8. Distributed Morphology
   Daniel Siddiqi

9. Minimalism in morphological theories
   Antonio Fábregas

10. Optimality Theory and Prosodic Morphology
    Laura J. Downing

11. Morphology in Lexical-Functional Grammar and Head-driven Phrase Structure Grammar
    Rachel Nordlinger and Louisa Sadler

12. Natural Morphology
    Livio Gaeta

13. Word and Paradigm Morphology
    James P. Blevins, Farrell Ackerman, and Robert Malouf

14. Paradigm Function Morphology
    Gregory Stump

15. Network Morphology
    Dunstan Brown

16. Word Grammar Morphology
    Nikolas Gisborne

17. Morphology in Cognitive Grammar
    Ronald W. Langacker

18. Construction Morphology
    Francesca Masini and Jenny Audring

19. Relational Morphology in the Parallel Architecture
    Ray Jackendoff and Jenny Audring

20. Canonical Typology
    Oliver Bond

PART III. MORPHOLOGICAL THEORY AND OTHER FIELDS

21. Morphological theory and typology
    Peter Arkadiev and Marian Klamer

22. Morphological theory and creole languages
    Ana R. Luís

23. Morphological theory and diachronic change
    Matthias Hüning

24. Morphological theory and synchronic variation
    Angela Ralli

25. Morphological theory and first language acquisition
    Elma Blom

26. Morphological theory and second language acquisition
    John Archibald and Gary Libben

27. Morphological theory and psycholinguistics
    Christina L. Gagné and Thomas L. Spalding

28. Morphological theory and neurolinguistics
    Niels O. Schiller and Rinus G. Verdonschot

29. Morphological theory and computational linguistics
    Vito Pirrelli

30. Morphological theory and sign languages
    Donna Jo Napoli

References
Language Index
Index of Names
General Index


Acknowledgements

T book has been long in coming. Conceived and begun at a period when both of us had abundant research time, it accompanied us through a steadily increasing amount of academic duties and responsibilities. We are grateful to all our authors who have remained faithful to the endeavour. We thank the fabulous Oxford University Press staff—especially Julia Steer, Vicki Sunter, and Karen Morgan—for their help in preparing the volume. They were incredibly supportive from day one to the end. A special thank-you goes to the late John Davey, who graciously welcomed us at Oxford University Press. He was the first to believe in this project, and we are very sad that we never got to meet in person. We owe gratitude to the numerous colleagues who kindly agreed to serve as reviewers; their time and expertise was essential in ensuring the quality of the volume. We also thank Geert Booij, Ray Jackendoff, and Tom Stewart for advice on individual chapters. Heartfelt thanks to our personal angels Maurice and Yuri, for listening, encouraging, and cooking for us while we worked on the volume. Finally, we would like to thank each other, for still being good friends after completing this journey. We wish to dedicate this book to the eminent morphologist Geert Booij. Recently retired, Geert has been a beacon in the morphological community for over thirty years. To us, he has meant even more. He has inspired us through all stages of our career. He has been— and still is—a role model, a mentor, a guide, and a friend.


L  A ........................................................................

Different frameworks have different notational conventions, resulting in variation throughout the book. For example, ACC (accusative) can appear as Acc, acc, or . Such variation is not reflected in this list, unless it matters for the interpretation of the abbreviation. The list includes special meanings of abbreviations used in specific chapters (e.g. A used as subject of transitive verb in Chapter ). Abbreviation  H  H  A A a AAT ABL ABS ACC ADIT ADJ ADV Af(f) AGR AMR ANDAT ANT APPL ASL AUX AVM BA BD BEN BRCT C C CAT

Meaning first person dominant hand in a sign second person nondominant hand in a sign third person adjective subject of transitive verb (Chapter ) adjectivizer (Chapter )/adjective categorizer (Chapter ) Aachener Aphasie Test ablative absolutive accusative additive adjective adverb affix agreement allomorphic-morphological rule andative anterior applicative American Sign Language auxiliary attribute value matrix Brodmann area Berbice Dutch benefactive Base Reduplicant Correspondence Theory consonant (Chapters  and ) syllable coda (Chapter ) syntactic category

OUP CORRECTED PROOF – FINAL, 27/11/2018, SPi

xii

  

CAUS CG Ch CI CL CLG COCA COM CON COND CONT CP CS CxG CxM D DAT DC DEF DES DET DI DIR DISC DM DP DS DTR DU DYN EEG ELAN EPP ERG ERN ERP EXCL EXHORT EZ F F-G F-S Fg FIN fMRI

causative Cognitive Grammar Chabacano Conceptual-Intentional noun class “Course in General Linguistics” (de Saussure) Corpus of Contemporary American English comitative consonant conditional connective complementizer phrase computational system Construction Grammar Construction Morphology determiner (Chapters  and ) dative class marker definite designative determiner default inheritance directional discourse structure Distributed Morphology determiner phrase D-Structure daughter dual dynamic electroencephalography Early Left Anterior Negativity Extended Projection Principle ergative Error-Related Negativity event-related potential exclusive exhortative ezafe feminine final grapheme final stress Fongbe finite functional magnetic resonance imaging

OUP CORRECTED PROOF – FINAL, 27/11/2018, SPi

   FOC FRUSTR FSA FST FUT FV G/B GEN GPSG GTT h H Ha HAB HBL HPSG IA IC II IM IND INDEF INF INFRN INS IO IP IP IP IPFV IRR IS Jm K KP Kv L L LAN LatPP LF LFG LH LHS LIFG

focus frustrative Finite State Automaton Finite State Transducer future final vowel Government and Binding Theory genitive Generalized Phrase Structure Grammar Generalized Template Theory hapax legomenon Haitian (Chapter ) Hawaiʿi Creole habitual habilitive Head-Driven Phrase Structure Grammar Item-and-Arrangement inflection(al) class instantiation inheritance link metaphorical extension link indicative indefinite infinitive inferential instrumental indirect object Item-and-Process (Chapters , , and ) inflection(al) phrase (Chapters , , , ) polysemy link imperfective irrealis subpart link Jamaican Kriyol Korlai Indo-Portuguese Kabuverdianu first language second language Left Anterior Negativity Latin past participle Logical Form Lexical-Functional Grammar Lexicalist Hypothesis left-hand side left inferior frontal gyrus

xiii

OUP CORRECTED PROOF – FINAL, 27/11/2018, SPi

xiv

  

LMBM LNK LOC LP LRM M MCAT MDL MDT MEG MF MHG MNI MOP MOR(PH) MPR MS MTG MUD MWE N n N N NARR ND NEG NIRS NM NOM NP NPST NUM O O OBJ OBL OED OHG OT P P-S P&P PA PASS

Lexeme-Morpheme Base Morphology linking element locative Lexical Phonology Lexical Relatedness Morphology masculine morphological category Minimum Description Length Morphological Doubling Theory magneto-encephalography Morphological Structure Middle High German Montreal Neurological Institute Morph Ordering Principle morphological structure Morphological-Phonological Rule Morphological Structure middle temporal gyrus morphology under discussion multi-word expression noun nominalizer (Chapter )/noun categorizer (Chapter ) neuter (Chapters , , and ) syllable nucleus (Chapter ) narrative Neglect Dyslexia negation Near-InfraRed Spectroscopy Natural Morphology nominative noun phrase nonpast number syllable onset (Chapter ) object (contrasting with A, Chapter ) object oblique Oxford English Dictionary Old High German Optimality Theory Portuguese (Chapter ) penultimate stress Principles & Parameters Theory Parallel Architecture passive/passive semantics (Chapter )

OUP CORRECTED PROOF – FINAL, 27/11/2018, SPi

   PC PER(S) PERF PET PF PF PFM PH PHON PIE PL PNC POSS PP Pp PP PR PRAG PRED PRES PRESPART PROG PRS PRV PrWord PSC PST PTCL PTCP PTH PWd PX RDP RED RFL RHR RHS RM RR s SBCG SBJV SDSP Sel SEM

position class person perfective positron emission tomography Phonological Form (Chapters  and ) Paradigm Function (Chapters , , and ) Paradigm Function Morphology Phonological Form (Chapter ) phonology/phonological structure Proto-Indoeuropean plural Productive Non-inflectional Concatenation possessive past participle Papiamentu prepositional phrase Phonological Rule pragmatic structure predicate present present participle progressive present preverb prosodic word Paradigm-Structure Condition past particle participle Prosodic Transfer Hypothesis prosodic word possessive suffix Recoverably Deletable Predicates reduplicant (Chapter )/reduplicative semantics (Chapter ) reflexive Righthand Head Rule right-hand side Relational Morphology realization rules strong syllable Sign-Based Construction Grammar subjunctive System-Defining Structural Property selection semantics

xv

OUP CORRECTED PROOF – FINAL, 27/11/2018, SPi

xvi

  

SG SLI SM SMG SML Sr SS SSR SUBJ Suppl SYN TAM tDCS TERM TETU TMA TMS TNS TRANS TSOM UBH UG UOH V v V Vce VD Vel-In VI VOC VP VT w WFGG WFR WG WP WS XCOMP

singular Specific Language Impairment sensorimotor Standard Modern Greek similative Early Sranan S-Structure stem selection rules subject suppletion syntax tense/aspect/mood transcranial Direct Current Stimulation terminative the emergence of the unmarked tense/mood/aspect transcranial magnetic stimulation tense translative Temporal Self-Organizing Map Unitary Base Hypothesis Universal Grammar Unitary Output Hypothesis verb verbalizer (Chapter )/verb categorizer (Chapter ) vowel (Chapter ) voice vowel deletion velar insertion Vocabulary Item vocative verb phrase verbal theme weak syllable “Word Formation in Generative Grammar” (Aronoff) word formation rule Word Grammar Word and Paradigm Williams syndrome complement clause


T C

............................................................

Farrell Ackerman is a Professor of Linguistics at UC San Diego. He has focused on periphrastic morphosyntax, A Theory of Predicates (with Gert Webelhuth) CSLI/Chicago , and linking theories, Proto-Properties and Grammatical Encoding (with John Moore) CSLI/Chicago . He is exploring Pattern-Theoretic models of grammatical organization from a Developmental Systems perspective, as in Descriptive Typology and Linguistic Theory (with Irina Nikolaeva) CSLI/Chicago , and quantitative approaches to word-based morphology.

Stephen R. Anderson is the Dorothy R. Diebold Professor emeritus of Linguistics at Yale University. His interests include most areas of general linguistics, perhaps especially morphology (where he is associated with the “A-Morphous” approach to word structure), as well as the history of linguistics, the place of human language in the biological world (including its relation to the communication systems of other animals), and the grammars of a number of languages (including Rumantsch, Georgian, Kwakw’ala, and others).

John Archibald is a Professor of Linguistics at the University of Victoria where he specializes in the study of generative approaches to second language acquisition, particularly second language phonology. His recent research has focused on the interfaces of L2 phonology with morphology and syntax. Before moving to Victoria, he spent nineteen years at the University of Calgary in the Department of Linguistics, and the Language Research Centre.

Peter Arkadiev holds a PhD in theoretical, typological and comparative linguistics from the Russian State University for the Humanities. Currently he is Senior Researcher at the Institute of Slavic Studies of the Russian Academy of Sciences and Assistant Professor at the Russian State University for the Humanities. His fields of interest include language typology and areal linguistics, morphology, case and alignment systems, tense–aspect, and Baltic and Northwest Caucasian languages.
Jenny Audring is Assistant Professor at the University of Leiden. She specializes in morphology and has written on grammatical gender, linguistic complexity, Canonical Typology, and Construction Morphology (frequently in collaboration with Geert Booij). Together with Ray Jackendoff she is developing an integrated theory of linguistic representations and lexical relations. A monograph (Jackendoff and Audring, The Texture of the Lexicon) is forthcoming with Oxford University Press.

James P. Blevins is Reader in Morphology and Syntax at Cambridge University and Fellow in Linguistics at Homerton College. His primary research interests concern the structure,


learning, and processing of complex inflectional and grammatical systems. He has published on a range of syntactic and morphological topics, including a recent monograph on Word and Paradigm Morphology (Oxford University Press, ).

Elma Blom is Professor at the Department of Special Education at Utrecht University, where she teaches about language development. Her research and publications are about language impairment, multilingual development, and the relationship between language and cognition in both impaired and multilingual children, with a special focus on grammatical development. Besides theoretical issues, she works on the improvement of diagnostic instruments for multilingual children.

Oliver Bond is Senior Lecturer in Linguistics in the Surrey Morphology Group, University of Surrey. His research interests include theoretical morphology and syntax (especially agreement and case), multivariate approaches to typology, and language documentation and description, particularly in the languages of Africa and the Himalayas. He is a co-editor of Archi: Complexities of Agreement in Cross Theoretical Perspective (with Greville G. Corbett, Marina Chumakina, and Dunstan Brown; Oxford University Press, ).

Dunstan Brown is Professor of Linguistics and Head of the Department of Language and Linguistic Science at the University of York. His research interests include autonomous morphology, morphology–syntax interaction, and typology. Much of his work focuses on understanding morphological complexity through computational modelling. His recent publications include Understanding and Measuring Morphological Complexity (edited with Matthew Baerman and Greville Corbett; Oxford University Press, ), and Morphological Complexity (with Matthew Baerman and Greville Corbett; Cambridge University Press, ).

Laura J. Downing is Professor for African Languages at the University of Gothenburg, Sweden.
Her research specialty is the prosody of (mainly) Bantu languages, including topics such as tone, prosodic morphology, the syntax–phonology interface, and information structure. She is the author of numerous articles on these topics, as well as the monographs Canonical Forms in Prosodic Morphology and (with Al Mtenje) The Phonology of Chichewa.

Antonio Fábregas (PhD in Linguistics, Universidad Autónoma de Madrid, ) is currently Full Professor of Hispanic linguistics at the University of Tromsø. His work deals with what he takes to be the internal syntactic structure of words, including its implications for semantics and phonology. He is the author of three monographs and more than one hundred papers and currently is Associate Editor of the Oxford Research Encyclopedia of Morphology.

Livio Gaeta (PhD , University of Rome ) is Full Professor for German Language and Linguistics at the Department of Humanistic Studies of the University of Turin. He held earlier tenured positions in Turin (–), Rome  (–) and Naples “Federico II” (–). He is a Fellow of the Alexander von Humboldt Foundation at the Humboldt University of Berlin (). His main interests include morphology, language change and grammaticalization, cognitive linguistics, language contact, and minority languages.


Christina L. Gagné (PhD , University of Illinois at Urbana-Champaign) is currently a Professor at the University of Alberta, Canada. The aim of her research is to understand how conceptual knowledge affects the way people use and process language. In particular, her work focuses on the underlying conceptual structures that are involved in the interpretation of novel phrases and compounds. Her past work has shown that knowledge about the relations that are used to combine concepts plays an important role in the creation and comprehension of novel noun phrases as well as in the comprehension of compound words.

Nikolas Gisborne is Professor of Linguistics at the University of Edinburgh. His main interests are in the lexicon and the lexicon–syntax interface and language change. He is the author of The Event Structure of Perception Verbs (Oxford University Press, ) and, with Andrew Hippisley, the editor of Defaults in Morphological Theory (Oxford University Press, ).

Pius ten Hacken is a Professor at the Institut für Translationswissenschaft of the Leopold-Franzens-Universität Innsbruck. His research interests include morphology, terminology, lexicography, and the philosophy and history of linguistics. He is the author of Defining Morphology (Olms, ) and of Chomskyan Linguistics and its Competitors (Equinox, ), the editor of The Semantics of Compounding (Cambridge University Press, ), and co-editor of The Semantics of Word Formation and Lexicalization (Edinburgh University Press, ) and Word Formation and Transparency in Medical English (Cambridge Scholars Press, ).

Matthias Hüning has been a full Professor of Dutch Linguistics at Freie Universität Berlin since . He received his PhD from Leiden University in . His research focuses on comparative/contrastive linguistics and on the structure and the status of Dutch in relation to other (Germanic) languages.
The main emphasis of his work is on word-formation from a diachronic perspective.

Ray Jackendoff is Seth Merrin Professor Emeritus of Philosophy at Tufts University. He has worked on semantics, syntax, morphology, the evolution of language, music cognition, social cognition, and consciousness. Among his books are Semantics and Cognition, Foundations of Language, Simpler Syntax (with Peter Culicover), and A User’s Guide to Thought and Meaning; in press is The Texture of the Lexicon (with Jenny Audring). He has been President of both the Linguistic Society of America and the Society for Philosophy and Psychology, and was recipient of the  Jean Nicod Prize and the  David Rumelhart Prize.

Marian Klamer is Professor of Austronesian and Papuan Linguistics at Leiden University. Her main research interest lies in describing and analyzing underdocumented Austronesian and Papuan languages in Eastern Indonesia. Klamer has published (sketch) grammars of two Austronesian languages (Kambera, ; Alorese, ) and two Papuan languages (Teiwa, ; Kaera, ), several thematic volumes and over fifty articles on a wide range of topics, including morphology, typology, language contact, and historical reconstruction of languages in Indonesia.


Ronald W. Langacker is retired from the position of Professor of Linguistics at the University of California, San Diego. For over four decades, his research has aimed at a unified account of language structure. The resulting descriptive framework, known as Cognitive Grammar, claims that grammar is inherently meaningful. Based on an independently justified conceptualist semantics, it is argued that lexicon, morphology, and syntax form a continuum consisting solely in assemblies of symbolic structures (form–meaning pairings).

Gary Libben is Professor of Applied Linguistics at Brock University. His research focuses on lexical representation and processing across languages and the development of psycholinguistic methodologies for studying language processing across age groups, language groups, and situational contexts. He co-edits the journal The Mental Lexicon and he is Director of the Words in the World SSHRC Partnership Project. He was Founding Director of the Centre for Comparative Psycholinguistics at the University of Alberta.

Rochelle Lieber is Professor of Linguistics at the University of New Hampshire. Her interests include morphological theory, especially derivation and compounding, lexical semantics, and the morphology–syntax interface. She is the author of many articles and several books on morphological theory, including most recently English Nouns: The Ecology of Nominalization (Cambridge University Press, ). She is the Editor-in-Chief of the Oxford Encyclopedia of Morphology.

Ana R. Luís is Assistant Professor of the English Department at the University of Coimbra and Senior Researcher at the Linguistics Research Center CELGA-ILTEC. Her research interests include cliticization, inflection, autonomous morphology, language contact, and creole morphology. She is co-Editor-in-Chief (with I. Plag and O. Bonami) of the journal Morphology (Springer), co-author of Clitics (with Andrew Spencer, Cambridge University Press), co-editor of The Morphome Debate (with R. Bermúdez-Otero, Oxford University Press), and editor of Rethinking Creole Morphology (special issue of the journal Word Structure, Edinburgh University Press).

Robert Malouf is a Professor of Linguistics in the Department of Linguistics and Asian/Middle Eastern Languages at San Diego State University. His research focus is on computational approaches to morphosyntax, and in particular word-based models of inflection. Prior to joining SDSU in , he was a member of the humanities computing department at the University of Groningen. He has a PhD in linguistics from Stanford University.

Francesca Masini is Associate Professor of Linguistics at the University of Bologna. Her research and publications revolve around semantics, morphology, and the lexicon, with a focus on multiword expressions, word classes, lexical typology, and the lexicon–syntax interface. She works primarily within Construction Grammar and Construction Morphology. She is currently Associate Editor of The Oxford Encyclopedia of Morphology.

Fabio Montermini is a Senior Researcher (directeur de recherche) at the CLLE-ERSS research unit of the CNRS. He also teaches morphology at the Université de Toulouse Jean Jaurès (France). He is the author of several publications on morphology, both


inflectional and derivational. His research interests include morphophonological and semantic aspects of various languages, including Italian, French, other Romance languages, and Russian.

Donna Jo Napoli is Professor of Linguistics at Swarthmore College. She investigates all components of sign language grammars, particularly ASL, and of spoken language grammars, particularly Italian. She is a member of a team that advocates for the language rights of deaf children. She is part of the project RISE (Reading Involves Shared Experience), which produces bimodal-bilingual ebooks for parents to share with their deaf children.

Rachel Nordlinger is Professor of Linguistics at the University of Melbourne and a Chief Investigator in the ARC Centre of Excellence for the Dynamics of Language. Her research centres on the description and documentation of Australia’s Indigenous languages, especially Bilinarra, Wambaya, and Murrinhpatha. She has also published widely on syntactic and morphological theory (especially LFG), and in particular the challenges posed by the complex grammatical structures of Australian languages.

Vito Pirrelli (PhD ) is Research Director at the CNR Institute for Computational Linguistics “Antonio Zampolli” in Pisa, and head of the “Physiology of Communication” laboratory. Co-editor in chief of the journal Lingue e Linguaggio, and former Chair of NetWordS (the European Science Foundation Research Networking Programme on Word Structure), his main research interests include computer models of the mental lexicon, psycho-computational models of morphology acquisition and processing, memory and serial cognition, theoretical morphology, language disorders, and language teaching.

Angela Ralli is Professor of General Linguistics, Director of the Laboratory of Modern Greek Dialects of the University of Patras and member of the Academia Europaea. Her areas of expertise are theoretical morphology, contact morphology, and dialectal variation. She has published five books and  peer-reviewed articles, has edited seventeen collective volumes and has presented her work at many international conferences and universities. She has been awarded the Canadian Faculty Enrichment Award (), the Stanley Seeger Research Fellowship (Princeton, ), and the VLAC Research Fellowship (Flemish Royal Academy, –, ).

Louisa Sadler is Professor of Linguistics at the University of Essex, UK. Her current research interests centre on constraint-based syntactic theory (especially LFG), particularly in relation to the interfaces to morphology and semantics, and the grammatical description of the Arabic vernaculars, including Maltese.

Niels O. Schiller is Professor of Psycho- and Neurolinguistics at Leiden University. His research interests include experimental linguistics, psycho- and neurolinguistics, including multilingualism. His main interest is lexical access and form encoding (morphological, phonological, and phonetic encoding) in speech production. He employs behavioral, electrophysiological, and neuroimaging methods to answer his research questions. He has published more than  peer-reviewed articles and book chapters on a broad variety


of topics in experimental linguistics. Together with Greig de Zubicaray he is the editor of the forthcoming Oxford Handbook of Neurolinguistics.

Daniel Siddiqi is Associate Professor of Linguistics, Cognitive Science, and English at Carleton University in Ottawa. His research has primarily focused on metatheoretical concerns in Distributed Morphology since he graduated from the University of Arizona in . His other research interests include English morphology, stem allomorphy, productivity, and word processing. He is an editor of the Routledge Handbook of Syntax, Benjamins’ Morphological Metatheory, and the forthcoming Routledge Handbook of North American Languages.

Professor Thomas L. Spalding (PhD , University of Illinois at Urbana-Champaign) has taught at the University of Iowa and the University of Western Ontario and is currently a Professor in the Department of Psychology at the University of Alberta. He has also been Chief Research Scientist for Acumen Research Group. His research interests relate to the issue of how people combine information in the course of learning, comprehension, and inference. This overarching interest has led to research on concepts, conceptual combination, and compound word processing, as well as peripheral interests in spatial cognition, conceptual development, and consumer loyalty.

Thomas Stewart is Assistant Professor of Linguistics in the Department of Comparative Humanities at the University of Louisville. His research into non-concatenative morphological phenomena, especially the treatment of initial consonant mutations in Scottish Gaelic, has fed projects in Celtic linguistics, morphological theory, and contact linguistics (transfer and attrition). His book Contemporary Morphological Theories: A User’s Guide was published by Edinburgh University Press.

Gregory Stump is Emeritus Professor of Linguistics at the University of Kentucky. His research includes work on the structure of complex inflectional systems, the nature of inflectional complexity, and the algebra of morphotactics. His research monographs include Inflectional Morphology: A Theory of Paradigm Structure (), Morphological Typology: From Word to Paradigm (, co-authored with Raphael A. Finkel), and Inflectional Paradigms: Content and Form at the Syntax–Morphology Interface (). He is a Fellow of the Linguistic Society of America and is a co-editor of the journal Word Structure.

Rinus G. Verdonschot (PhD ) is Assistant Professor at the Graduate School of Biomedical and Health Sciences at Hiroshima University. His research includes published work on a wide range of psycho- and neurolinguistic topics (e.g. speech production, orthographic script processing, multilingualism) as well as on action-perception coupling in professional musicians. He is also a co-author of the widely used E-Primer: An Introduction to Creating Psychological Experiments in E-Prime textbook.


  ......................................................................................................................

 Theory and theories in morphology ......................................................................................................................

    

. W

..................................................................................................................................

Morphology, the grammar of words, has proved a rich and fertile ground for theoretical research. As a result, we are faced with a bewilderingly complex landscape of morphological terms, concepts, hypotheses, models, and frameworks. Within this plurality, linguists of different persuasions have often remained ignorant of each other’s work. Formalist and functionalist theories have run on mutually isolated tracks; theoretical approaches have not connected to insights from typology, psycholinguistics, and other fields—and vice versa. The research community is divided about basic matters, such as the central units of morphological description or the nature of morphological features and processes. Moreover, the proliferation of theories goes hand in hand with an increasing internal diversification, sometimes to the point where foundational principles slip out of sight. This volume hopes to contribute to a greater unity in the field by providing a comprehensive and systematic exposition of morphological theory and theories. We have aimed to make it a helpful resource for those working within a specific framework and looking for a critical and up-to-date account of other models, as well as a comprehensive guide for those wishing to acquaint themselves with theoretical work in morphology, perhaps coming from other domains in linguistics or from related fields such as computer science or psychology. The book is intended to be informative and inspiring, and a lasting contribution to the field. We also hope that—in times of increasing scepticism towards theory, in morphology as in other areas of linguistics—it will serve to showcase the richness and value of theoretical thinking and modelling, and will encourage new advances in theoretical work.

1.1.1 About the volume

This volume stands in the long tradition of Oxford Handbooks in Linguistics and complements other recent volumes, in particular The Oxford Handbook of Inflection (Baerman ),


The Oxford Handbook of Derivational Morphology (Lieber and Štekauer ), The Oxford Handbook of Compounding (Lieber and Štekauer b), and The Oxford Handbook of the Word (Taylor ). It is kin to The Oxford Handbook of Linguistic Analysis (Heine and Narrog, second edition ) by focusing on linguistic approaches more than on linguistic facts, although a wide variety of data is addressed. The closest relative to the present volume is Stewart () on contemporary morphological theories. However, our book is an edited volume rather than a monograph, and the scholars working in the various frameworks are speaking in their own voice. In addition to the eminent contributors expected in a volume of this kind, many of our authors are up-and-coming linguists with a fresh look at classic and novel issues. While the field is too diverse for a reference work to be exhaustive, we have attempted to cover a representative range of theories and have made a point of including very recent models, such as Canonical Typology, Construction Morphology, and Relational Morphology. Moreover, Part III of the volume connects morphological theory with various linguistic subfields, identifying the broader challenges and opening the dialogue where it is often lacking.

1.2 Morphological theories

.................................................................................................................................. Despite the evident, and often drastic, differences between theoretical approaches, the theories in this volume are united in the questions they seek to answer. This section briefly reviews a selection of time-honoured issues that have shaped the theoretical landscape over the years and that reappear in different guises in basically every theory.

1.2.1 What is the goal of morphological theory?

Morphology is the grammar of words. This includes the form and structure of words, their meaning, the relations between words, and the ways new (complex) words are formed. Depending on one’s views of what a theory of grammar should accomplish, the goal of morphological theory is either to account for all existing words or for all potential words of a language. As Aronoff famously stated in  (–): ‘the simplest task of a morphology, the least we demand of it, is the enumeration of the class of possible words of a language’. Whether this goal has been attained by any of the theories on the market, or can be attained at all, is a matter of debate, since the working area of morphological theory is not easily delimited. For one thing, the word is notoriously hard to define (Haspelmath , see also Arkadiev and Klamer, Chapter  this volume). Moreover, the field of morphology runs into other linguistic subfields, with fluid boundaries and shared responsibilities.

1.2.2 Where is morphology?

Morphology is famously called ‘the Poland of linguistics’ (Spencer and Zwicky : ), surrounded by neighbouring fields eager to claim the territory for themselves. Many theories, some of them represented in this volume, model the structure and behaviour of words in syntax and/or in phonology (e.g. Distributed Morphology, see Siddiqi, Chapter 


:      Phrasal phonology

Phrasal syntax

Phrasal semantics

Morphophonology

Morphosyntax

Morphosemantics



 .. Types of linguistic structure and the place of morphology

this volume, or Optimality Theory, see Downing, Chapter  this volume). The countermovement is gathered under the term of lexicalism (Montermini, Chapter  this volume) and the motto ‘morphology by itself ’ (Aronoff ), arguing that morphology needs to be recognized as a module, layer, or level of description of its own because it has unique, irreducible properties. Lexicalist approaches ask questions such as the following:

• What properties are unique to morphology?
• How does morphology interface with other types of linguistic structure?

The issue of interfaces, of course, only arises if morphology is granted its own identity, distinct from other areas of grammar. However, views on interfacing differ greatly depending on whether morphology is understood in a broad sense or a narrow sense. In a broad sense, morphology spans the entire bottom row of Figure 1.1 (adapted from Jackendoff and Audring, Chapter  this volume). This row is the domain of the word. Morphology then contrasts and interfaces with the upper row, syntax, the phrasal domain (cf. §1.2.4). The horizontal arrows within the bottom row—connecting morphosyntax, morphophonology, and morphosemantics—represent morphology-internal links, since a word contains all these types of information. However, morphology can also be understood in a narrower sense. Words carry sound and meaning. In addition, they may have a third level of structure, which Figure 1.1 calls ‘morphosyntax’, marked in bold. This level of structure houses all properties that cannot be subsumed under phonology or lexical semantics.¹ This includes grammatical features, such as case, gender, or tense, as well as properties such as inflectional class, the heartland of ‘morphology by itself ’. In some theoretical models, this layer also encodes the building blocks of words: roots, stems, and affixes. Morphology, as understood in this narrower sense, contrasts and interfaces with word phonology and word meaning.
Many controversies in morphological theory follow from explicit or implicit disagreements about the nature and place of morphology in the grammar. While most theories accept morphology in the broader sense, as the part of language that handles words, some deny the existence of a dedicated layer of morphological structure in the narrower sense (e.g. Cognitive Grammar, see Langacker, Chapter  this volume). Additional complications arise from the various conceptions of morphological processes. Theories differ in whether they assume different rules for the grammar of words and the grammar of phrases. Also, a division between morphological rules on the one hand and the input/output of such rules on the other can lead theories to posit a morphology–lexicon interface. This contrasts with theories that place morphology in the (equivalent of the) lexicon, for example Word Grammar (Gisborne, Chapter  this volume), Construction Morphology (Masini and Audring, Chapter  this volume), and Relational Morphology (Jackendoff and Audring, Chapter  this volume).

¹ Note that the term ‘morphosyntax’ is used differently here than it is used in the typological literature, where it denotes morphological structure relevant to syntax (e.g. in agreement).

1.2.3 Basic units and processes

What are the units that morphological theory handles? Again, we see widespread and fierce disagreement. Two prominent camps have arisen around the word-based and the morpheme-based views, arguing for the word and the morpheme, respectively, as the basic unit of morphological structure. The debate is often framed in principled terms (see e.g. Anderson, Chapter  this volume, or Stump, Chapter  this volume), but sometimes invokes more specific concerns, such as which entity comes closest to a stable and transparent 1:1 relation between form and meaning (see e.g. Langacker and Gaeta, Chapters  and  this volume, respectively). A complicating factor is the notorious difficulty of defining either the word or the morpheme in a consistent and cross-linguistically applicable way. However, in view of the controversy surrounding the morpheme in particular, it is worth noting that the term is used widely and freely in descriptive linguistics as well as in psycho- and neurolinguistics, where it is found to be of value (see e.g. Schiller and Verdonschot, Chapter  this volume). The chapters in the present volume show surprisingly little debate about the lexeme, which is a central unit in a variety of influential theories (e.g. Stump, Chapter  this volume). This notion is related to the difference between inflection and derivation, which itself is not easy to draw. While most theories make a point of distinguishing inflection and derivation/word-formation—some clearly specializing in one or the other—the nature of the difference is disputed, especially as to whether it is gradual or categorial (sometimes intermediate distinctions are made, such as between inherent and contextual inflection, Booij ).
The issues scale up to the difference between morphology and syntax, and more generally between the grammar and the lexicon, since inflection is generally believed to be more relevant to syntax and on the whole ‘more grammatical’ than derivation. Within word-formation, certain types of compounds and lexicalized multi-word units further blur the boundaries between morphological and syntactic structures (see Arkadiev and Klamer, Chapter  this volume). A further basic difference between frameworks is how they conceive of the relation between the units of morphological analysis and the processes that handle them. While units and processes are tightly wedded in many theories, with rules for specific affixes or individual feature structures, in others they are clearly separated. An example for the latter type is Minimalism (Fábregas, Chapter  this volume), some variants of which rely on a single general operation, Merge. Other differences between theories are found in the way classes, features, and other properties are encoded. Some theories also seek to encode relations, from syntagmatic relations such as valency or agreement to paradigmatic relations such as those found in inflectional morphology.


1.2.4 Morphology and syntax

Theories of morphology can be differentiated by the way they model the relation between morphology and syntax. Does the grammar of words involve its own module, with rules and representations distinct from the rules and representations of phrasal grammar? All extremes can be found: from assuming no difference at all (e.g. in Distributed Morphology, see Siddiqi, Chapter  this volume) to a strictly modular view in which morphology is encapsulated from syntax (e.g. in LFG/HPSG, see Nordlinger and Sadler, Chapter  this volume). For theories such as Construction Morphology (Masini and Audring, Chapter  this volume) or Relational Morphology (Jackendoff and Audring, Chapter  this volume), the difference lies not in the processes—morphological versus syntactic rules—but in the categories: morphology has stems and affixes, while syntax does not, and syntax has phrasal categories such as NPs and VPs, while morphology does not. For those theories that do assume a split between morphology and syntax, the question arises how the two components interface. An often-cited assumption is that X⁰, the syntactic word, serves as the interface. This view runs into difficulties with complex words containing phrases, as in do-it-yourselfer (the No Phrase Constraint is discussed in Montermini and Fábregas, Chapters  and  this volume, respectively). Other related points of debate, recurring in many theories throughout the book (see Lieber, Chapter  this volume, for an overview), are lexical integrity—the (in)ability of syntax to look into or manipulate word structure—and the issue of headedness, disputing the equivalence of syntactic and morphological heads.

1.2.5 Morphology and semantics

Another important issue in morphological theory is the relation between meaning and form. The canonical mapping is captured in the terms isomorphy, biuniqueness, transparency, compositionality, diagrammaticity (Gaeta, Chapter  this volume), or ‘the concatenative ideal’ (Downing, Chapter  this volume): each piece of meaning should correspond uniquely to a piece of form, and added meaning should go hand in hand with added form. A lot of what makes morphological theory interesting and hard has to do with divergences from this ideal. The issue is pertinent to the divide between word-basedness and morpheme-basedness. Are there privileged units in which the relation between form and function is clearest or maximally stable? And if so, is the word or the morpheme a better candidate? Violations of biuniqueness come in many guises. Well-studied phenomena are polysemy, homophony, and syncretism (cases of one form with several meanings), allomorphy, periphrasis, multiple or extended exponence (cases of one meaning expressed by several alternative or combined forms), plus a range of specifically paradigmatic mismatches, such as suppletion, overabundance, heteroclisis, and deponency (Stump, Chapters  and ; see also Arkadiev and Klamer, Chapter , and Ralli, Chapter , all in this volume). In addition, complex words can display semantic non-compositionality, with unpredictable meanings showing up in individual words or as subregularities in clusters of words. While many theories set such quirks aside as lexicalizations, others make a point of including them, for example Construction Morphology (Masini and Audring, Chapter  this volume).

1.2.6 Morphology and phonology

The interplay of morphology and phonology is another much-debated issue. Many theories in the generative tradition (e.g. Minimalism and Distributed Morphology, see Siddiqi, Chapter , and Fábregas, Chapter ) model phonology as a spell-out component at the end of a syntactic derivational chain. This means that phonological information cannot play a role in the morphological operations themselves. Other theories (e.g. LFG and HPSG, see Nordlinger and Sadler, Chapter  this volume) argue that all information, including phonology, has to be available at the same time. The most-researched interface phenomenon between morphology and phonology, however, is allomorphy, and almost every theory has something to say about it. The most pressing question with regard to allomorphy is whether variants of stems or affixes are computed from some underlying form or are listed and selected from memory. This brings us to the final major issue: the relation between morphology and the lexicon.

1.2.7 Morphology and the lexicon

Morphology is a part of grammar, and many theories make a principled distinction between the grammar and the lexicon. However, morphology is the grammar of words, and words live in the lexicon. This means that we have to ask whether morphology happens in the lexicon or whether the lexicon and the morphology are different domains, connected via an interface. Terminology is muddled here, and we often find different understandings of the same term, or different terms for the same notion. For example, Distributed Morphology has a vocabulary, which corresponds to the lexicon in other theories. Earlier generative theories distinguish a lexicon of morphemes and a dictionary of words (see ten Hacken, Chapter  this volume). The distinction between lexicon and grammar is intimately related to the division of labour between storage and computation. This issue is especially pertinent to the chapters in Part III of this volume that discuss morphology in first and second language acquisition (Blom, Chapter , and Archibald and Libben, Chapter ), in psycho- and neurolinguistics (Gagné and Spalding, Chapter , and Schiller and Verdonschot, Chapter ), and in computational modelling (Pirrelli, Chapter ). However, it is also relevant to morphological theory itself, which has to decide on the format of lexical representations and on the kinds of items assumed to be in the mental lexicon. Again, this is an area where word-based and morpheme-based theories clash. While the former expect the smallest entries in the lexicon to be word-sized (Blevins, Ackerman, and Malouf, Chapter  this volume, and Gisborne, Chapter  this volume), the latter posit entries for morphemes or even smaller structures (Siddiqi, Chapter  this volume). The crux is the modelling of regularly inflected word forms.
Such forms are predictable enough to be handled by grammar, yet some degree of listed knowledge is necessary to choose the right form among alternatives, for example if the language has inflectional classes (Blevins, Ackerman, and Malouf, Chapter  this volume). Generally, models differ in the degree to which they embrace or reject redundancy in areas that can be handled both by lexical storage and by grammatical computation. Last but not least, a major and problematic issue is productivity, the capacity to generate new complex forms with a particular structure. In contrast to syntax, where full productivity
is commonly seen as the norm, morphology—especially derivational morphology—is rife with semi-productive or unproductive patterns (see Hüning, Chapter  this volume). An important challenge for morphological theories lies in the modelling of such limited productivity. Theories that emphasize the generative capacity of the system commonly invoke constraints or filters that block non-existent forms (see e.g. Chapters  and  by Downing and by Siddiqi, respectively); others argue for built-in limitations in the system itself (Jackendoff and Audring, Chapter  this volume). A considerable degree of agreement is found in the modelling of blocking, where a well-formed but non-existent complex word (say, stealer) is impeded by an existing form with the same meaning (thief). Almost all theories that have something to say about blocking invoke a principle by which the specific properties of the listed form block the application of a more general rule.

1.2.8 Taxonomies of theories

In the linguistic literature we find various attempts to classify morphological theories. Two of them are repeatedly cited in the present volume. The earliest is Hockett’s () ‘Two Models of Grammatical Description’, which distinguishes Item-and-Process from Item-and-Arrangement types of theories. Blevins, Ackerman, and Malouf (Chapter  this volume) explain the difference between the two morphemic models, contrasting them with their own Word-and-Paradigm approach. The second classification is Stump’s () well-known distinction of lexical versus inferential and incremental versus realizational theories, giving us a four-way taxonomy, which is laid out in Stump’s chapter ‘Theoretical issues in inflection’ (Chapter  this volume). Various theories presented in Part II of the volume explicitly position themselves on this grid. A very recent classification is proposed in Stewart’s () book, which sorts morphological theories along each of five axes, explicitly incorporating one of Stump’s classifications:

• morpheme-based vs. word-/lexeme-based
• formalist vs. functionalist
• in-grammar vs. in-lexicon
• phonological formalism vs. syntactic formalism
• incremental vs. realizational.

As some of the theories discussed by Stewart converge with those in the present volume, the reader is encouraged to consult the monograph for details. Finally, Blevins’ () distinction of constructive versus abstractive models is helpful due to its more nuanced take on the theoretical treatment of sub-word structures. The abstractive view, in particular, allows for a combination of word-basedness and word-internal structure, which might be an opportunity for consensus. Generally, it should be kept in mind that theoretical frameworks can have different goals and rest on different foundational assumptions. While one theory emphasizes descriptive coverage or psychological plausibility, others stress computational implementability and/or architectural parsimony, that is, shorter descriptions and minimal machinery. Among the theories that seek parsimony, we find those that strive to minimize storage (these are clearly
in the majority) and those that attempt to minimize computation. Such basic decisions have deep repercussions for the architecture of the model and for the items and processes assumed. Finally, it should be noted that not all models presented in this book are bona fide theories of morphology. Some are in fact theories of syntax (e.g. Minimalism), and one (OT, see Downing, Chapter  this volume) is mainly a theory of phonology. However, each chapter illustrates the perspectives on morphology taken by these theories.

1.3 The structure of the handbook


1.3.1 Part I: Issues in morphology

Part I of the volume sets the scene. It starts with a brief foray into the history of morphology with a focus on North America (Anderson, Chapter ). However, the journey begins in Switzerland, with the brothers de Saussure, Ferdinand and René, and their disagreement on the internal structure of words. While René saw complex words as concatenations of simple signs, later called morphemes, Ferdinand regarded the full word as the basic sign. To him, morphological structure emerged from inter-word relations. The morpheme-based view was perpetuated by Bloomfield (), who differentiated between a lexicon of primitives, on the one hand, and the rules of grammar, on the other. Full words came back into view with Matthews’ (), Aronoff’s (), and Anderson’s () work, which reinstated the paradigmatic, relational perspective and found its most radical expression in Anderson’s ‘amorphousness’ hypothesis, advocating morphology without morphemes. In addition to sketching the swing of the historical pendulum between word-based and morpheme-based models, the chapter shows the influence of Boas, Sapir, Harris, Chomsky, and Halle on the emergence of morphology as an independent domain in theoretical linguistics, and introduces some of the fundamental debates that have shaped the theoretical landscape in the following decades.

The next two chapters identify the central theoretical issues within the two morphological domains: word-formation and inflection. Lieber’s contribution (Chapter ) on derivation and compounding also starts with a major historical divide, namely Item-and-Arrangement versus Item-and-Process types of theories (Hockett ). While the former makes morphology similar to syntax in assuming a hierarchical structure of minimal meaningful units, the latter emphasizes the importance of rules in deriving, or realizing, complex words. Here, morphology offers a variety of challenges.
Do the rules of morphology have the same format as the rules of syntax? Can realizational rules, popular in modern theories of inflection, be fruitfully applied to derivation? The chapter continues with a discussion of interface issues between morphology and syntax, morphology and phonology, and morphology and semantics. It concludes with a number of hot topics such as headedness, productivity, blocking, affix ordering, bracketing paradoxes, and derivational paradigms.

In Chapter , Stump discusses theoretical issues in inflection. He singles out a number of fundamental points of disagreement between theories of morphology. These are: (a) what
counts as the basic unit of morphological analysis; (b) which structures belong to inflection; (c) the relation between concatenative and non-concatenative morphology; (d) the relation between function and form; and (e) the difference between inflection and other types of morphology. After outlining the issues, the chapter takes a position on each of them. As the general perspective of the chapter is inferential-realizational, Stump argues for paradigms and against morphemes, for rules of exponence and implicative rules, and for a unified treatment of concatenative and non-concatenative morphology. Morphology is argued to have its own domain in the grammar, distinct from but interfacing with syntax.

1.3.2 Part II: Morphological theories

Part II consists of concise but thorough accounts of the main theoretical approaches to morphology, both formalist and functionalist/cognitive, developed during the twentieth and early twenty-first centuries. Some chapters discuss clusters or families of models, but most are dedicated to one specific approach. The first three chapters provide an overview of three clusters of theories: those commonly subsumed under the label Structuralism (Chapter ), the transformational theories of early Generative Grammar (Chapter ), and the lexicalist models of later Generative Grammar (Chapter ).

Stewart (Chapter ) identifies Structuralism as a formative period in the history of linguistics. It brought a re-evaluation of the theoretical and descriptive machinery inherited from antiquity and established linguistics as an autonomous scientific discipline. The central characteristic of the movement was the understanding that each language constitutes a system in itself which should be investigated empirically, on the basis of the distributional patterns of forms. This involved overcoming the focus on Indo-European, on culturally privileged languages, and on diachrony. The result was a flourishing of scholarly work on both sides of the Atlantic, with—among many others—de Saussure, Hjelmslev, Jakobson, Trubetzkoy, and Vachek in Europe and Boas, Whorf, Sapir, Bloomfield, Harris, Hockett, and Nida in North America. Important issues for morphology were the place of morphology in the architecture of the grammar, the identification and representation of morphological units and processes, and the interaction of morphology with other linguistic domains.

The 1950s and 1960s saw the rise of Generative Grammar. Ten Hacken (Chapter ) discusses three seminal publications from this period, Chomsky (), Lees (), and Chomsky (), and—more briefly—two later publications, Halle () and Jackendoff (), which are the focus of Chapter .
The central innovations in early Generative Grammar were rewrite rules, including transformational rules, that promised to make complex grammatical structures computable. While mainly devised for syntax, the model was also applied to morphological structure. A lexicon was added to account for idiosyncratic properties of words, marking the beginning of the debate between storage and computation, still very much alive today (see Chapters –). Other major issues of the time were the incorporation of constraints into the generative model and the place and role of semantics.

The history of Generative Grammar is continued by Montermini with Chapter  on the development of Lexicalism. The hallmark of lexicalist theories is the assumption that word-internal phenomena are situated in a distinct module, independent of syntax and phonology. For many theories, this included the belief that the grammar of words is not
only separated, but also substantially different from the grammar of phrases. Montermini discusses two foundational publications, Halle () and Jackendoff (), which can be seen as the first lexicalist models, although they diverge fundamentally in their assumptions about the interplay of grammar and lexicon and the nature of the lexicon itself. The lexicalist spirit continued through Aronoff’s work on derivation and Anderson’s work on inflection, the latter stressing not only the division between morphology and syntax, but also the need to distinguish between inflection and derivation. The surge of lexicalist work from the 1970s onwards established morphology as a phenomenon ‘by itself’ and a self-respecting field of linguistic inquiry.

Chapters – describe models of a ‘formalist’ orientation. The direct inheritors of Chomskyan Generative Grammar are Distributed Morphology and Minimalism, while Optimality Theory and LFG/HPSG constitute radically different models.

Distributed Morphology (Siddiqi, Chapter ) represents a countermovement to the Lexicalism described in Chapter : it is a theory of syntax that extends into the word by manipulating morphemes. The chapter motivates the general outlook as well as the more specific choice for a lexical-realizational, morpheme-based, Item-and-Arrangement type of model and outlines its various incarnations, depending on the syntactic theory of the time. Some variants distinguish a separate level of Morphological Structure, later abandoned. Complex words and phrases are built in two steps: the grammar constructs a complete derivation, which is then instantiated by Vocabulary Items and spelled out phonologically. This architecture makes a number of classic morphological issues—among others productivity, blocking, and allomorphy—appear in a different light, as is elaborated in the chapter. Another theory that is actually a family of syntactic models is Minimalism.
Fábregas explains the Minimalist views on morphology in Chapter . The name of the framework advertises its emphasis on a minimal grammatical component, as most constraints on language are seen as located either in Universal Grammar, in language-external systems—especially the Conceptual-Intentional (CI) and the sensorimotor (SM) systems—or in the variable experience of individual speakers and learners. In its most minimal form, computations are done by a single operation, Merge. The chapter explains how the theory models lexical restrictions, grammatical categories, Aktionsart, and argument structure and discusses the rules of spell-out and the role of features.

Chapter  by Downing illustrates how Optimality Theory addresses the issues of prosodic morphology, specifically the non-concatenative phenomena known as reduplication, truncation, root-and-pattern morphology, and infixation. The model employs three types of constraints—faithfulness, markedness, and alignment—to determine the optimal form of a word or phrase. Constraint evaluation is demonstrated on a wide variety of languages, among them SiSwati, Japanese, Modern Hebrew, Samoan, Diyari, and Nupe. Important theoretical issues are (a) whether constraints can be stated generally or are specific to a certain morphological operation, construction, or morpheme, and (b) whether restrictions (e.g. on the size of the optimal nickname or the location of the optimal infix) follow from other properties, such as the stress type or syllable structure of the language.

Chapter  presents two distinct but related theories, Lexical-Functional Grammar (LFG) and Head-Driven Phrase Structure Grammar (HPSG). Nordlinger and Sadler briefly explain the architecture and the formalism of the two models, highlighting their strong lexicalist commitment, which states that word-internal structure is invisible to syntax. This
perspective implies that both theories are compatible with a variety of morphological models, as long as the lexicalist stance is maintained. In LFG, the emphasis is on the way different formal structures across languages can map onto the same functional structure. Some variants of HPSG are similar to construction-based theories (cf. Masini and Audring, Chapter  this volume) in that they model derivational rules as lexical items, while inflection is often understood as being realizational. The chapter discusses a variety of phenomena, from case stacking to paradigms, stem space, and floating affixes, in a number of typologically diverse languages. Both theories are fully formalized and implementable in computational models.

In Chapter , Gaeta sketches Natural Morphology, a framework that strives to explain why morphological systems are the way they are and develop in the way they do. At the heart of the theory lies the notion of ‘naturalness’, understood as “cognitively simple, easily accessible (esp. to children), elementary and therefore universally preferred” (Dressler : ). Naturalness manifests itself in preferences rather than laws. Such preferences—both typological and system-specific—can be in conflict with each other, resulting in cross-linguistic diversity. The chapter introduces the naturalness parameters (i) diagrammaticity (transparency); (ii) biuniqueness (uniform coding); (iii) indexicality (proximity); (iv) binarity; and (v) optimal word shape, and exemplifies how they bear on productivity, paradigm structure, and language change.

Chapters – form a loose cluster of allied models of the Word-and-Paradigm type. The general outlook is described succinctly in the contribution by Blevins, Ackerman, and Malouf, Chapter . A major cornerstone is the focus on paradigmatic relations among words, which other models tend to neglect in favour of word-internal syntagmatics.
Paradigmatic relations can take the form of inflectional paradigms or classes, but they are implicated whenever a word, or a cluster of words, is predictive of another. The Word-and-Paradigm (perhaps better called Item-and-Pattern) approach involves a broadly inclusive view on the size and granularity of morphosyntactic items, as it is “defined less by the units it recognizes than by the relations it establishes between units” (§.). That said, the word might be a privileged unit, both in the stability of its form and function and the mapping between them, and in the degree to which it predicts other words. Formalizations of Word-and-Paradigm models use the mathematics of information theory to calculate the entropy of a given paradigm cell and the reduction of uncertainty effected by another cell or cluster of cells. The chapter closes with the unique perspective on learnability and cross-linguistic variation afforded by the information-theoretic approach.

In Chapter , Stump presents his influential Paradigm Function Morphology, an inferential-realizational theory, which means that it rejects the listing of morphemes and the accumulation of properties by stringing morphemes together. Instead, the model assumes a Paradigm Function that operates on stems and cells of inflectional paradigms to induce the realization of each cell, that is, the phonological form of the fully inflected word. The model employs an explicit and rigorous formalism based on property sets and functions. The chapter lays out an earlier and a later variant of the theory and illustrates the basic functions. As the theory emphasizes its inclusive coverage, the second half of the chapter is devoted to non-canonical inflectional morphology, as manifested in defectiveness, syncretism, inflection classes, and deponency. The chapter closes with a brief look at derivation and the various interfaces between morphology and other domains.
Network Morphology, outlined by Brown in Chapter , has much in common with PFM—centrally the inferential-realizational orientation—but differs in its architecture. As its name suggests, the model assumes an inheritance network containing lexemes and generalizations over their properties. Its aim is to model inflectional systems by developing the most parsimonious network that contains all the information necessary for inferring the correct form of each inflected word. This means determining the right level for every generalization (e.g. about patterns of syncretism or stem allomorphy) and ordering properties such as number or case in such a way that queries about a particular inflected form are guided to the place in the network where the answer is encoded. The model is formalized and computationally implemented with the help of the DATR notation (Evans and Gazdar ). The chapter explains central notions such as default inheritance, underspecification, and generalized referral and shows the application of the model in a number of case studies, including a diachronic one.

Word Grammar, discussed by Gisborne in Chapter , shares many traits with realizational models such as PFM and is network-based like Network Morphology, but differs radically in the entities it models. In line with the cognitive orientation of the theory, nodes in a Word Grammar network encode linguistic knowledge directly and declaratively, requiring no procedures or algorithms. The network encodes three types of information: linguistic structure of various kinds (the nodes), the relations between nodes, and certain attributes that specify the relations (e.g. realization, base, variant, or part). Inflected and derived forms are represented in full. Morphemic structure is encoded indirectly, via relations between forms that share parts. Generalizations, including those normally expressed as features, are captured by means of default inheritance.
The chapter also discusses the difference between inflection and derivation and the interfaces of morphology with the lexicon and with syntax, and comments on phenomena such as productivity and syncretism.

Word Grammar forms a bridge to the more cognitively oriented models in Chapters –. The first and most venerable is Cognitive Grammar by Langacker (Chapter ). Including this theory in the volume might seem surprising, as it only recognizes two types of structure—semantic and phonological—and excludes morphological structure. Yet, the model allows for the expression of morphological units and patterns, both in individual words and as generalized constructional schemas. The perspective is explicitly usage-based: any unit of structure is abstracted from production or perception events and entrenched through recurrent use. Larger structures appear as composites if their parts correspond to (parts of) other structures. Stems can be distinguished from affixes in that affixes are dependent items that need other structures to be manifested. However, the analysability of complex items is a matter of degree and can change over time. The theory provides a unified account of language structure, within which morphology is not highly differentiated, but seamlessly integrated.

Construction Morphology (Masini and Audring, Chapter ) is the morphological theory within the framework of Construction Grammar. It shares a number of properties with Cognitive Grammar, especially its usage-basedness and the notion of constructional schemas. However, it assumes morphological structure as an independent layer of information. The central unit of analysis is the construction, understood as a sign, a form–meaning pairing. Constructions can be fully specified, in which case they correspond to words, or
they can be partly or fully schematic. Schematic or semi-schematic constructions are the counterpart of rules in more procedural models, since they serve as templates for the creation of new words. All constructions are situated in a network which combines the lexicon and the grammar into a continuous and highly structured environment. As the same basic architecture is assumed for morphological and syntactic constructions, the model has a specific affinity with in-between phenomena such as multi-word units.

The newest theory in the volume is Relational Morphology (Jackendoff and Audring, Chapter ), an account of morphology set in the framework of the Parallel Architecture (Jackendoff ). The model is a sister theory of Construction Morphology, but differs by virtue of its radical focus on lexical relations, its inclusion of non-symbolic structures, and its formalism. Special theoretical attention is paid to unproductive patterns, which are regarded as more basic: productive patterns are patterns ‘gone viral’. Like all construction-based theories, but more explicitly so, the model is a theory both of morphology and of the rich internal structure of the lexicon. Moreover, it aspires to a graceful integration of morphology within a general and cognitively plausible model of language, and of language within other areas of cognition.

The survey of theories concludes with Canonical Typology (Bond, Chapter ), which is special in not being a theory in the usual sense, but providing a methodological framework for a typologically informed understanding of linguistic phenomena and a better comparability of theoretical terms and concepts. Most of the work in Canonical Typology is on morphology and morphosyntax, especially inflection, with the closest ties to inferential-realizational models like PFM (Chapter ). The method consists in the identification of a canonical core for a phenomenon and the possibility space of less canonical variants around it.
Both the core and the possibility space are defined logically; establishing the actual population of the space by real-life examples is an independent, later step. The chapter outlines the method in detail and provides a wealth of references on the canonical approach as applied to a wide variety of phenomena.

1.3.3 Part III: Morphological theory and other fields

Part III of the volume is devoted to the interdisciplinary dimension. It presents observations and insights from other linguistic fields relevant for morphological theory, namely language typology (including creole languages), dialectal and sociolectal variation, diachrony, first and second language acquisition, psycholinguistics, neurolinguistics, computational linguistics, and sign languages. The chapters in this part do not discuss what the different theories of morphology have to say about the various fields (this should emerge—where relevant—from Part II), but illustrate how each field informs and challenges morphological theory.

Chapter  by Arkadiev and Klamer on morphological theory and language typology discusses the challenges that languages around the world pose for common theoretical concepts and terminology. This is especially true for morphology, since it is the domain where languages differ most, which stands in the way of cross-linguistic generalizations. Richly illustrated with examples and supported by a wealth of references, the chapter shows the difficulties associated with the notion of the word, the distinction between inflection and

derivation, the deviations from biuniqueness in form–meaning mapping, the ordering of affixes, and various paradigmatic phenomena such as inflectional classes and morphomic allomorphy in stems and affixes. The authors conclude by arguing for greater collaborative efforts among typologists, theoreticians, and descriptive linguists in order to arrive at theoretically informed descriptions, dictionaries, and corpora, on the one hand, and typologically informed theories, on the other.

In Chapter , Luís carries on the typological theme with a survey of the morphology of creole languages. Creoles are often neglected in theoretical morphology, as their morphological systems are said to be poorly developed. The chapter refutes this assumption, showing the interesting diversity of morphological, especially derivational, patterns found in creole languages. These include affixes from both the superstrate and the substrate language, as well as novel morphological formatives, which gives interesting insights into the genesis of affixal morphology. While inflectional systems in creoles are indeed often simpler, these languages do show complexities such as portmanteau morphemes, extended exponence, syncretism, allomorphy (including morphomic stem allomorphy), and inflectional classes. The chapter demonstrates that creole morphology is as interesting to analyse formally and discuss theoretically as is the morphology of non-creole languages.

The issue of diachronic change, pertinent to the creole languages discussed in Chapter , is addressed more broadly in Chapter  by Hüning. The chapter focuses on word-formation and discusses three major types of change: (a) the rise of new word-formation patterns by way of reanalysis, for example ‘affix telescoping’ or resegmentation; (b) the development of new affixes from lexical words through grammaticalization; and (c) the increase or decrease of productivity.
Productivity proves especially problematic, being hard to establish synchronically, but even harder to assess diachronically. A general problem is the gradience that the ever-changing nature of language imposes on all entities, properties, and behaviour, making them difficult to capture in fixed theoretical categories and terms. The chapter closes with a plea for interdisciplinary, data-driven research, and a usage-based approach that is better suited to the emergent nature of language. Variation from a synchronic perspective—with some additional discussion of pathways of change to complement Chapter —is addressed by Ralli in Chapter . The chapter identifies certain recurrent types of morphological variation in inflection, derivation, and compounding, with illustrative analyses of modern Greek dialect data. For inflection, patterns of special interest are overabundance, heteroclisis, and allomorphic variation in paradigms. In derivation, we find affix synonymy and affix competition. In the realm of compounding, the Greek data show puzzling doublets of left-headed (and exocentric) and right-headed (and endocentric) compounds with the same meaning. For all three domains of morphology the chapter stresses the importance of language contact as a trigger of variation and change, and as an explanatory factor in view of the often surprising variational patterns. The volume continues with four loosely connected chapters on morphological theory and first and second language acquisition, psycholinguistics, and neurolinguistics. All four chapters share a common fundamental theme: the division of labour between storage and computation in complex words. In Chapter , Blom outlines how data from first language acquisition can inform morphological theory. A central topic is the ‘past tense debate’ inquiring whether irregularly and regularly inflected English verbs are treated differently in processing, with full-form




lookup for the former and computation from their parts for the latter.2 While the evidence is not conclusive, analyses of child language indicate a gradual acquisition curve with frequency effects both in acquisition order and in overgeneralization patterns, which suggests that lexical storage also matters for regularly inflected words. Results from language development in children with Specific Language Impairment or Williams syndrome, by contrast, do support a difference between the regular and irregular words. To date, the past tense debate remains unresolved. Deeper understanding can only be expected if individual and crosslinguistic variation is considered, as well as the interplay of morphology, phonology, and syntax and wider cognitive factors. Archibald and Libben in Chapter  move the spotlight of attention to morphological theory and second language acquisition. The issues in this field are partly the same as in first language acquisition. What do error patterns tell us about linguistic knowledge? Which deficits are betrayed by a particular error? What makes certain structures difficult to acquire? Additional questions in second language acquisition concern the influence of the L1, the typical cognitive strategies of adult learners, and the representation of the bilingual lexicon and grammar in the brain. An important insight, also mentioned in Chapter , is that morphological errors need not represent morphological deficits. Instead, they may be caused by incorrect mappings of morphological knowledge to other aspects of linguistic competence, for example phonology. The chapter also problematizes the question of what constitutes morphological ability and presses the point that morphological knowledge cannot be investigated in isolation from other kinds of knowledge. In addition, scientific results are highly task- and methodology-dependent and may differ markedly for production and comprehension. 
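The storage-versus-computation contrast at the heart of the past tense debate can be sketched in miniature. The following toy model is my illustration, not any specific proposal from the literature; the names `IRREGULARS` and `past_tense` are invented. It retrieves irregular forms by full-form lookup and computes regular forms by rule:

```python
# Toy dual-route sketch of the past tense debate (an illustration only, not
# a model from the literature; IRREGULARS and past_tense are invented names).

IRREGULARS = {"go": "went", "sing": "sang", "bring": "brought"}

def past_tense(verb: str) -> str:
    # Route 1: full-form lookup for stored irregulars.
    if verb in IRREGULARS:
        return IRREGULARS[verb]
    # Route 2: computation from parts (stem + -ed), with a minimal
    # spelling adjustment for stems already ending in -e.
    return verb + "d" if verb.endswith("e") else verb + "ed"

print(past_tense("go"))    # went   (lookup)
print(past_tense("walk"))  # walked (rule)
print(past_tense("bake"))  # baked  (rule)
```

A single-route model would instead handle both word types with one mechanism; which architecture better fits acquisition and processing data is precisely what the debate is about.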
Chapter  by Gagné and Spalding broadens the view from language acquisition to psycholinguistics in general, focusing on the key question for morphology: the representation and processing of complex words in the mental lexicon. The central debate is whether complex words are stored in full or computed from their parts, or indeed both—in succession or in parallel. The chapter reviews a wide variety of psycholinguistic research from different experimental paradigms and concludes that there is strong overall evidence for the involvement of sub-word units in the processing of multi-morphemic words. However, the effects differ depending on frequency, on semantic transparency, and on whether the complex word is inflected, derived, or a compound. Sub-word units may have a facilitatory or inhibitory effect depending, again, on frequency and on the time window in the processing event. The chapter closes with an agenda for future work, emphasizing the need for a closer integration of experimental and theoretical morphology. The fourth chapter in the cluster, Chapter , is Schiller and Verdonschot’s contribution on morphological theory and neurolinguistics. Neurolinguistics differs from psycholinguistics primarily in its methods: most of the evidence cited in the chapter comes from brain imaging

2 The terms ‘single route’ and ‘dual route’ are used in this connection; these terms also appear in Chapters  and . However, the reader should be aware that they are not always used in the same sense. Dual route is often associated with different processing mechanisms for different types of word (e.g. in Blom, Chapter , and Schiller and Verdonschot, Chapter ). However, the term can also mean different processing strategies for the same type of word (e.g. in Gagné and Spalding, Chapter ). Evidence in favour of parallel lookup and computation for various types of complex word would support a dual route theory in the latter sense, but not in the former.


studies using ERP or fMRI. Again, the main issue is the role of sub-word structure in the processing of complex words. The chapter provides a broad and detailed overview of recent research on language comprehension, that is, parsing, and language production, the less-studied perspective. Evidence from healthy speakers is discussed as well as studies on individuals with aphasia or other language disorders. The chapter presents a variety of experimental paradigms, from priming and grammatical violation experiments to lexical decision tasks and picture naming. Drawing especially on compound processing, the chapter argues for an important role of morphemic constituents, indicating morphological decomposition in both comprehension and production. The volume continues with Pirrelli (Chapter ) on morphological theory, computational linguistics, and word processing. The chapter reviews computational models of language processing such as finite state automata and finite state transducers, hierarchical lexica, artificial neural networks, and dynamic memories. Illustrations are given with the help of Italian verbal paradigms. A substantial part of the chapter is devoted to machine learning, both supervised and unsupervised. Each section concludes with a critical assessment of theoretical issues, pointing out ties to individual theoretical frameworks or to problem areas such as the interplay of storage and computation, the nature of representations, the encoding of general versus specific information, and notions such as entropy and economy. The chapter argues for an inclusive modelling of lexical and grammatical knowledge and highlights the mutual interdependence of word structure and word processing. The final Chapter  by Napoli broadens the view from spoken language to sign language. The particular affordances and restrictions of sign languages pose considerable challenges to morphological theory. 
For example, signs can be uttered in parallel, adding a vertical structural dimension not found in speech. Moreover, sign phonology, in particular non-manual parameters, can be meaningful, which obscures the boundary between phonology and morphology. Other special properties can be attributed to the relative youth of sign languages, which limits the amount of grammaticalized morphology. Established theoretical notions are often hard to apply to sign, for example in identifying roots and affixes or distinguishing lexical categories. Compounding and affixation are notoriously hard to tell apart. On the other hand, there are morphological entities unique to sign, such as ion morphs: partially complete morphemes that need to be accompanied by a particular phonological parameter to yield a full lexical meaning. The chapter offers a broad overview of the issues and a wealth of references.

1.4 Conclusion

..................................................................................................................................

In conclusion, we hope that this handbook will serve as a guide through the jungle of theories in today’s linguistic morphology, and the phenomena they seek to account for. At the same time, we intend the volume to be helpful in fostering the dialogue among sub-disciplines that is much needed for a graceful integration of linguistic thinking. We hope that the book will be inspiring and useful to graduate students in linguistics as well as to scholars of various disciplines, from morphologists wishing to acquaint themselves with neighbouring or competing models to specialists from other subfields of linguistics.


  ........................................................................................................................

ISSUES IN MORPHOLOGY ........................................................................................................................


  ......................................................................................................................

                ......................................................................................................................

 . 

I in the nature of language has included attention to the nature and structure of words—what we call Morphology—at least since the studies of the ancient Indian, Greek, and Arab grammarians, and so any history of the subject that attempted to cover its entire scope could hardly be a short one. Nonetheless, any history has to start somewhere, and in tracing the views most relevant to the state of morphological theory today, we can usefully start with the views of de Saussure. No, not that de Saussure, not the generally acknowledged progenitor of modern linguistics, Ferdinand de Saussure. Instead, his brother René, a mathematician, who was a major figure in the early twentieth-century Esperanto movement (Joseph ). Most of his written work was on topics in mathematics and physics, and on Esperanto, but de Saussure () is a short (-page) book devoted to word structure,1 in which he lays out a view of morphology that anticipates one side of a major theoretical opposition that we will follow in this chapter. René de Saussure begins by distinguishing simple words, on the one hand, and compounds (e.g. French porte-plume ‘pen-holder’) and derived words (e.g. French violoniste ‘violinist’), on the other. For the purposes of analysis, there are only two sorts of words: root words (e.g. French homme ‘man’) and affixes (e.g. French ‑iste in violoniste). But “[a]u point de vue logique, il n’y a pas de difference entre un radical et un affixe . . . [o]n peut donc considérer les affixes comme des mots simple, et les mots dérivés au moyen d’affixes, comme de véritables mots composés. Il n’y a plus alors que de deux sortes de mots: les mots simples (radicaux, préfixes, suffixes) et les mots composés par combinaison de mots 1

I am indebted to Prof. Louis de Saussure of the University of Neuchâtel for access to a copy of this work, found in the library of his father, the late Antoine de Saussure (son of Ferdinand’s brother LouisOctave de Saussure). This item appears to have gone unnoticed by linguists of the time or of ours, although it contains many ideas that would later come to prominence: for instance, the notion of hierarchical inheritance as the basis of semantic networks, and a clear statement of the Right-hand Head Rule that would later be formulated by Williams (). An edition of this work and a subsequent continuation (de Saussure ) has recently appeared (Anderson and de Saussure ).


simples”2 (de Saussure : f.). The simple words are then treated as ‘atoms’, each with an invariant sense and potentially variable content, and the remainder of the work is devoted to the principles by which these atoms are combined into ‘molecules’, each a hierarchically organized concatenation of the basic atoms. While the notion of the linguistic sign as an arbitrary and indissoluble unit combining form and meaning would be associated as an innovation with his brother Ferdinand, René here lays out a picture of word structure as a matter of structured combination of basic signs, units corresponding to what would later be called morphemes. His principal interest is in providing an analysis of the content of these basic elements from which it is possible to derive the meanings resulting from their combination, but this is grounded in a picture of complex words as essentially syntactic combinations of units that cannot be further decomposed. The word violoniste is thus composed of two equally basic units, both nouns: violon ‘violin’ and ‑iste ‘person, whose profession or habitual occupation is characterized by the root to which this element is attached’. This may seem rather straightforward, and indeed much subsequent thinking about morphology would take such a position as virtually self-evident, but it can be contrasted with the view of complex words taken by René’s brother Ferdinand. Rather than treating all formational elements found in words as equally basic units, and complex or derived words as combinations of these, de Saussure ( []) distinguishes basic or minimal signs from relatively or partially motivated signs. Thus, arbre ‘tree’ and poirier ‘pear tree’ are both signs. The former is not further analyzable, and thus basic, but in the case of the latter, the form and content link it to other pairs such as cerise ‘cherry’, cerisier ‘cherry tree’; pomme ‘apple’, pommier ‘apple tree’, etc. 
It is the parallel relation between the members of these pairs, conceived by Ferdinand in terms of analogical proportions, that supports (or partially motivates) the meaning of poirier in relation to that of poire—and not the presence of a structural unit ‑ier ‘tree, whose product is characterized by the root to which this element is attached’. Ferdinand de Saussure was clearly familiar with the equivalent in various languages of the German word morfem ‘morpheme’ which appeared in the earlier work of Jan Baudouin de Courtenay (Anderson b) in much the same sense as René de Saussure’s ‘simple words’ or ‘atoms’. He does not use it, however, and does not present the analysis of complex derived words as a matter of decomposing them into basic units or minimal signs. Rather, he treats morphological structure as grounded in the relations between classes of words: similarities of form reflecting similarities of content and vice versa directly. The two brothers were no doubt familiar to some extent with one another’s views, and Ferdinand (de Saussure  []: ff.), without reference to René’s work, notes the existence of a difference between two approaches to word structure, one based on interword relations and the other on the identification of internal components of words. He suggests some possible arguments for the latter, but by and large adopts the former, relational picture. René (de Saussure : ) later summarizes and oversimplifies Ferdinand’s discussion, concluding (incorrectly, it would seem) that this supports his

2 From the point of view of logic, there is no difference between a root and an affix. Affixes can thus be considered as simple words, and affixally derived words as really compound words. There are therefore only two sorts of word: simple words (roots, prefixes, and suffixes) and compound words made by combining simple words.




view, although there is no evidence that Ferdinand was persuaded to adopt René’s mode of analysis as opposed to merely seeing that there were issues to be resolved here. Much subsequent writing would identify the notion of the morpheme as it later emerged with the minimal sign of de Saussure ( []), and thus assume that “Saussure’s” treatment of morphologically complex words involved breaking them down into components of this sort, but such an analysis is really most appropriate for the work of René de Saussure, and not for Ferdinand. In the work of the two, we can already discern a difference between something in René’s work corresponding to what Stump () would later call a lexical theory and something in Ferdinand’s that could be called an inferential one. Such a basic dichotomy of morphological theory is not the only anticipation of later distinctions that we can find in work of the early twentieth century. In both the practice of the grammars that appear in the Handbook of American Indian Languages and its introduction (Boas , see also Anderson : ch. ), Franz Boas maintains a theory of morphology with some distinctly modern features. In particular, the treatments of morphological structure in these descriptions are divided into two parts: on the one hand, an inventory of the grammatical processes employed in the language (e.g. prefixation, suffixation, internal modification such as Ablaut, etc.) and, on the other, an inventory of the ideas expressed by grammatical processes, such as number, tense or aspect, causativity, etc. In practice, this division was deployed in much the same way as under the Separation Hypothesis of Beard (), according to which a language’s morphology consists of a collection of possible formal modifications any of which can be used to express any of the categories of content signaled by the form of a word.
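The division Boas draws, and that Beard’s Separation Hypothesis makes explicit, can be sketched as two independent inventories plus a language-particular pairing between them. This is a hypothetical illustration of the general idea, not an implementation of either author’s system; all identifiers are invented:

```python
# Hedged sketch of the separation of 'grammatical processes' from the
# 'concepts' they express. All function and table names are invented.

def suffixation(stem: str, affix: str) -> str:
    # Formal process: add material after the stem.
    return stem + affix

def reduplication(stem: str) -> str:
    # Formal process: copy the stem (full reduplication, for simplicity).
    return stem + stem

def ablaut(stem: str, old: str, new: str) -> str:
    # Formal process: internal vowel modification.
    return stem.replace(old, new, 1)

# A hypothetical language's pairing of content categories with processes.
# Under the Separation Hypothesis, any process could express any category:
# nothing forces 'plural' to be a suffix rather than reduplication.
realize = {
    "plural": lambda s: reduplication(s),
    "past": lambda s: ablaut(s, "i", "a"),
    "agent": lambda s: suffixation(s, "-ist"),
}

print(realize["plural"]("tuk"))     # tuktuk
print(realize["past"]("sing"))      # sang
print(realize["agent"]("violin"))   # violin-ist
```

The point of the design is that the table `realize`, not the processes themselves, is where a particular language’s morphology lives: the same inventory of processes could be paired with the categories quite differently.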

2.1 Antecedents of Generative Morphology

..................................................................................................................................

However interesting it may be to find such precursors of contemporary issues in views more than a century old, there was not in fact very much continuity between those discussions and the American Structuralist3 linguistics of the late s and beyond that served as the direct ancestor of the picture of word structure that emerged in generative theories. After a brief look at the history of the period, we turn to those views.

2.1.1 Edward Sapir

A major figure in our field whose work has connections to that of Boas is Edward Sapir. Most of Sapir’s writing was focused on descriptive problems, primarily in the native

3 The linguists who saw their approach to language as originating in the work of Leonard Bloomfield referred to it as “Descriptivist” linguistics; “American Structuralism” was a term introduced by their detractors. Nonetheless, it has become the standard label, and we follow that usage here. The limitation of the discussion below to American work is grounded not in the lack of anything of interest in other parts of the world, but rather in the fact that ideas about morphology in later generative work can be derived almost exclusively from American antecedents.


languages of North America. Apart from this, Sapir is best known for his focus on language as a component of the mind, including some important papers on phonology and the psychological nature of phonological structure (Anderson : ch. ), issues that do not directly concern us here. Apart from what one might conclude from his descriptive practice, however, Sapir did provide a view of morphology in the context of a typology of morphological structure across languages that appears in his little general audience book Language (Sapir ; cf. also Anderson ). Sapir’s intention is to improve on the kind of typology inherited from nineteenth-century philology that differentiated languages as ‘analytic’ or ‘isolating’ vs. ‘synthetic’, and among synthetic languages as ‘agglutinative’, ‘polysynthetic’, and ‘inflecting’ or ‘fusional’—categories that are rather imprecise and difficult to apply in a consistent way across a language. Sapir substitutes a descriptive framework based on three dimensions. One of these refers to the degree of internal complexity of words, corresponding to the traditional scale running from ‘analytic’ through ‘polysynthetic’; he has little to say about this, and it will be ignored here. Rather more interesting are his other two dimensions. These derive fairly directly from Boas’ early form of the Separation Hypothesis noted above:

    The question of form in language presents itself under two aspects. We may either consider the formal methods employed by a language, its “grammatical processes,” or we may ascertain the distribution of concepts with reference to formal expression. What are the formal patterns of the language? And what types of concepts make up the content of these formal patterns? The two points of view are quite distinct. (Sapir : )

As possible grammatical processes that might serve as the signal for concepts, Sapir offers a list:

• word order;
• composition (compounding of stems);
• affixation (prefixation, suffixation, infixation, etc.);
• internal modification (vocalic or consonantal Ablaut, Umlaut, consonant mutation, etc.);
• reduplication;
• variations in accent (pitch, stress, etc.).

It will be seen that in addition to signalling content by the combination of minimal sign-like units (“morphemes”), Sapir envisions a variety of ways in which systematic modification of the shape of a base can be used to indicate additional conceptual material. As a typology, the granularity of this classification is clearly much finer than that of whole languages, and it should rather be thought of as a way to characterize individual morphologically significant relations. As a theory of morphology, this is fairly clearly an inferential rather than a lexical one, in the terms of Stump (). Sapir’s other significant dimension, that of concepts, distinguishes four basic types:

• basic (radical) concepts;
• derivational concepts;
• concrete relational concepts;
• pure relational concepts.




The last two are essentially the sorts of thing that are generally ascribed to inflection, and the difference between them corresponds to the distinction between inflectionally significant properties that also bear semantic content (e.g. grammatical number) and those with purely grammatical significance, such as the use of the Latin Nominative to mark subjects. Every language has to have basic and pure relational concepts (even if the latter are signalled only by word order, and not by formal modification), but languages can differ as to the degree of elaboration of the other two types. Rather interestingly, Sapir takes the grammatical form of a sentence to be described in a way that attends to the distribution of relational concepts without reference to its actual morphological or semantic content. Thus, after comparing the two sentences The farmer kills the duckling and The man takes the chick, he observes that

    we feel instinctively, without the slightest attempt at conscious analysis, that the two sentences fit precisely the same pattern, differing only in their material trappings. In other words, they express identical relational concepts in an identical manner. (Sapir : )

The relevant grammatical patterns, then, are to be described independently of the concrete lexical (and morphological) material that will instantiate them. As far as the actual apparatus of syntactic description, Sapir’s view is no more fleshed out than others of the period prior to the focus on syntax associated with the rise of Generative Grammar, but it does not seem unfair to characterize it as a precursor to subsequent Late Insertion theories such as those of Otero (), Anderson (), Halle and Marantz (), and others. Had Sapir been as engaged by morphology as he was by phonology, the typological framework he developed in Sapir () suggests that the theoretical position he would have arrived at would have been quite interesting in contemporary terms. However, there is little evidence of a continuing interest in these issues in his later work. In any event, his conception of language as an aspect of the mind put him rather at odds with the emerging positivist climate of the s and s; while linguists continued to recognize his importance, he had little influence on the views that came to define Structural Linguistics.

2.1.2 Leonard Bloomfield

Much more central to the field of Linguistics as it developed an identity independent of its origins in Classical Philology and in Anthropology were the proposals of Leonard Bloomfield.4 Although his textbook (Bloomfield ) bears the same title as Sapir’s earlier work, the views of the subject matter expressed in the two works were vastly different. As opposed to earlier linguists whose conception of language was shaped in large part by traditional grammar of the European sort, Bloomfield was strongly influenced by his study of the Sanskrit grammarians. Where traditional grammar largely saw morphological relations as grounded in paradigmatic relations among word forms, the Sanskrit view emphasized

4 Matthews () shows that Bloomfield’s actual views on morphology, and the history of those views, should be understood as significantly more complex and nuanced than the version in which they were understood by the “neo-Bloomfieldians” who developed the theoretical position to be outlined below in §... The presentation here is intended to represent the picture of morphology that was attributed to Bloomfield and that served as the basis for later theorizing.


breaking complex words down as combinations of basic parts, and Bloomfield pursued this program of analyzing words as structured arrangements of irreducible morphemes. Bloomfield defines the morpheme as “a linguistic form which bears no partial phonetic-semantic resemblance to any other form” (Bloomfield : ), that is, a meaningful component of a word that cannot be analyzed into smaller meaningful sub-parts. This requires that phonetic and semantic resemblances among words are correlated, and the ‘morphemes’ are the units that result when further sub-division would destroy that correlation. This includes not only roots, but also affixes of all kinds; and the lexicon of a language is precisely an inventory of all of the morphemes of all kinds that can be identified. The grammar of the language is largely the set of principles by which these morphemes are arranged into larger constructions. Bloomfield’s definition leads to a variety of problems, including the proper analysis of “phonæsthemes” and some of the phenomena referred to in the literature on ideophones (Kwon and Round ). For example, the set of English words including glow, gleam, glisten, glitter, glimmer, glare, etc. share phonetic material (the initial gl‑) and semantics (‘light emitting from a fixed source’), but the notion that these facts motivate positing a morpheme {gl} with that meaning seems wrong to virtually all linguists. Similarly, pairs of words in Korean such as piŋkɨl/pεŋkɨl ‘twirling of a larger/smaller object’, pipi/pεpε ‘state of bigger/smaller things being entwined’, cilcil/calcal ‘dragging of a heavier/lighter object’, etc. differ systematically in that the difference between high and non-high vowels in the word corresponds to a difference between relatively larger and smaller referents. Again, treating the vowel height dimension as a separable morpheme would be broadly rejected, despite the presence in such cases of a “partial phonetic-semantic resemblance” among the words. 
It would appear that Ferdinand de Saussure’s notion of word structure, grounded directly in relations among forms rather than in their decomposition, could describe such phenomena without particular problems, but phonæsthemes in particular pose a serious issue for the position Bloomfield wished to maintain. In fact, going back to his earlier work (as noted by Matthews ), he enumerates a number of such examples in English which he characterizes (Bloomfield : ) as constituting “a system of root-forming morphemes, of vague signification”, though he does not elaborate on the problems they pose for his definition of the morpheme. Bloomfield’s morphology was a fairly pure example of a lexical theory; indeed, he introduced the term “lexicon” to designate the inventory of a language’s morphemes, the analysis of which constituted the basic task in grammatical analysis. His definition, however, appeared to present a number of technical difficulties, and the attempts of his followers in the Structuralist tradition to identify and deal with these led to slightly different theories within the same general range.5

2.1.3 Classical American structuralism

One problem that Bloomfield’s definition of the morpheme appeared to present was that it relied on an association of specific phonetic material with specific semantics, thereby assuming that morphemes have determinate phonetic content. In many cases, though, we wish to say that we have to do with a single morpheme even when multiple distinct

5 See Stewart, Chapter  this volume, for a general discussion of structuralist morphology.




phonetic forms are involved. For example, surely there is a single plural morpheme to be found in the English words cats, dogs, and horses, but not only are the surface forms [s], [z], and [ɨz] distinct, they are also phonemically distinct in terms of the analyses of sound structure prevalent at the time. Now in fact Bloomfield’s practice quite freely allows such identification: he treats duke and duchess as sharing a single morpheme with two alternants, but his definition does not make clear the basis for such a description. This was addressed directly, initially by Harris (), who proposed to consider the sound side of the morpheme not as a single phonetic (or phonemic) shape, but rather as a set of such shapes, each associated with the same semantics and in a relation of complementary distribution such that (ignoring the issue of free variation) each member of the set is associated with particular environments, where the environments linked to any two members of the set are disjoint. The individual phonological forms are then referred to as the distinct allomorphs of the morpheme. The notion that morphemes are realized by members of a set of allomorphs standing in complementary distribution is obviously quite similar to that of phonemes as realized by phonetic segments that constitute the set of their allophones, also in complementary distribution. Indeed, the American Structuralists of the s and s saw the discovery and analysis of phonemes, minimal contrastive units of sound structure, as probably their most important contribution to the science of language, and plunged with alacrity into a view of morphemes as entirely parallel minimal units of word (and ultimately sentence) structure. 
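Harris’s notion of allomorphs in complementary distribution can be illustrated with the English plural just mentioned. The sketch below is my own simplification (invented names, simplified transcription), not Harris’s formalism: each allomorph is tied to a disjoint phonological environment, so exactly one realization is available in any given context:

```python
# A simplified sketch (not Harris's formalism): the English plural morpheme
# as a set of allomorphs in complementary distribution. The environment sets
# and the function name are invented; transcription is simplified.

SIBILANTS = {"s", "z", "ʃ", "ʒ", "tʃ", "dʒ"}   # trigger the [ɨz] allomorph
VOICELESS = {"p", "t", "k", "f", "θ"}          # trigger the [s] allomorph

def plural_allomorph(stem_final_sound: str) -> str:
    # Each allomorph is linked to a disjoint environment, so the three
    # realizations never compete for the same context.
    if stem_final_sound in SIBILANTS:
        return "ɨz"   # horses
    if stem_final_sound in VOICELESS:
        return "s"    # cats
    return "z"        # dogs (all remaining voiced sounds and vowels)

print(plural_allomorph("t"))   # s  (cats)
print(plural_allomorph("g"))   # z  (dogs)
print(plural_allomorph("s"))   # ɨz (horses)
```

The parallel to the phoneme is visible in the structure itself: one abstract unit (the plural morpheme), a set of conditioned variants, and a partition of the environments among them.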
The view of the morpheme as an analytic unit entirely like the phoneme, but constituting the next level up of abstraction, was developed in various papers throughout the period by linguists such as Bloch () and Nida (), reaching its most explicit formulation in the definitive work of Harris ().

Within traditional grammar (and to some extent in the linguistics of the early years of the twentieth century), the theory of word structure, morphology, was primarily a matter of characterizing the relations among surface words, especially word forms that together constituted a paradigm. In contrast, the American Structuralist theory of morphology was centrally a theory of the morpheme, and that in turn broke down into two aspects: allomorphy, or the characterization of the relations in form among the allomorphs of individual morphemes; and morphotactics, or the characterization of the combinatory principles that group morphemes together into larger units. The resulting view involves a commitment to several basic principles:

• Morphemes are indivisible units of linguistic form linking some component(s) of meaning with a set of mutually exclusive allomorphs that express it, similar in nature to Saussurean minimal signs.
• Each morpheme has a determinate semantic content, and each allomorph has a determinate phonological form.
• Words are composed exhaustively of morphemes.
• Each morpheme in a word is represented by one and only one allomorph; and each allomorph represents one and only one morpheme.

Once such principles are made explicit, it becomes clear that many situations in actual languages are not directly analyzed in such terms. Hockett (; cf. also Anderson b) discusses a number of these, including discontinuous morphological expression, zero


morphs and their counterparts empty morphs, portmanteaux, replacive and subtractive morphs, etc. Somewhat puzzlingly, Hockett seems to treat the identification of these anomalies as constituting their resolution, but in any event, an agenda of potential problems for the structuralist conception of morphological structure had been laid out.

2.2 Morphology in classical generative grammar

With the rise of Generative Grammar6 in the s and s came a rather precipitous decline of interest in morphological issues per se. This was largely a consequence of the absorption of much of the territory of morphology into other aspects of grammar. As phonological theory discarded the limitations imposed by the definition of the phoneme as a unit of surface contrast, and allowed for rather more abstract phonological representations, virtually all of the treatment of allomorphy apart from pure suppletion came to fall within phonology. On the other hand, as substantive theories of syntax emerged, these were considered to govern the distribution of morphemes directly, leaving nothing of consequence for a distinct domain of morphotactics. This latter development was really just a turning on its head of the existing view: where structuralists had imagined that morphotactics, once extended above the level of the word, would provide a framework for syntax, the new theories presumed that syntax extended to domains smaller than the word would account for what had been called morphotactics. As a result of these combined developments, though, there was very little of interest left for a morphological theory to account for.

2.2.1 Early Transformational Grammar

The notion that classical Generative Grammar did not really have a theory of morphology, however, is somewhat oversimplified.7 As a student of Zellig Harris, Chomsky brought with him assumptions about word structure similar to those laid out in Harris , apart from the effort in that work to ground those conceptions in discovery procedures that would lead from a corpus of surface forms to an analysis. In particular, from his earliest significant publication in syntax, Chomsky () assumes that the fundamental units of syntactic analysis (and of the internal form of words) are morphemes, each of which is a link between determinate components of meaning (semantic or grammatical) and a set of their surface phonological instantiations, with explicit references to Harris’s work. Chomsky’s earlier exploration of the morphology of Hebrew (Chomsky  []) led him to maintain a much more complex and abstract notion of the relation between morphemes and their phonological realizations than that of Harris (), but morphemes were nonetheless an unquestioned component of an analysis.

6 Ten Hacken, Chapter  this volume, provides a survey of early generative theory.
7 The discussion of these matters in this section is based on the somewhat fuller treatment in Anderson ().




In the central work of this early period, Chomsky ( [–]) lays out an analysis of the structure of natural language in terms of a number of significant levels, each with its own primitives and systematic connections with adjacent levels. One of these is the level M, where linguistic objects are represented as complexes of morphemes. The elements of M, on the one hand, largely correspond to the terminal elements of Phrase Markers (on the level P). On the other hand, representations on the level M correspond to words (on the level W), ultimately mapped onto phonological form by a system of morphophonemic rules. In this theory, then, morphemes correspond to the objects whose distribution is governed by the syntax. This fact is quite essential, since it is on this that the analysis of English auxiliary sequences by means of the transformation which has come to be known as “Affix Hopping” crucially depends. In that analysis auxiliaries such as Perfect have, Progressive be, etc. are introduced in phrase structure in combination with morphological markers (‑ed ‘Past Participle’, ‑ing ‘Progressive’, etc.). Affix Hopping then permutes these markers with a following verbal element to attach them as suffixes to it, yielding a compact and appealing account of the dependencies within the auxiliary sequence in a way that depends crucially on the ability of syntactic rules to refer directly to morphemes. It is probably no exaggeration to say that this analysis (as it was presented in Chomsky ) was uniquely influential in attracting other linguists of the period to the emerging theory of Transformational Grammar—and to the set of assumptions it relied on. The role of morphemes as syntactic primitives in this view is clear, but it is also compromised to a limited extent in a way that elicited little if any discussion. 
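The mechanics of Affix Hopping can be shown in a toy sketch (my own illustration, not Chomsky’s formalization): each marker introduced alongside an auxiliary is permuted with the following verbal element and attached to it as a suffix. Irregular morphophonemic spell-out (e.g. be + ed surfacing as been) is deliberately ignored here.

```python
# Toy model of the Affix Hopping transformation: each affix marker
# introduced with an auxiliary is moved rightward and suffixed onto
# the verbal element that follows it.

AFFIXES = {"-ed", "-ing"}  # 'Past Participle' and 'Progressive' markers

def affix_hop(sequence):
    """Permute each affix with the next verbal element, suffixing it."""
    out = []
    i = 0
    while i < len(sequence):
        if sequence[i] in AFFIXES and i + 1 < len(sequence):
            # attach the marker to the following verb as a suffix
            out.append(sequence[i + 1] + sequence[i].lstrip("-"))
            i += 2
        else:
            out.append(sequence[i])
            i += 1
    return out

# Underlying order: have -ed be -ing eat
print(affix_hop(["have", "-ed", "be", "-ing", "eat"]))
# -> ['have', 'beed', 'eating'] (morphophonemics would spell 'beed' as 'been')
```

The appeal of the analysis is visible even in this caricature: the dependencies within the auxiliary chain fall out of a single permutation that refers directly to morphemes.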
Chomsky ( [–]: ) notes that some elements on level M play a role in word structure, but not in the syntax: his example is the ‑ess of actress, lioness, mistress, etc. On this basis he distinguishes the set of syntactically relevant morphemes as a level M “embedded into the level [of phrase structure] P”, where the elements of M are a proper subset of the elements of M. The difference between M and the remaining elements of M (e.g. ‑ess) is said to approximate the traditional distinction between inflection and “composition”, thus leaving the door open for a theory of derivational morphology that would not be part of the syntax but rather of word structure, but this possibility was not explored. Syntacticians had more important matters to occupy their attention. Another aspect of morphological theory in this early work, partially taken over from Harris but largely based on his own experience in analyzing Hebrew, was the rather abstract nature of the relation between morphemes and their phonological realizations. While Bloomfield and at least some of the structuralists had seen the allomorphy of individual morphemes as somewhat limited, Chomsky (: , fn. ) explicitly allows for analyses such that “[i]n the morphophonemics of English we shall have rules: wh + he ! /huw/, wh + him ! /huwm/, wh + it ! /wat/”, and the analyses of Hebrew in Chomsky ( []) are even more abstract. There is thus no obvious constraint on the relation between sequences of morphemes and their phonological realization in words. Early generative theory, then, was not without a theory of morphology: a rather rich and substantive theory was in fact taken for granted, though this was based on premises inherited with little examination from earlier structuralist views. The lack of attention to these matters in theoretical discussion was surely a result of the fact that developments in syntax and in phonology were much more dramatic and therefore captured most of the attention. 
But as a result, a version of the structuralist morpheme and its attendant assumptions about the structure of words persisted into linguistic theory without explicit justification.


2.2.2 The Aspects theory

The important reformulation of syntactic theory in Chomsky () also introduced some changes in the treatment of word structure within Generative Grammar, though the assumptions about the internal form of words remain largely the same. One important modification had to do with the place of words in the grammar relative to the syntax. In earlier work such as Chomsky (), words were introduced into sentences directly by rules of phrase structure. In order to capture restrictions on the occurrence of particular words in context (e.g. the fact that know can have a human but not an abstract subject), this required a division among the rules introducing lexical items that corresponds to the division of words into lexical categories and subcategories. Because these categories cross-classify, however, rather than falling into a strict hierarchy, such a set of rules cannot in general be stated in a natural way. To remedy this difficulty, Chomsky proposed that the categorization of lexical items be represented as a set of features they bear, rather than as membership in a phrase structure category; and that the terminal nodes of phrase markers be not lexical items themselves, but rather complex symbols including these same features. An operation of Lexical Insertion then establishes correspondence between words and positions in syntactic structure whose features are compatible with those of the words themselves.

A result of this was a revision in the conception of the lexicon: instead of consisting solely of a listing of the language’s morphemes, as was the case on Bloomfield’s view, the lexicon now lists full words. These were still to be entered in general as complexes of morphemes: destruction was to appear as nom ͡ destroy, for example, where nom is a nominalizing morpheme introduced by a transformation converting verbal expressions into nominals. Nonetheless, the presence of whole words in the lexicon opened other possibilities.
In particular, Chomsky notes that in attempting to deal with limited productivity in some derivational formations, “it may be necessary to extend the theory of the lexicon to permit some ‘internal computation’, in place of simple application of the general lexical rule” (: ). As remarked below in §.., taking this possibility seriously would have important consequences for both syntax and morphology.

For readers today, perhaps the most surprising point found in this work was an argument in the final chapter. Discussing the treatment of inflected forms in German such as (der) Brüder ‘(the) brothers (Gen)’, he compares two possible analyses. One of these regards Brüder as Bruder ͡ DC ͡ Masculine ͡ Plural ͡ Genitive, where each of these elements is regarded as a single morpheme (DC being a kind of “class marker”). The other analysis, which he sees as capturing the paradigmatic treatment characteristic of traditional grammars, treats the word as the lexical item Bruder in association with a complex symbol containing features [+Masculine, +Plural, +Genitive,  DC, . . .], with rules of interpretation operating on the word Bruder in the context of these features so as to yield the surface form. He then provides arguments against the morphemic account, and in favor of the paradigmatic alternative that dispenses with inflectional morphemes:

For one thing, many of these “morphemes” are not phonetically realized and must therefore be regarded, in particular contexts, as zero elements. In each such case a specific context-sensitive rule must be given stating that the morpheme in question is phonetically null. But this extensive set of rules is entirely superfluous and can be omitted under the alternative paradigmatic analysis. ...




More generally, the often suppletive character of inflectional systems, as well as the fact that (as in this example) the effect of the inflectional categories may be partially or even totally internal, causes cumbersome and inelegant formulation of rules when the representations to which they apply are in [the form of morpheme sequences]. However, suppletion and internal modification cause no special difficulty at all in the paradigmatic formulation. Similarly, with morphemic representations, it is necessary to refer to irrelevant morphemes in many of the grammatical rules. . . . But in the paradigmatic representation, these elements, not being part of the terminal string, need not be referred to at all in the rules to which they are not relevant. Finally, notice that the order of morphemes is often quite arbitrary, whereas this arbitrariness is avoided in the paradigmatic treatment, the features being unordered. I know of no compensating advantage for the modern descriptive reanalysis of traditional paradigmatic formulations in terms of morpheme sequences. This seems, therefore, to be an ill-advised theoretical innovation. (Chomsky : f.)

In retrospect, it is possible to see this line of reasoning as following from the picture of morphophonemics presented in Chomsky ( []), but in context it appears somewhat out of the blue. More importantly, perhaps, this approach to inflectional morphology was not followed up either in the syntactic literature or in phonology, where for example Chomsky and Halle () present straightforwardly morphemic accounts of word structure. Perhaps this is attributable to the fact that English, the major focus of work both in syntax and in phonology at the time, is less subject to the difficulties presented by such a representation. Nonetheless, it is quite striking that the arguments in the passage just quoted were not apparently taken seriously in the field until they were rediscovered in later work focused directly on morphological theory.

2.3 The rediscovery of morphology

It is clear that generative linguistics as the field developed in the s and s was not without a theory of morphology, although the assumptions about words and their structures were largely implicit and not the focus of much attention in themselves. This began to change somewhat, however, as morphological issues came to attract attention in themselves, initially in connection with the structure of the lexicon.

2.3.1 Lexicalism in syntactic theory

In the fall of , Chomsky gave a series of lectures at MIT (later published as Chomsky : henceforth, “Remarks”) whose primary focus was on restricting the power of transformational operations in syntax, limiting the semantic adequacy of underlying forms, and in general countering the proposals of the emerging theory of Generative Semantics. For our purposes, the primary interest of the program initiated there lies in its consequences for morphology and the theory of the lexicon.8

8 Montermini, Chapter  this volume, provides a more general survey of the origins and development of Lexicalism.


In previous work, as noted in §., Chomsky had assumed that nominalization constructions were uniformly to be derived in the syntax, with an underlying verbal structure transformed by the introduction of a nominalizing morpheme which served both as the source of the surface morphology and as the trigger for the shift of category membership. In “Remarks”, however, he discusses significant differences between two types of nominalized constructions in English: gerundive nominals such as John’s refusing the offer and derived nominals such as John’s refusal of the offer. He notes a number of ways in which the internal structure of nominals of the latter type reflects basic structural characteristics of NPs, as opposed to the gerundive nominals, whose internal structure is much more like that of sentences. On this basis, he suggests that “we might allow the base rules to accommodate derived nominals directly”. That is, we could allow the relation between Verbs and associated derived nominals to be described not in the syntax, but rather in the part of the grammar that is responsible for providing words to the syntax: the lexicon. On this view, while refusal is of course related to the verb refuse, that relation is established within the lexicon; as far as the syntax is concerned, it is nothing but a noun, and thus naturally appears as the head of structures with all of the characteristics of NPs. This looks at first glance like a rather modest suggestion about the analysis of a single construction in English, but in fact it had much broader ramifications. Picking up on the suggestion in Chomsky  about “extend[ing] the theory of the lexicon to permit some ‘internal computation’ ”, the proposed account of derived nominals introduces a new class of rules into the theory of grammar. Such lexical rules would not be part of the syntax, but something else: an entirely distinct class of rules, rules of word structure. 
There is little said in “Remarks” about the specific form and distinctive properties of such rules, but other writers took up this challenge in more detail (e.g. Jackendoff ; Wasow ; Anderson a) in ways that eventually led to renewed interest in morphology as a distinctive aspect of grammatical theory. The thrust of that work was the recognition of principled differences between (morphological) rules that govern the internal form of words and (syntactic) rules that group words into phrases. This contrast led to a distinctive approach to syntax (and morphology) known as “Lexicalism”, grounded in the distinction between lexical processes relating words to one another and syntactic processes governing phrase markers. Various implementations of the basic underlying principle have been explored, some of which are described in other chapters of the present volume. One particularly restrictive formulation is the Lexical Integrity Hypothesis, according to which the syntax neither manipulates nor has access to the internal form of words. On this view the only way the syntax can affect the form of a word is through manipulation of the complex symbol with which lexical insertion has associated it, while the only aspects of a word’s structure that are accessible to the syntax are those reflected in the featural content of that complex symbol. Some implications of this approach to grammar are discussed by Lieber and Scalise (), a review that also summarizes evidence which they feel might compromise the hypothesis in its strong form.

2.3.2 Generative Morphology comes into its own

The Lexicalist literature brought morphological issues into consideration from the point of view of syntactic theory, but the proposals of Halle () represented a more direct




re-emergence of morphology as a separate area of inquiry in grammatical theory, not simply a sub-field of syntax or phonology. Halle begins from the fairly straightforward neo-Bloomfieldian assumption that the grammar of a language “must include a list of morphemes” including both roots and affixes “as well as rules of word formation or morphology”. He assumes that in principle, the rules for combining morphemes apply freely, thus accounting for all of the productive formations in the language, but he supplements these components of the grammar with a “dictionary”, or list of all of the actual words. Since idiosyncratic information can be associated with entries in the dictionary, this allows for the description of non-compositional aspects of word meaning and form. By serving as a filter on the output of the processes of morpheme composition, the dictionary also provides an account of non-productive formations: all possible combinations of listed morphemes can be built, but only those combinations listed in the dictionary are available for use by the syntax. Given the apparent duplication in this view of information represented by the effects of word formation and information directly encoded in dictionary entries, it was not directly pursued, but Halle’s consideration of whole words (and not only morphemes) as the potential locus of significant information was taken up by Aronoff () in a widely read work that attracted much attention to morphology as a distinctive field of inquiry. Continuing Halle’s assumption of a list of full words in the grammar, the major innovation in Aronoff ’s work was the notion that the “rules of word formation” were not rules combining morphemes from a separate list into larger structures, but rather rules relating (structurally defined classes of) words directly to one another. 
This rests in part on the observation that some classes of words that should be regarded (for phonological purposes) as internally complex cannot be sensibly constructed from independently listed meaningful parts: for example English prefix+stem structures such as deceive, receive, perceive, etc. In addition, some words with idiosyncratic, holistically assigned meanings (e.g. transmission ‘the gearbox that uses gears and gear trains to provide speed and torque conversions from a rotating power source to another device’ [Wikipedia]) should nonetheless be regarded as consisting of multiple parts (/trans+mit+ion/), despite the fact that the combination of those parts is not compositional.

While listing items in the lexicon as whole words, Aronoff retains the assumption that these are structured combinations of morphemes, and that this structure can be referred to by word formation rules. This is apparent in the claim that one of the operations word formation rules can perform is the deletion (“truncation”) of a specific morpheme in conjunction with the addition of another morpheme, as in formations such as nominee from nominate, operable from operate, and others where a morpheme –At is apparently removed from a verbal stem in association with the addition of other elements.

Another strand of thinking with respect to word formation emerged not from the mainstream generative literature, but rather from the effort to incorporate insights from traditional treatments into contemporary thinking. Matthews () discussed the approach of “Word and Paradigm” analysis, a view grounding morphological structure directly in the relations among words within an inflectional paradigm, making no direct and essential reference to internal components such as the structuralist morpheme.
Matthews () develops a theory of inflectional structure more generally on a similar basis, presuming that words have an abstract representation in terms of a set of properties related in regular ways to the phonological realization of surface indicators of those properties, in ways that again do not


rely on the assumption of anything like morphemic structure. Anderson () pursued an approach to inflection converging with this, grounded in the proposals of Chomsky () discussed above in §... This line was further extended to derivation and other types of word formation in Anderson , where the view of morphology as being in general “a-morphous” (in the sense of not being based on morphemes) was argued. Similar approaches to inflection and other areas have been pursued by various scholars. The approach to morphology initiated by Halle and Marantz (), “Distributed Morphology”,9 in contrast, is strongly committed to a morpheme-based view, and to the pre-Aspects conception of morphological combination (morphotactics) as entirely the responsibility of the syntax. On the other hand, this theory also provides for a rich morphological component that takes the result of syntactic formations as its input and manipulates this in a variety of ways prior to phonological realization. As a result, the relation between the output of the syntax and the input to the phonology is effectively unconstrained, and the substance of the claim that the syntax is responsible for forming morphologically complex words is unclear. Further discussion of these matters is beyond the scope of the present article, however. As far as the history of morphology is concerned, an important consequence of the emergence of Distributed Morphology is the fact that substantial numbers of syntacticians have adopted that theory, and with it, an understanding that morphology is indeed something to be taken seriously.

. C:    

By the s and s, as a result of developments including those discussed in §.., morphology had emerged from under the waters of syntax and phonology and once again taken a place as a legitimate—and substantive—domain of inquiry within grammatical theory. This was further confirmed by the appearance of textbooks specifically devoted to morphology, such as Scalise  and Spencer ; dedicated journals such as the Yearbook of Morphology (first published in ; later renamed as Morphology); handbooks such as Spencer and Zwicky ; specialized conferences; and other manifestations.

As with any part of the field in which there is considerable activity, there is also substantial diversity of opinion about the appropriate approach to morphology. A basic division goes all the way back to the difference suggested in the introduction above between the views of the two de Saussure brothers. Some theorists see the analysis of words as basically a matter of the analysis of minimal meaningful pieces of form—morphemes—as René de Saussure did, and the principles for combining these elements as essentially continuous with the regularities of the syntax. Such “syntax of words” theories would include work by scholars such as Selkirk (), Lieber (, ), and Williams (), among others. The alternative on which morphology is described directly as a matter of relations between words not mediated by combination of basic atomic elements like

9 See Siddiqi, Chapter  in the present volume.




morphemes is typified by the proposals of Zwicky (a), Anderson (), Stump (), and others. As already noted in the introduction, Stump () distinguishes theories of the former kind, based on the combination of morphemes or similar elements drawn from an inventory or lexicon, as lexical theories, as opposed to inferential theories, which treat “the associations between a word’s morphosyntactic properties and its morphology” as “expressed by rules or formulas”. He also introduces an orthogonal distinction between incremental theories, on which a word bears a given content property exclusively as a concomitant of a specific formal realization, and realizational theories, on which the presence of a given element of content licenses a specific realization, but does not depend on it. Most lexical theories tend to be incremental in nature, but Distributed Morphology, for example (by virtue of the complexity of the relation between syntactic output and phonological realization that it posits), fits the characteristics of a lexical realizational view. Filling out the typology, Steele () argues for a view that is inferential but incremental in nature, though that approach has not been substantively pursued in the subsequent literature.

More explicit characterization of theories such as those discussed above and others is the business of other chapters in the present Handbook, especially those of Part II, and will not be essayed here. What is reasonably clear is that theoretical inquiry into the principles governing the structure of words and relations among them is a matter that extends back into the earliest periods of the systematic study of language. While that subject matter has at times been subordinated to other areas of grammar, morphology is alive and well as a distinct field of study within contemporary linguistics.

A I am grateful to the editors and two referees for comments. The present chapter is almost entirely devoted to the line of development, primarily in America, that has led to the views on morphology associated with Generative Grammar, generously construed. Other chapters in Part II of the present Handbook present a broader range of theories in more detail, including some (such as “Natural Morphology”, by Gaeta, Chapter  of this volume) that fall largely outside these geographical and theoretical bounds. For some remarks on the nature and development of the notion of “morpheme” in a variety of traditions, see Anderson (b).

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi

Chapter 3

Theoretical issues in word formation

. I:          ?

It makes sense to begin an overview chapter such as this with a ‘big picture’ question: what should a theory of word formation do, and how far has morphological theory gone in the past towards fulfilling that goal? For the most part, this chapter is intended to review a variety of frameworks and specific theoretical concerns that have occupied morphologists over the last six or so decades, but it also seems important to take the opportunity to assess what we have achieved over this period. Not all readers will agree with my assessment, but to the extent that it stimulates discussion about goals, the results can nevertheless be positive.

First, to be clear about my subject matter, I take word formation to refer to derivation and compounding, in other words to those processes that give rise to new lexemes and the semantic categories which those new lexemes instantiate, as opposed to inflection. Derivation can be effected by such formal operations as affixation, conversion, internal stem changes, and various sorts of non-concatenative processes, including reduplication, root and pattern, and subtractive processes.1 The formal processes involved in derivation may of course figure in inflection as well, so it is important to delineate at least some functions and/or semantic categories that are typical of word formation: transposition (that is, change of category such as noun to verb, adjective to noun, and so on with little or no semantic addition beyond what results from the change of category2), creation of agent, patient, and other participant forms, formation of evaluatives and terms denoting collectives, abstracts, negatives and privatives, spatial, relational and temporal forms, various aspectual

1 What counts as non-concatenative is, of course, in part a theory-internal matter. In the work of McCarthy (, ) and Marantz (), reduplication and root and pattern processes are non-concatenative.

2 Generally, the semantic effects of transposition have not been extensively studied, but see Spencer () for some attention to the issue.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi
categories of verbs, and so on. As ten Hacken () shows, it is not always clear where to draw the line between inflection and derivation, and indeed, in the eyes of some theorists, whether to draw such a line at all. Nevertheless, we will proceed on the assumption that the division makes sense.

What then should be the overarching goal of a theory of word formation? From the perspective of a generativist like myself, the obvious answer would be to characterize the workings of the mental lexicon, by which I mean something fairly wide-ranging.3 In the early days of generative morphology, the salient question for the theorist would have been “What are the mechanisms available to a native speaker of a language that allow her to produce and understand a potentially infinite number of new lexemes in that language?” But that characterization has a number of limitations. First, it puts the emphasis on the formal means of creating new lexemes and obscures questions of semantics. More seriously, it implies that the important questions in morphological theory are parallel to those in syntactic theory, namely the generation of forms, without considering whether the parallel is apt. I would argue that it is not, at least not entirely. We now know that there is an important difference between syntax and morphology: whereas storage plays at most a small role in syntactic theory, the question of storage is of serious importance in morphological theory, especially with regard to word formation. When we talk about the workings of the mental lexicon, we must consider not only the generation of forms but also their storage. And with storage, in turn, come issues of lexical access, of frequency effects, and of local analogy that do not arise in syntactic theory.

In §3.2 I will begin by reviewing issues concerning the formal nature of word formation: the relevant units of analysis, the form of morphological rules and processes, and the treatment of non-concatenative word formation.
Section . takes up issues concerning the relationship of morphological theory to syntax and phonology. In §., I will raise the issue of the relationship between theoretical treatments of word formation and accounts of lexical semantics. In §., I will survey a variety of narrower issues that have been prominent in the theory of word formation over the years: bracketing paradoxes, headedness, the question of affix ordering, and derivational paradigms.

3.2 Theory and structure

One of the most basic issues with which the theory of word formation is concerned is the nature of the units that the theory manipulates. If we begin by looking at a complex word such as operationalize in English, a linguistically naïve but literate speaker of English might be inclined to divide this word into several pieces:

()  operate + ion + al + ize

3 This is, of course, not the only possible answer to this question. For example, from the point of view of the onomasiological tradition, the overarching goal driving the theory of word formation is the characterization of the naming process, that is, the delineation of how concepts come to be instantiated in word forms. See Štekauer (a) for an explanation of this perspective.

The separated parts of the word are meaningful units and illustrate examples of what linguists call morphemes, units that are traditionally defined as the smallest meaningful units of which words are composed. Morphological theories that accept the notion of the morpheme as a unit generally also assume that morphemes must be assembled or put together by rules of some sort that create complex forms with hierarchical structure. Such theories have come to be called, following Hockett (), Item-and-Arrangement (IA) theories. The IA paradigm lends itself to looking at word formation as a species of syntactic operation. IA theories of word formation were prominent in generative theory in the s and s, including the work of Lieber (, ), Selkirk (), and Di Sciullo and Williams (), among others. Although these theories differ in detail, they share the basic assumption that morphemes are stored in the mental lexicon and assembled by a set of rules modeled on the phrase structure rules of generative grammar, as illustrated by the rules and structure in ():

() a. Lexical entries for morphemes
       operate  [verb]
       ‑ion  ]verb]noun
       ‑al  ]noun]adj
       ‑ize  ]noun/adj]verb

    b. [V [A [N [V operate] -ion] -al] -ize]
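As a purely illustrative aside, the selectional logic of lexical entries of this kind can be sketched computationally. The sketch below is mine rather than part of any framework cited here; the category labels and the truncation of stem-final e before a vowel-initial suffix are simplifying assumptions supplied for the illustration.

```python
# Illustrative sketch (not from the chapter): an Item-and-Arrangement style
# derivation in which each suffix is a lexical entry carrying a
# subcategorization frame, and attachment succeeds only if the frame
# matches the category of the base.
SUFFIXES = {
    "ion": ("verb", "noun"),   # ]verb ]noun
    "al":  ("noun", "adj"),    # ]noun ]adj
    "ize": ("adj",  "verb"),   # simplified: -ize shown attaching to adjectives
}

def attach(base, cat, suffix):
    selects, result = SUFFIXES[suffix]
    if cat != selects:
        raise ValueError(f"-{suffix} does not attach to a {cat}")
    if base.endswith("e") and suffix[0] in "aeiou":
        base = base[:-1]       # assumed orthographic adjustment: operate + ion -> operation
    return base + suffix, result

word, cat = "operate", "verb"
for suffix in ("ion", "al", "ize"):
    word, cat = attach(word, cat, suffix)
# word is now "operationalize", cat is "verb"
```

Each successful attachment corresponds to one layer of the hierarchical structure assumed by IA theories.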

Pretheoretically, the notion of ‘morpheme’ seems relatively straightforward, especially in view of the simple example in (), but discussions of the concept going back to the s in the work of Bloch (), Hockett (), Nida (), and Harris (), as well as much subsequent work, including Anderson (, ), Beard (), and Aronoff (), among many others, have shown that it is not. The IA paradigm is easily applied to strictly agglutinative word formation like that exemplified in (), but rapidly becomes problematic when we look at derivation that involves internal stem change (a), reduplication (b), subtraction (c), or root and pattern relationships (d):

()  a. Internal stem change (Anywa, from Dimmendaal : )
        Root     Pluractional stem
        cac-     caaɲɲ-    ‘look for’
        cak-     caaŋŋ-    ‘name’
    b. Reduplication (Mokilese examples from Inkelas , taken from Blevins : )
        pↄdok    pↄd‑pↄdok    ‘plant/planting’
        soorↄk   soo‑soorↄk   ‘tear/tearing’

    c. Subtraction (Koasati, from Martin : –)
        Singular      Plural
        pitáf-fi-n    pít-li-n    ‘to slice up the middle’
        misíp-li-n    mís-li-n    ‘to wink’
    d. Root and pattern4 (Arabic, from Davis and Tsujimura : )
        Verb     Template   Gloss                               Passive
        qatal    CVCVC      ‘kill’                              qutil
        qattal   CVCCVC     ‘massacre (intensive)’              quttil
        qaatal   CVVCVC     ‘battle one another (reciprocal)’   quutil

In examples like those in (), there are no obvious ‘pieces’ or ‘things’ that can be isolated to which we could assign a meaning like ‘pluractional’, ‘plural’, ‘nominalizer’, ‘intensive’, or ‘reciprocal’. Even in a language like English that overall lends itself to an IA treatment, there are a number of aspects of word formation that challenge the classic definition of the ‘morpheme’. For example, English is rife with words of Latinate origin like report, deport, import, export, purport, transport, or report, remit, receive, regress, resist, refer, which many native speakers perceive to be complex. That is, many speakers can segment them into two pieces (that is, re + port, de + port, etc.), although it is clear that the pieces exhibit no apparent constant meaning (see also operate in (), which could be analyzed by some as oper-ate). Similarly, in English we often find what Bauer, Lieber, and Plag (: ch. ) call ‘extenders’, by which they mean meaningless elements that appear under various circumstances between a base and an affix, for example, the ‑n- in Plato-n-ic or tobacco-n-ist. These too pose problems for the classic definition of morpheme.

One possible alternative to postulating morphemes as the minimal meaningful units of which words are constituted is to view complex words as indivisible, but related to each other and to their bases via formal operations which are more akin to phonological than to syntactic rules.
This general framework has been called an Item-and-Process (IP) approach (Hockett ),5 and has figured prominently in the work of Aronoff () and Anderson (, ), among many others. Returning to our initial example, a form like operationalize would be constructed by a sequence of operations of the following sort: ()

    XVerb → XionNoun          operate → operation
    XNoun → XalAdj            operation → operational
    XNoun/Adj → XizeVerb      operational → operationalize

4 Root and pattern word formation is alternatively referred to as ‘templatic’ word formation, a term which I avoid here as it invites confusion with a sort of word formation found in highly polysynthetic languages in which affixes are ordered according to a series of slots or ‘templates’. See, for example, Vajda () for a treatment of the Yeniseian language Ket in terms of templates in this sense.

5 Hockett () outlines a third framework, called Word and Paradigm (WP), which I will not discuss here, as it is more pertinent to theories of inflection than to theories of word formation; this model is especially prominent in the work of Matthews () and Anderson (). See also Blevins, Ackerman, and Malouf (Chapter  this volume) and Stump (Chapter  this volume).

In their strictest forms IP theories claim that derived words have no internal structure, so that operation, operational, and operationalize are internally no more complex than operate, a point to which we will return below. IP theories have the advantage that they are more easily able to account for the sorts of word formation illustrated in (). Whereas it is prima facie difficult to isolate anything like morphemes in cases of internal stem change, reduplication, root and pattern word formation, and subtraction, all of these can be easily modeled as processes. For example, the pluractional stem formation rule in Anywa illustrated in (a) can be stated as in ():

()  CVC → CVVNN, where N stands for a nasal consonant with the same point of articulation as the final C of the base.

Rules in IP frameworks can manipulate distinctive features for internal changes, and repeat or delete segments in cases of reduplication, root and pattern, or subtractive morphology. However, this is not to say that IA frameworks are in principle incapable of accounting for so-called non-concatenative processes such as those illustrated in (). Indeed, a prominent line of research beginning with the work of McCarthy (, ) has developed the idea that morphemes can be units consisting of elements both above and below the level of the individual phonological segment. In cases of internal vowel or consonant change, morphemes can consist of individual distinctive features that get associated with a base. Lieber (, , ) treats both consonant mutation and umlaut in such terms. For reduplication, morphemes can consist of timing units (C and V stripped of other distinctive features) (Marantz ) or prosodic constituents (McCarthy ) that are then associated with segmental material from the base, and subtractive morphology can consist of association of the segmental material of a base to a prosodic constituent smaller than that of the original (see, e.g., Lappe , on hypocoristics in English). In other words, this line of research argues that the units involved in so-called non-concatenative word formation may not look like classic morphemes, but that they can nevertheless be characterized as minimal meaningful units. The interesting thing about this long-standing theoretical debate is that in the end IA and IP treatments of word formation (at least these early ones) do not end up too far from each other. 
IP claims to treat both concatenative and non-concatenative word formation in the same way are matched by IA claims that the same can be true in a framework that embraces the notion of the morpheme, as long as the morpheme can be composed of both sub-segmental units such as distinctive features and supra-segmental units such as timing units, syllables, or other prosodic constituents. One of the very clear differences between IP and IA theories was, however, alluded to above, namely that derived words in IP frameworks explicitly lack internal structure. The prediction is that rules of word formation should never have to refer to the internal structure of their bases. This claim has been disputed, however, in the work of Carstairs-McCarthy () and Booij (b, a, ). Bauer, Lieber, and Plag () also point out facts from English that call the prediction into question, namely that there are affixes in English (‑ancy/‑ency, ‑ine, ‑enV, ‑let, ‑ster) that never attach to complex bases, and that therefore must have some way of ‘seeing’ whether a word is simple or complex. The preceding characterization of theories of word formation is somewhat simplistic, however. In assessing theories of inflection,

Stump (: –) outlines a taxonomy that involves two intersecting axes that together define four sorts of theory (see also Stump, Chapter  this volume). With respect to one axis, frameworks can be either lexical or inferential. Lexical theories assume that inflections like English plural ‑s are morphemes that have their own entries in the mental lexicon, just as lexemes do, whereas in inferential models inflections are introduced by rules and have no lexical entries of their own. This is essentially the distinction I outlined above between IA and IP theories. Orthogonal to this distinction is Stump’s distinction between incremental and realizational frameworks. In the former, “words acquire morphosyntactic properties only as a concomitant of acquiring the inflectional exponents of those properties” (: ); in other words, the rule or morpheme that introduces ‑s also introduces the morphosyntactic feature [+plural]. In the latter, a base is associated with a set of morphosyntactic properties, in this case [+plural], and this in turn allows the association of that base with a formal exponent of those morphosyntactic properties, in English typically the exponent ‑s, but in certain cases ‑en, ‑ren, etc. Stump’s insight is that the lexical/inferential and incremental/realizational distinctions allow us to characterize four separate types of framework, each of which is instantiated in specific theories of inflection:

a. Lexical/Incremental: classic IA theories like those of Lieber (, ), Selkirk (), etc.
b. Lexical/Realizational: Distributed Morphology (Halle and Marantz ; Siddiqi, Chapter  this volume).
c. Inferential/Incremental: Articulated Morphology (Steele ).
d. Inferential/Realizational: Word and Paradigm Morphology (Anderson ; Blevins, Ackerman, and Malouf, Chapter  this volume), Paradigm Function Morphology (Stump , and Chapter  this volume).

What is of interest to us here is not the way that these combinations play out in the realm of inflection—Stump argues that inflection is best served by theories that are inferential and realizational—but how (or if) they play out in terms of theoretical models of word formation. The IA versus IP discussion in essence focuses on the first of these axes (Lexical versus Inferential). Interestingly, there has been somewhat less explicit discussion in the literature of how the Incremental versus Realizational distinction plays out in terms of theories of derivation. It is to this point that I turn now.

Let us take as our starting point a highly productive English affix like the ‑er that occurs in forms like writer, freighter, computer, villager. In an Incremental theory, we would either have an affix with an entry in the lexicon or a rule that introduces Xer. Either would also indicate both the category of the base and the categorial and semantic result of adding the affix to the specified base. Both classical IA and IP frameworks are incremental in this sense, the Lexical version characterizing IA, the Inferential version characterizing IP. In the Realizational models, the ‑er affix would again either have a lexical entry or be introduced by a rule of some sort, but what would differ is the information available in the entry or rule; specifically, the entry or rule would lack the derivational equivalent of ‘morphosyntactic’ properties. Rather, such properties would be a function of either the tree into which the affix is inserted in the Lexical version (for example in the framework of Distributed Morphology (Siddiqi, Chapter  this volume)) or the rule that effects the creation of specific word forms in the Inferential version (e.g. Lexeme Morpheme Base Morphology).
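The contrast can be made concrete with a toy sketch of my own (not any cited formalism): in the incremental version the ‑er entry pairs form and content directly, while in the realizational version an abstract property such as AGENT is spelled out by a separate step, with the choice among competing exponents treated here as simple conventionalization.

```python
# Toy contrast (illustrative only) between incremental and realizational
# treatments of English agentive nominals.

# Incremental: the affix entry itself pairs form with content, so adding
# the exponent -er simultaneously adds the AGENT meaning.
INCREMENTAL_ER = {"form": "er", "attaches_to": "verb", "means": "AGENT"}

def derive_incremental(base):
    return spell(base, INCREMENTAL_ER["form"]), INCREMENTAL_ER["means"]

# Realizational: the base is first associated with an abstract property
# (here AGENT); a separate spell-out step then chooses an exponent for it.
# Which of the competing exponents wins for a given base is modeled here
# as bare conventionalization (cf. -er vs -ant vs -ist in the text).
EXPONENTS = {"AGENT": ["er", "ant", "ist"]}
CONVENTIONALIZED = {"write": "er", "defend": "ant", "art": "ist"}

def realize(base, prop):
    exponent = CONVENTIONALIZED.get(base, EXPONENTS[prop][0])
    return spell(base, exponent)

def spell(base, suffix):
    # Crude assumed orthographic adjustment: drop stem-final e before a vowel.
    if base.endswith("e") and suffix[0] in "aeiou":
        base = base[:-1]
    return base + suffix

# realize("write", "AGENT") -> "writer"; realize("defend", "AGENT") -> "defendant"
```

The two questions raised below for realizational treatments correspond to the two stipulated tables in the sketch: what licenses the choice among EXPONENTS, and what the inventory of properties like AGENT should be.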

There are two issues that arise in realizational treatments of word formation that do not arise in incremental treatments. The first is how (or whether) to determine which of a number of potential affixes gets inserted into the tree or added by the word formation rule. Whereas for any given base there is generally a single exponent for any given morphosyntactic constellation of features in inflectional paradigms, there are often a number of competing exponents in derivational categories. For example, alongside ‑er, English has at least ‑ant and ‑ist as well. Bases are generally conventionalized with one or another of these affixes, but need not be.6 A realizational model must have a plausible story about what determines the insertion of any given exponent into a particular tree. Perhaps more vexing, however, is the question of what the derivational equivalent of ‘morphosyntactic’ features would be, that is, what kind of semantic representation the lexical affix is associated with in the tree or rule. Realizational theories of word formation by definition adhere to what Beard () has called the Separation Hypothesis, under which the morphosyntactic or semantic properties of words are introduced separately from their phonological exponents. But adherence to the Separation Hypothesis makes it imperative that we be able to characterize what those morphosyntactic or semantic properties are. There is some consensus about which basic morphosyntactic features are relevant to inflection (features of number, tense, aspect, person, etc.), but to date there is little systematic discussion of what range of distinctions is relevant in word formation and how those distinctions are to be represented.

In Lexeme Morpheme Base Morphology, Beard (: ch. ) assumes that the same set of morphosyntactic features that figure in case-marking would also be available for characterizing derivation, so that one morphological rule in English would introduce a feature like AGENT which would eventually be matched for any given base with ‑er or ‑ant. But Beard does not discuss the full range of derivational possibilities, such as derived verbs and adjectives of various sorts, evaluative morphology, or quantitative or relational affixation. The actual execution of such a featural system for derivation is therefore left to the imagination (see Stump, Chapter  this volume). Distributed Morphology (henceforth DM) also does not fare particularly well in this regard, first because there is relatively little discussion of derivation in DM in the literature, but more importantly because there is almost no explicit development of the semantic or ‘morphosyntactic’ properties of derivational word formation. Where derivation is discussed at all, affixes are inserted under functional projections of various sorts or “flavors”, as Harley (: ) puts it. She points out in passing that functional heads that come to dominate affixes will have to be specified for semantic information, but characterizes this information only informally: verbal functional heads, for example, can come in varieties like CAUSE, BECOME, and DO, and adjectival functional heads in varieties like ‘characterized by’, ‘able to’, or ‘like’. The nature of the functional projections that would be needed to characterize the breadth of the derivational system of any language has yet to be discussed systematically. As Baeskow (a, b) points out, where derivational morphology is discussed at all (e.g. Volpe ; Siddiqi ), the subject tends to be transpositional morphology, in which the semantic contribution of the affix (beyond the category change) is arguably small; DM fares less well when faced with semantically robust category-changing affixes like English ‑er, and even worse with non-category-changing affixes like English ‑dom, ‑hood, and ‑ship.

Where does this leave us? One conclusion that might be drawn is that both lexical and inferential approaches to word formation have their strengths. Although lexical analyses are more easily applied to concatenative and especially agglutinative word formation and inferential analyses to non-concatenative processes, either approach can be adapted to either sort of formal process. The choice between incremental and realizational approaches, however, hinges on a subject that has received fairly little attention in the literature, namely the semantic side of word formation. Indeed, one of the few attempts to provide a sustained and thorough account of the lexical semantics of word formation is found in the work of Lieber (, ). I will return in §3.4 to the place of lexical semantics in the study of word formation. The issue of how a semantic analysis of word formation is to be paired with the formal analysis has yet to be broached systematically, although see Lieber (a) for a preliminary discussion of the problem.

6 At least this is the commonsense understanding. We will return to this issue in the discussion of blocking below.

. C:        

Theoretical issues that confront the study of word formation of course go beyond the nature of the units and the formal means by which complex words are to be generated. Morphology cannot be studied apart from other aspects of language like syntax, phonology, semantics, and pragmatics. Discussion of the interface of morphology with these components has largely centered on syntax and phonology, so these are the ones I will focus on here.

3.3.1 The interface with syntax

The historical development of morphology as a distinct field within generative grammar is a subject that has been well covered in the literature (see, e.g., Spencer ; Štekauer a; Scalise and Guevara ; Lieber and Scalise ; Anderson, Chapter  this volume; ten Hacken, Chapter  this volume; Montermini, Chapter  this volume), so I will not review it in detail here. Rather, I offer a brief synopsis in order to draw out the central issues that are at stake in this debate. Most theorists would agree that the critical moment in this history was the publication of Chomsky’s () ‘Remarks on Nominalization’. Before this watershed article, to the extent that morphology was studied at all among generativists, it was treated as a sub-field of syntax, with what we would now recognize as matters of word formation—nominalizations, compounding—analyzed as the end result of transformational rules: Lees’ () The Grammar of English Nominalizations is a prime example of such work, but Zimmer () and Brekle () might also be mentioned as examples of work in this vein. Whether or not Chomsky intended it this way, ‘Remarks’ was in effect taken as a manifesto for linguists to pay attention to morphology on its own terms. One way of doing this was to propose a distinct component for morphology apart from syntax; in the theoretical terms of the day, morphologists sought to claim their own turf. The formal secession of morphology from

syntax was codified in various principles that can be grouped together under the rubric of the Lexical Integrity Hypothesis: among these are the Generalized Lexicalist Hypothesis (Lapointe ); the Word Structure Autonomy Condition (Selkirk ); the Atomicity Thesis (Di Sciullo and Williams ), all of which differ in detail, but not in spirit. The essential issue in the lexicalist debate was the extent to which rules of morphology and rules of syntax can interact with each other. In frameworks that adhere to the strictest form of the Lexical Integrity Hypothesis, rules of morphology cannot manipulate or ‘see’ elements of structure above the level of the word and rules of syntax cannot manipulate or ‘see’ elements below the level of the word. In weaker versions of the Lexical Integrity Hypothesis, word formation is treated as the domain of the lexicon, separated from the syntactic component, although inflection is treated as a part of syntax (Anderson ). Adherence to any form of the Lexical Integrity Hypothesis, of course, does not prevent us from postulating morphological rules that mimic or look like syntactic rules, as long as they are not syntactic rules per se. Theoretical frameworks like those of Lieber () or Selkirk (), or treatments of compounding like those found in Roeper and Siegel (), are Lexicalist in the sense that syntactic rules and morphological rules do not interact, but in terms of the formal nature of morphological rules themselves they all take their cue from the syntactic theories of their day. Lieber () and Selkirk () propose various versions of phrase structure-like rules to form complex words, and Roeper and Siegel () propose a lexical transformation to account for synthetic compounds. By the mid s, however, it was already quite apparent that there were types of data that called into question even the weaker versions of the Lexical Integrity Hypothesis. 
That is, attention began to be drawn to phenomena like so-called phrasal compounds (), phrasal affixation (), or affixal conjunction () that at least prima facie suggest some interaction of syntax and morphology:

()  Phrasal compounds in English (examples from COCA7)
    this-person-is-a-jerk attitude
    shouted-into-ears conversation
    dad-needs-a-sportscar syndrome

()  Phrasal affixation in English (examples from COCA)
    one-step-behind-hood
    up-your-butt-ish
    I-can-do-it-too-ness

()  Affixal conjunction (examples from Spencer d: )
    pro- as opposed to anti-war
    hypo- but not hyper-glycemic

Phenomena such as these are not unique to English of course: Savini (), Toman (), Hoeksema (), Lieber (, ), and Lieber and Scalise () offer similar examples from Afrikaans, Dutch, German, Italian, Spanish, and Japanese.

7 COCA is the Corpus of Contemporary American English (corpus.byu.edu/coca).

Three sorts of responses have arisen to such empirical challenges to the Lexical Integrity Hypothesis. The first involves arguments to the effect that phrasal compounding and phrasal affixation are only an apparent challenge to the Lexical Integrity Hypothesis because the empirical evidence is unconvincing. Analyses like those in Bresnan and Mchombo () and Wiese () try to argue that the phrasal element in such formations is strictly limited to lexicalized units or quotative contexts, and therefore that real access to syntactic operations is not necessary for morphological processes; to the extent that phrasal material appears in complex words, the phrases involved can be assumed to have some representation in the lexicon. But the corpus data in () and () suggest that this cannot be correct. The choice of phrasal element in such forms is apparently quite free. For those who accept that the data pose a real challenge, two responses are possible. On the one hand are theoretical frameworks that return to the pre-‘Remarks on Nominalization’ assumption that word formation is syntax: among these are Lieber (), Halle and Marantz () and DM in general, and most recently Borer (). On the other hand are frameworks that allow for limited interaction between morphology and syntax, among them Lieber and Scalise () and Booij (a). The question that we need to consider is once again what is really at stake in this debate. Early on, the Lexical Integrity Hypothesis amounted largely to a claim to legitimacy for the study of word formation within generative grammar. But the issue of legitimacy is no longer in question. In syntax-only theories, the appeal has always been to theoretical elegance: why postulate different sets of rules when one system might work for both morphology and syntax? 
But there is more at stake in the debate now than theoretical elegance: the real question is the extent to which words behave differently from phrases and sentences.8 We should ask whether the distinction between words on the one hand and phrases and sentences on the other is categorical, gradient, or non-existent. The sort of data in ()–() suggest that there is no strict categorical distinction, but I will argue that the choice between the latter two options cannot be based entirely on formal grounds or theoretical elegance, but might rather hinge on psycholinguistic issues such as the relative balance of rules and storage, the effects of frequency and analogy, and so on. We will return to these issues below.

3.3.2 The interface with phonology

At about the same time that morphology was establishing itself as a legitimate subject for study in its own right, a separate thread of theoretical development began to explore the relationship between morphology and phonology. Arising from the treatment of phonological and morphological boundaries in Chomsky and Halle (), the framework of Lexical Phonology and Morphology looked at the interaction of rules of phonology and morphology. Again, the history of this development is available in such works as Spencer (), Kaisse and Hargus (), Giegerich (), and Kaisse (), among others, so I will only briefly outline the main issues here.

Borer (: ) also appeals to theoretical rigor as an advantage of syntax-only accounts, although without comparing them to any specific lexicalist analysis. 8

Chomsky and Halle () note that different derivational affixes in English exhibit different phonological behavior. Some affixes are associated with changing the stress pattern or segmental makeup of the bases to which they attach (or both), while others have no effect on either the stress pattern or segmental makeup of their bases; the former are associated with a + boundary in the theory of Chomsky and Halle () and the latter with a # boundary. The boundary distinction of Chomsky and Halle () is, however, not particularly explanatory: the real issues are why these affixes display different behavior, how this behavior is to be modeled in the grammar, and what sorts of predictions can be derived from the model. The framework of Lexical Phonology and Morphology, developed in the work of Siegel (), Allen (), Kiparsky (b, ), Selkirk (), Rubach (), Halle and Mohanan (), and Mohanan (), among many others, is an effort to answer these questions. Details differ from one theorist to the next, but the overarching idea behind the framework is largely the same, namely that rules of morphology are to be divided into levels or strata, with each level associated with a specific block of phonological rules. Within each level we normally expect to find cyclic application of affixes and phonological rules where their environments are met, but once we leave a particular level we are in principle unable to revisit it. In English, Level 1 morphology and phonology contains roughly the + boundary affixes of Chomsky and Halle () as well as irregularly inflected forms and phonological rules that assign stress and make segmental changes of various sorts to bases. Level 2 contains the # boundary affixes and other phonological rules, but crucially not the phonological rules of Level 1, so that stress, once fixed at the earlier level, cannot be affected by further affixation at the later level.
The placement of compounding and regular inflection differs from one model to the next, as does the number of levels proposed for any given language, but in general compounding and regular inflection follow derivational affixation. The theory of Lexical Phonology and Morphology makes clear predictions that Level n morphology in any given language should not be found outside Level n+ morphology, and generally that derivational affixation should not be found outside compounds. Similarly, regular inflection should not be found inside either derived words or compounds. These predictions turn out to be problematic for a number of reasons, however. Gussmann (), for example, points to the cases where Level n affixes indeed do appear outside Level n+ affixes, and as a result to the necessity for allowing loops between levels. He also points out that Lexical Phonology and Morphology has no explanation for the observation that even within a single layer there can be restrictions on the ordering of various affixes with respect to one another. Most tellingly, he identifies the general problem that treatments of Lexical Phonology and Morphology tend to weigh in far more heavily on the discussion of phonology than on the discussion of morphology. With regard to English, only a small subset of morphological phenomena is ever treated in any given work, and where specific affixes are explicitly assigned to Level  or Level , which affixes are assigned to which level seems to differ from one work to the next (compare Selkirk ; Kiparsky b; and Spencer , e.g.); no work in this tradition gives a systematic or comprehensive overview of morphology. What does emerge from the tradition of Lexical Phonology and Morphology, however, is the need to look more carefully at the issue of affix ordering, a subject to which we will return in §... 
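The central ordering prediction of level-ordering can be rendered as a toy check. The level assignments below are a simplified stand-in, with conventional Level 1 / Level 2 assignments assumed for a handful of English suffixes purely for illustration; they are not the chapter's own analysis:

```python
# Hypothetical, simplified level assignments for a few English suffixes.
# Level 1 corresponds roughly to + boundary affixes, Level 2 to # boundary.
LEVEL = {"-ity": 1, "-al": 1, "-ic": 1, "-ness": 2, "-less": 2, "-er": 2}

def well_ordered(suffixes):
    """True if, reading from the innermost suffix outward, the level never decreases."""
    levels = [LEVEL[s] for s in suffixes]
    return all(inner <= outer for inner, outer in zip(levels, levels[1:]))

print(well_ordered(["-ic", "-ness"]))   # True: Level 1 inside Level 2 is allowed
print(well_ordered(["-less", "-ity"]))  # False: Level 1 outside Level 2 is excluded
```

The check encodes only the one-way valve between strata: once derivation has moved to a later level, earlier-level suffixation is predicted to be unavailable, which is exactly the prediction that Gussmann's loop cases call into question.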
Interestingly, while the issue of what governs the ordering of affixes has continued as an important matter of theoretical discussion since the heyday of Lexical Phonology and Morphology, it has largely been separated from the issue of the ways in which morphology can trigger phonological rules of various sorts. This line has received somewhat less attention, although it has been pursued in the work of Inkelas () and Inkelas and Orgun (). They argue for what Inkelas and Orgun (: ) call 'Co-Phonologies', namely "the phonological mapping associated with a given morphological construction". More recently, Bauer, Lieber, and Plag (: ch. ), in a detailed treatment of the morphology of English, come to the conclusion that "each derivational category comes with its own set of phonological restrictions and properties, and that the stratal division of the lexicon is at best a gradient in psycholinguistic or statistical terms (Hay ; Plag and Baayen ) and at worst non-existent in structural terms . . . " (Bauer, Lieber, and Plag : ). Indeed, they give extensive support to the idea that each affix in a language can potentially be subject to a different array of phonological behaviors.9

3.4 Morphological theory and lexical semantics

While issues of the formal nature of rules and the relationship between morphology and other components of grammar have received extensive attention over the last three decades, issues pertaining to the semantics of word formation have been given far less attention. In this section I will outline some of the theoretical issues that arise in the study of the semantics of word formation, looking first at the semantics of derivational affixes and then at compounds.

9 Bauer, Lieber, and Plag () prefer to characterize this behavior in terms of output conditions on affixed words rather than on the linear application of phonological rules.

3.4.1 Derivation

Until relatively recently, little attention has been paid to the systematic modeling of the semantics of derivation, and what has been done has concentrated on the semantics of affixation. There is no doubt much to be said about the semantics of non-concatenative processes, but I will make the assumption here that the semantic properties of such formal processes are not in principle different from those of concatenative processes, and therefore will confine the discussion to the semantics of affixation. Lieber () suggests that there are a number of theoretical issues that must be addressed in a theory of the semantics of affixation. First and foremost is the need to have a system of representation that is broadly cross-categorial, in the sense that it can account equally well for the semantic properties of nouns, verbs, adjectives, and adverbs, in other words, all open lexical categories. Second, it must be able to characterize the semantics not only of category-changing derivation but also of non-category-changing derivation. And finally, it must account for two important properties that appear frequently in derivation. First, affixes are often polysemous, in the sense that a given affix can display a number of more or less related meanings: the classic example is that of English ‑er, which can form not only agent and instrument nouns (driver, computer), but also stimulus (thriller), location (diner), quantity (fiver), means (stroller), and even patient (loaner) nouns. Second, languages frequently have several affixes that display the same meaning, and indeed the same range of polysemy (e.g. ‑ize and ‑ify in English). The relationship between form and meaning in affixation is often not one-to-one. The question, then, is how to characterize the meanings that can be conveyed by affixes or other word formation processes such that they explain these properties.

It has been common in the literature on morphology to assume that the meaning of lexical items is divided between a relatively more formal part that is relevant to syntax, variously called event structure (Rappaport Hovav and Levin , ), grammatical semantic structure (Mohanan and Mohanan ), argument structure (Pustejovsky ; Baeskow a), or the semantic skeleton (Lieber ), and a less formal part that contains those encyclopedic elements of meaning that are of no syntactic relevance. These aspects of meaning have been called the constant (Rappaport Hovav and Levin), the encyclopedia (Harley and Noyer ), or the semantic body (Lieber ). Most theorists construct the formal part of the lexical semantic representation as a hierarchically arranged system of functions and their arguments. Where theoretical treatments differ is in the explicitness with which the functions themselves are articulated. Lieber () provides the most detail in this regard, proposing a number of binary and privative features that can characterize lexical semantic skeletons as well as a principle of coindexation by which the arguments of affixal skeletons can be integrated with those of their bases. Affixal polysemy arises in this system by virtue of the relatively underspecified semantics of affixal skeletons as well as from their general lack of encyclopedic content. The existence of multiple affixes with the same function follows from the small semantic space circumscribed by the featural system.
Baeskow (a, b) begins to develop a similar system, but makes use of a somewhat different set of features in her formal representations of affixal meanings; she adds as well the notion of prototypes and of qualia structure (Barsalou ; Pustejovsky ) to account for affixal polysemy. A somewhat different tack is taken in what Baeskow (a, b) refers to as Neo-Constructionist models, by which she means both DM and Borer’s (a, b) Exo-skeletal model of derivation. In Neo-Constructionist models the formal aspects of meaning are presumably represented by the functional projections of the syntactic tree, with encyclopedic elements of meaning being relegated to so-called Vocabulary Items. As mentioned in §.. above, this is an aspect of DM that has been left largely underdeveloped, however, so it is as yet unclear to what extent DM will be capable of treating the semantics of word formation adequately. Borer’s () analysis of argument structure versus referential nominalizations includes explicit discussion of the functional projections that determine the interpretion of those nominalizations, but does not yet offer an overall account of the functional projections that would be necessary to characterize a wider range of derivational affixes.

3.4.2 Compounding

Any treatment of compound semantics needs to account for a number of generalizations:

• Headedness: compounds are frequently (although apparently not always) headed in the sense that one of the compound constituents determines both the syntactic category and the semantic type of the resulting compound.

• Some compounds receive interpretations in which one constituent is construed as an argument of the other. In the tradition of English morphology, these are typically referred to as synthetic compounds, but as this term is not entirely applicable cross-linguistically, I will use the term argumental compounds instead (see also Bauer, Lieber, and Plag ).

• In non-argumental compounds the semantic relationship between the first and second compound constituent is largely unpredictable.

It would be impossible in the small space of this section to do justice to the extensive literature on the interpretation of compounds, so I will focus here on two issues that have occupied the attention of theorists in the last several decades. An excellent general overview of the topic can be found in ten Hacken ().

The first issue is the extent to which the relationship between the head and non-head constituents of non-argumental compounds is determinate and therefore a matter to be modeled in our grammar. In early treatments of compounding, for example Lees () and Levi (), the attempt was made to treat the semantics of non-argumental compounds as fully determinate, and specifically to characterize compound meaning using a fixed list of semantic functions like  ,  ,  ,  , and so on. There are several problems with such treatments. First, it is impossible to arrive at a constrained, finite list of functions, as a potentially infinite number of relationships between compound constituents can be found. Second, for novel compounds at least, it is impossible to determine in advance which of a number of potential interpretations will be associated with the compound; that is, we have no way of knowing whether the function involved in the derivation of the compound rabbit blanket would be 'used by' or 'made of' or any other plausible predicate.
Finally, in analyses in which the semantic function determining the relationship between compound constituents is deleted by transformation, the rules involved must be unappealingly powerful. The majority of theorists have therefore taken the position that there is no grammatical or semantic rule that generates compound interpretation in non-argumental compounds. An exception in this regard is Jackendoff (a), which is a relatively recent attempt to treat compound semantics as the result of a number of semantic schemas; Jackendoff's Parallel Architecture account is far more successful than early attempts at modeling the semantics of non-argumental compounds, however, in that his theory allows for the free generation of new schemas. Another approach is that of Štekauer (b), who examines in an experimental study the probability with which speakers associate a novel compound with one meaning or another.

The second matter that has received repeated attention over the years is the mechanism by which the non-head of an argumental compound receives its interpretation. The claim is typically made that the non-head constituent in argumental compounds is generally to be interpreted as the internal argument of the verb from which the head constituent is derived (e.g. truck driver), or possibly as an adjunct (pan frying), but never as an external argument (i.e. child eating is only possible if it involves cannibalism). The earliest accounts of argumental compounds are purely syntactic (Lees ). Roeper and Siegel () provide the first treatment in the lexicalist tradition, proposing a so-called lexical transformation adhering to what they name the First Sister Principle, their way of encoding the restriction on the interpretation of the non-head constituent of argumental compounds to non-external arguments. Selkirk's () First Order Projection Condition and Lieber's () Argument Linking Principle do largely the same work in frameworks that do not allow lexical transformations. A return to purely syntactic analyses of argumental compounds can be found in the neo-transformational analyses of Roeper () and Lieber (), following the work of Baker () on noun incorporation, as well as more recently in the DM account of Harley (). Interestingly, over this period, although the dispute over the formal mechanism that gives rise to the interpretation of non-argumental compounds has been vigorous, the range of facts on which these analyses have been based has largely remained the same. It is only with the corpus-based work of recent years that a wider range of facts has been noted that potentially calls these treatments into question. That is, it appears that, contrary to received wisdom, quite a few argumental compounds permit a reading in which the non-head can be interpreted as the external argument of the verbal base of the head. Lieber (, a, ) points out the existence of compounds like city employee in which the first constituent by necessity must be interpreted as the external argument of employ, but the existence of compounds like court desegregation, census enumeration, community exploration, or eyewitness misidentification, in which context makes it clear that the first constituent is to be interpreted as external argument, suggests that the analysis of argumental compounds faces challenges that have yet to be met.10

3.5 Other theoretical issues

In the sections above we have looked at what might be considered 'big picture' issues that have been prominent in the theoretical modeling of word formation processes over the last four decades. Here we will look at a variety of issues that arise independently of the formal mechanisms embraced by a theory or the positioning of those mechanisms relative to other components of the grammar. I include here such issues as the applicability of the notion of 'headedness' to word formation, the characterization of productivity and blocking, the mechanisms that determine the ordering of affixes, so-called 'bracketing paradoxes', and the applicability of the notion of 'paradigm' to word formation.

10 All examples are from the Corpus of Contemporary American English (COCA).

3.5.1 Headedness

The notion of 'head' is a familiar one in syntactic theory: the head is typically defined as the item that determines the category and distribution of the phrase as a whole. Zwicky (c), however, points out that in syntactic phrases there are a number of characteristics that contribute to the definition of head, among them that the head is the element that subcategorizes for or governs other items in the phrase, the element that determines agreement or concord, and the element that determines the semantic type of the whole. Starting with Williams (), the notion of head has been applied within morphology as well, although its utility has been the subject of substantial controversy (Zwicky c; Hudson ; Scalise ; Bauer ; Haspelmath ; Štekauer b; Scalise and Fábregas ). Williams (: ) defines the head as the element that "has the same properties" as the word as a whole. Among those properties that he mentions explicitly are category, inflectional class (which we will not be concerned with here), and lexical features, among which he counts features like [+latinate].11

The chief difficulty with applying the notion of head to complex words is that it only corresponds in part with the notion as it is used in syntax. As Zwicky (c) points out, with respect to compounds one element (the righthand one in English) determines the category, morphosyntactic features, and semantic type of the complex word, although the element that is head in that sense cannot be said in any meaningful way to select for or subcategorize the non-head element of the compound. On the other hand, some affixes do seem to select their bases, as well as determine the category and morphosyntactic features of the complex word, but cannot be said to determine semantic type, at least not in the same way we understand this with regard to phrases or compounds. Closer scrutiny complicates the picture with respect to affixation in that subcategorization or selection can be a two-way matter, with affixes selecting specific sorts of bases, but also bases selecting specific affixes (Bauer ; Giegerich ). Further, the existence of exocentricity in compounding—that is, compounds that either have no clear head (e.g. argumental compounds like pickpocket or coordinate compounds like parent–teacher in a parent–teacher conference) or possibly two heads (e.g. coordinate compounds like blue-green)—strengthens the conclusion that the concept of head in word formation, whatever its utility, cannot be the same as it is in syntax.
Again, it is worth asking what is at stake in this debate. In terms of the morphological theory of the s and s, what was at stake was the legitimacy of modeling word formation rules on analogy to syntactic rules: if ‘head’ is an important notion in syntax, of necessity it must be in morphology, and more particularly, we would expect the notions to coincide. Insofar as some current theories of morphology continue in this tradition (see §..), the problem of determining what we mean by the head of a complex word still remains. But in other frameworks, it is not clear that much is at issue, in the sense that ‘head’ may be defined independently for morphology and syntax, or dispensed with altogether in morphology. For example, in a recent assessment of the notion of headedness in the framework of Construction Morphology, Arcodia () notes that headedness in word formation is not a phenomenon per se, but something that emerges more or less strongly as a higher-level generalization across schemas. Indeed nothing would be missed in the theory if the notion of head were dispensed with altogether.

11 Williams also proposes what he calls the Righthand Head Rule (RHR), which determines the rightmost element in a complex word to be the head. I will not discuss the RHR here, as it is clear that from a cross-linguistic perspective it cannot be correct.

3.5.2 Productivity and blocking

Another issue that is largely independent of and orthogonal to major theoretical debates concerns two interlocking concepts: productivity and blocking. Productivity is concerned with the ease with which new words can be formed using various word formation processes. Blocking is concerned with the purported tendency of an existing word with one meaning or function to preclude the coining of another word with the same meaning or function. While the definition of productivity has not been hugely controversial, how to measure productivity has been more so, and especially perplexing has been the question of what blocking is, or even whether it exists at all.

Discussions of productivity often start with inflection, as inflection typically presents a relatively clear pattern. Inflectional processes tend to be highly productive in the sense that every noun will have some sort of plural and every verb some sort of past tense in a language which inflects for those distinctions. Further, the existence of one inflected form, say, an irregular plural like mice or feet, will typically preclude the existence of a regular form (*mouses, *foots). In contrast, derivation presents a much less neat picture and indeed necessitates a closer look at these concepts. Unlike inflection, derivational processes can differ widely in their productivity. Although some processes may be every bit as productive as inflectional processes (e.g. suffixation of ‑ness in English), others may show only moderate productivity (e.g. ‑ity or ‑ment), and others might be entirely unproductive (e.g. ‑th). Bauer (), following Corbin (), teases apart two strands in the notion of productivity. First, we can treat word formation processes in a binary fashion as being either available or unavailable. Then, for those that are available, we can focus on the degree to which they can be used, so that word formation processes may be more or less profitable. In other words, word formation processes either give rise to new words or they don't (availability), and if they do, they may do so to a greater or lesser extent (profitability).
How to measure productivity in more than an impressionistic way is a dicier matter. The productivity of a word formation process cannot be assessed simply by counting types in a dictionary, for example, as the point of measuring productivity is not to see how many words there already are with a particular affix, but how many words in principle there could be. Further, word formation processes frequently have restrictions of various sorts. Some affixes may attach only to bases of specific syntactic and semantic categories, or to bases that have a particular segmental or prosodic shape. One might think that the more restrictions that accrue to a process, the less productive it is likely to be. But as Aronoff () points out, perhaps what we should be concerned about in assessing productivity is not how restricted the pool of bases is for a given affix, but the extent to which the affix is found to attach to those bases that meet the relevant restrictions. Another way of looking at the productivity of an affix is to observe frequency patterns of derivatives with that affix in a corpus. As Baayen (, ) suggests, the more productive a process is, the higher the proportion of low-frequency formations we expect that process to contribute to a corpus; he argues that the proportion of hapaxes (defined as items with a frequency of one) formed with a particular affix to the overall number of tokens formed with that affix can give a rough measure of productivity in this sense. Further, the higher the proportion of low-frequency items, the likelier we are to find a process semantically transparent. In other words, there is an inverse correlation between semantic compositionality and frequency: the more high-frequency items are to be found with an affix, the higher the level of lexicalization they display. Intimately intertwined with the notion of productivity is the notion of blocking.
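Baayen's hapax-based measure lends itself to a small worked sketch. The token lists below are invented stand-ins for corpus hits, not real data; the point is only to show how the proportion of hapaxes among an affix's tokens yields the productivity figure just described:

```python
from collections import Counter

def productivity(tokens):
    """Baayen-style P: hapaxes formed with the affix / total tokens with the affix."""
    counts = Counter(tokens)
    hapaxes = sum(1 for c in counts.values() if c == 1)
    return hapaxes / len(tokens)

# Invented token samples standing in for corpus hits of -ness and -th derivatives.
ness_tokens = ["happiness", "sadness", "blueness", "aptness",
               "happiness", "greenness", "awkwardness"]
th_tokens = ["warmth", "warmth", "warmth", "growth", "growth",
             "length", "length"]

print(productivity(ness_tokens))  # many hapaxes: a relatively high P
print(productivity(th_tokens))    # no hapaxes at all: P = 0.0
```

On these toy counts the productive suffix scores high because most of its types occur only once, while the unproductive one scores zero because every type recurs, mirroring the inverse correlation between frequency and transparency noted above.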
Aronoff (: ) defines blocking as "the nonoccurrence of one form due to the simple existence of another". The example he gives is that simplex forms like glory or fury seem to block forms like gloriosity and furiosity.12 The assumption that seems to lie behind blocking is that for items that are listed (as opposed to coined online), there is only one semantic slot to fill in the mental lexicon. Once a slot is filled, another form cannot be coined to fill that slot; so furiosity is blocked by fury, but furiousness can be formed, as long as we assume that the output of highly productive word formation processes is not stored (an assumption that has been called into question in the psycholinguistic literature; see, e.g., de Vaan, Schreuder, and Baayen ). Much of the discussion of blocking (e.g. Kiparsky ; van Marle ; Rainer ; Plag ; Giegerich ) is devoted to discussing the precise mechanisms that account for the phenomenon, trying to make blocking follow from other aspects of the grammar—say stratification or the Elsewhere Condition—or from functional principles like the avoidance of synonymy. But such theorizing presupposes that speakers actually do avoid synonymy and therefore that blocking actually is a phenomenon in need of explanation. I would argue that it is not. Bauer, Lieber, and Plag (: ) note that there are two sorts of problems with the idea of blocking. First, in this age of massive corpora it is impossible in principle to make clear-cut judgments on the occurrence or non-occurrence of particular forms; it is dangerous to rely on our intuition for such judgments, as words that might seem implausible or downright bad can often be found attested in a large enough corpus, and indeed can seem entirely unremarkable in context. Second, if the function of blocking is to avoid synonymy, we need to be clear on what we mean by synonymy if the concept is to be useful.
If items cannot be deemed synonymous unless they are identical not only in denotation, connotation, and register, but are also used by all members of a speech community (across gender, socioeconomic status, and so on) at a specific moment in time, we would have a hard time proving any pair of words ever to be synonymous. If we stick to denotation, connotation, and register alone, however, the concept of blocking turns out to be chimerical. A prime example might be the case of English ‑ity and ‑ness. Riddle () has argued that although doublets occur with these affixes, there is always a subtle difference in meaning between the items in the pair. She points to examples like hyperactivity, which is a medical condition, as opposed to hyperactiveness, which is merely a property that can be attributed to an individual without implying a diagnosis. More generally, she suggests that if doublets appear with these two suffixes, the ‑ity form will always denote something that is more specialized or reified than the ‑ness form. But as Bauer, Lieber, and Plag (: –) point out, it is not difficult to find pairs like purity and pureness, exclusivity and exclusiveness, passivity and passiveness that are attested in COCA in contexts that reveal no semantic distinction. Similar examples can be found with affixes that derive adjectives from nouns (‑al, ‑ic, ‑esque, ‑y) and even among negative prefixes (in-, un-, de-, dis-, a-) or affixes that form event/result nouns from verbs (‑ation, ‑al, ‑ment, ‑ure, etc.). Indeed what we find

12 Rainer () distinguishes between what he calls token blocking, also called synonymy blocking elsewhere in the literature (Giegerich ), and type blocking. The former is the sort of blocking that Aronoff defines. The latter involves cases where affixes that have the same function (say the suffixes ‑al and ‑ment, both of which form nouns from verbs) preclude each other. Plag () argues that this should not be treated as blocking, but rather is a matter of competition between rival affixes, which is frequently governed by complementary selectional properties of those affixes.


is that derivational doublets abound, some of which reveal distinctions in meaning, but many of which do not. Data such as these point to the conclusion that blocking per se does not exist, and that our intuitions to the contrary must be the result of ease of lexical access in the mental lexicon, which in turn is linked to frequency effects.

3.5.3 Affix ordering

Another issue that is in principle independent of the choice of theoretical framework is that of accounting for the restrictions that can be observed on the ordering of affixes. To the extent that the matter has been studied cross-linguistically, several generalizations are agreed upon. First, it is normally the case that inflectional affixation occurs outside derivational affixation. Second, within derivation, it seems most often to be the case that the number of combinations of affixes that are attested is far smaller than would be predicted if just categorial selection were at stake; that is, if affix a attaches to adjectives and affixes b, c, and d form adjectives, we might expect a to be found on bases ending in any of b, c, or d. But this is rarely what we find. The question, then, is how to explain the restrictions that we do find.

The earliest extended treatments of affix ordering phenomena in the generative tradition are in the framework of Lexical Phonology and Morphology, discussed in §.. above. Theoretically, any affix at Level  should attach to the output of both Level  and Level , assuming that the c-selectional properties of that affix are met. But as Fabb () points out, this is not at all the case. In English, far fewer combinations are attested, and Fabb argues that other principles must be at stake. His solution is to assume that ordering restrictions follow entirely from the selectional properties of individual affixes. Aronoff and Fuhrhop (: –) argue that there can also be language-specific constraints on affix ordering: they suggest that German, for example, has a class of closing suffixes including ‑heit/‑keit, ‑in, ‑isch, among others, by which they mean suffixes that must be the last affix in a word, unless some sort of linking element is added outside them to facilitate further affixation (so schönheitslos 'beautyless' is possible, but not schönheitlos).
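A selection-based alternative in the spirit of Fabb's proposal can be sketched in the same toy style: each suffix constrains not just the category of its base but which inner suffix, if any, it tolerates. Every entry below (the categories selected and the sets of tolerated inner suffixes) is hypothetical and chosen only to illustrate how affix-specific selection prunes the combinations that pure category selection would allow:

```python
# Hypothetical selectional entries: category of the base, plus which suffix
# the base may already end in (None = no restriction; empty set = underived only).
SELECTS = {
    "-ness": {"base_cat": "A", "after": None},              # any adjective
    "-less": {"base_cat": "N", "after": set()},             # only underived nouns
    "-ity":  {"base_cat": "A", "after": {"-al", "-able"}},  # only some derived adjectives
}

def can_attach(suffix, base_cat, last_suffix):
    """Check category selection plus suffix-specific restrictions on the inner suffix."""
    entry = SELECTS[suffix]
    if entry["base_cat"] != base_cat:
        return False
    if entry["after"] is None:      # suffix imposes no restriction on the inner suffix
        return True
    if last_suffix is None:         # underived bases are always acceptable
        return True
    return last_suffix in entry["after"]

print(can_attach("-ity", "A", "-able"))  # True: cf. readable -> readability
print(can_attach("-ity", "A", "-ish"))   # False: an -ish base is rejected here
print(can_attach("-ness", "A", "-ish"))  # True: cf. selfishness
```

The point of the sketch is that attested gaps follow from per-affix entries rather than from any stratal architecture, which is exactly how Fabb's account dispenses with level-ordering as an explanation.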
For English they propose the constraint that only one Germanic suffix is possible in any derived word.13 Hay (, ) argues that affix ordering is largely driven by principles that are less formal and more psycholinguistic in nature, an approach that has been called by Plag (: ) “Complexity Based Ordering”. According to this principle, the ordering of affixes is a matter of their parseability, that is, the extent to which they can be easily separated from their bases, as well as the extent to which their effects are semantically transparent. More parseable affixes will occur outside less parseable ones. Crucially, parseability may differ for a given affix from one form to the next, so that, for example, the suffix ‑ment is less parseable in a word like government than in one like amusement. Work along these lines is pursued in Plag (), Hay and Plag (), Baayen and Plag (), Manova (), and Zirkel (), among others. For a much more complete overview of the subject of affix ordering, the reader is referred to Saarinen and Hay ().

13 As Bauer, Lieber, and Plag () show, however, data from corpora like COCA suggest that it is not unusual to find more than one Germanic suffix in complex words in English.


3.5.4 Bracketing paradoxes

The term 'bracketing paradox' is used to refer to several different kinds of structural mismatches that can be found in complex words; it is not a unitary phenomenon. On the one hand, we find cases where the arrangement of morphemes that is dictated by phonological restrictions on affixation is different from that implied by the semantic interpretation of the complex word: so, for example, the form unhappier must be structured as [un [[happy] er]] on the grounds that comparative -er in English does not typically attach to trisyllabic bases, but as [[un [happy]] er] on the basis of its meaning 'more unhappy'. On the other, we find cases like ungrammaticality which are given the structure [un [[grammatical] ity]] in order to conform to the level-ordering of Lexical Phonology and Morphology, but where the semantic interpretation would dictate the structure [[un [grammatical]] ity]. Similar to the ungrammaticality cases are cases like transformational grammarian or nuclear physicist (Spencer ), in which semantics dictates that the compound be internal to the derivational affix, whereas derivation is typically assumed to be internal to compounding.

What is important to note is that the two cases can be treated quite differently. Whether cases like ungrammaticality or transformational grammarian constitute bracketing paradoxes is a theory-internal matter. In theories that are not based on level-ordering, there is nothing that forces the structure in which un- is external to ‑ity. Similarly, if one's theory allows affixation to apply to compounds, as the data in Bauer, Lieber, and Plag () suggest is necessary, only one structure is necessary for the transformational grammarian cases, namely the one that coincides with the semantic interpretation. Cases like unhappier are not so easily dismissed, however. As nicely summarized in Spencer (: ch.
), there have been a number of approaches to bracketing paradoxes in the literature, including principles of lexical relatedness (Williams ; Masini and Audring, Chapter  this volume), mapping (Sproat ), and movement (Pesetsky ). However, if we must only account for the unhappier class of cases, a simpler solution is available. If we assume that words have both prosodic structure and morphological structure, as seems likely on independent grounds, and that the two structures need not be isomorphic, we can adopt an approach in which so-called bracketing paradoxes are simply words that have non-isomorphic prosodic and structural analyses. Both Aronoff and Sridhar (, ) and Booij and Lieber () approach bracketing paradoxes in these terms.
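The non-isomorphism solution can be made concrete with a toy encoding in which a single word carries two independent bracketings; the tuple representation and the flatten helper below are illustrative assumptions, not the chapter’s formalism.

```python
# Toy encoding of the two analyses of 'unhappier' discussed above.
# Nested tuples stand in for bracketings; strings are morphs.

unhappier = {
    # semantic/morphological bracketing: [[un happy] er] 'more unhappy'
    "morphological": (("un", "happy"), "er"),
    # bracketing forced by the phonology of comparative -er: [un [happy er]]
    "prosodic": ("un", ("happy", "er")),
}

def flatten(tree):
    """Linearize a bracketing into its component morphs."""
    if isinstance(tree, str):
        return [tree]
    return [morph for sub in tree for morph in flatten(sub)]

# The two trees differ...
assert unhappier["morphological"] != unhappier["prosodic"]
# ...but exhaust the same morphs, so the "paradox" dissolves once the two
# kinds of structure are allowed to be non-isomorphic.
assert flatten(unhappier["morphological"]) == flatten(unhappier["prosodic"]) == ["un", "happy", "er"]
```

On this view no single tree has to satisfy both the phonology and the semantics; each constraint inspects its own structure.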

3.5.3 Derivational paradigms

I briefly take up one final theoretical issue here which has not been prominent in the literature, but which perhaps should be more so: the question of whether the notion of ‘paradigm’ is of relevance with respect to word formation, as opposed to inflection. For a recent general overview of the topic, see Štekauer (). What we mean by ‘paradigm’ is a bit different with regard to derivation than with regard to inflection. In the latter case, we can think of a paradigm as the set of word forms that realize all relevant inflectional properties for a given lexeme (Bauer a: –). For derivation we are dealing by definition with sets of different lexemes, so what we mean by derivational paradigms are

sets of derived forms that share the same base. These are sometimes referred to as morphological families.

We might ask whether there is any need for the derivational paradigm as a construct within a theory of word formation. Certainly the concept of morphological family has been shown to be a useful one in the psycholinguistic literature (Schreuder and Baayen ; Plag and Baayen ) in explaining results of reaction time experiments concerning derived forms. But even in purely theoretical terms the argument might be made for its utility as a theoretical construct. For example, there are pockets of derivation in English whose behavior is hard to explain in terms of independent rules because the primary relationship exhibited is not between the derived form and its base, but between two or more derived forms. Van Marle () notes that there are semantic relationships that obtain between derived words sharing a base that cannot be explained simply by reference to the shared base. For example, Bauer, Plag, and Lieber (, ch. ) observe that for every noun in ‑ism, there are potential derived forms in ‑ist and ‑istic. Interestingly, the ‑istic adjectives are frequently more closely related semantically to the ‑ism nouns than to the ‑ist nouns; so communistic is typically interpreted as meaning ‘pertaining to communism’ rather than as ‘pertaining to communists’.

Another point of utility for the derivational paradigm is as a mechanism for explaining certain back-formations. As Bauer () and Štekauer () point out, given a derivational paradigm like aggressive, aggressor, aggression . . . alongside active, actor, action, act, we are led to backform the verb aggress. In other words, the derivational paradigm supplies a frame in which local analogy can be said to operate.

A third area of interest involves so-called ‘splinters’, that is, pieces of words that do not constitute morphemes in any classic sense, but which nevertheless give rise to new forms.
For example, in English items such as -holic (sexaholic, workaholic, chocoholic, yogaholic) or -rific (cheeserific, moisturific, Twitterific, splatterific) cannot be said to constitute bona fide affixes, but they nevertheless allow creation of new forms on analogy to established forms like alcoholic or terrific. It might be argued that items of this sort are too marginal for a theory of word formation to owe them any formal treatment, but we dismiss them at our peril. Each one of these may give rise to only a handful of new forms, but there are a lot of such splinters and until we analyze them we do not really know what they have to tell us about word formation.
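The paradigm-driven back-formation described above can be sketched as proportional analogy: base : derived form in a model paradigm :: unknown base : derived form in a defective paradigm. Everything below (the function, the dictionary encoding, the slot labels) is an illustrative assumption, not machinery from the chapter.

```python
def backform(defective, model, model_base):
    """Solve the proportion model_base : model[slot] :: X : defective[slot]."""
    for slot, form in model.items():
        if slot in defective and form.startswith(model_base):
            suffix = form[len(model_base):]          # e.g. 'action' -> 'ion'
            if defective[slot].endswith(suffix):
                return defective[slot][: -len(suffix)]
    return None

# Model paradigm (with attested base 'act') and a defective paradigm
# whose base verb is unattested.
act = {"ive": "active", "or": "actor", "ion": "action"}
agg = {"ive": "aggressive", "or": "aggressor", "ion": "aggression"}

print(backform(agg, act, "act"))  # -> aggress
```

The point of the sketch is only that the coinage is licensed by the paradigm as a whole, not by any single derivational rule applied in reverse.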

3.6 Conclusion

What can we conclude from this overview about the theory of word formation? First, I would argue that the nature of the formalism we use for modeling rules of word formation is perhaps not as critical as we once thought. Second, regardless of what we think about the formal nature of rules of morphology, the semantic characterization of morphemes (or morphological rules or functional projections that host morphemes) deserves much more attention than it has hitherto received. Most importantly, I suggest that we still have a lot to learn about the data that a theory of word formation should be responsible to. It goes without saying that theorists want to make the strongest possible claims that are consistent with the data, but we still have much to learn about what those data are, even for

well-studied languages like English, Dutch, or Italian. Finally, attention to what might seem like small matters—whether or not blocking is a real phenomenon, the analysis of affix order, the nature of derivational paradigms and apparent local analogy as a mechanism for word formation—all point to a significant difference between morphology and other components. It is overwhelmingly clear that word formation involves computation of some sort—that is, the generation of new forms. But it is also apparent that because already coined words may be stored, some of the patterns we find may be attributed to the effects of frequency and the concomitant vagaries of lexical access. This suggests that although word formation may be analogous to syntax or phonology in some ways, it is nevertheless a unique domain that must be studied on its own terms.


  ......................................................................................................................

                  ......................................................................................................................

 

I languages that exhibit inflectional morphology, inflection is at the center of the grammar, interfacing nontrivially with phonology (chief/chiefs/*chieves but thief/*thiefs/thieves), syntax (that dog/*dogs, those dogs/*dog), semantics (finite forms of the auxiliary verb have receive a perfect interpretation, but its infinitive form need not—*He has graduated in , but He may have graduated in ), and pragmatics (she is not here and she ain’t here are logically but not pragmatically equivalent). Given this centrality, the wide divergence among current theories of inflectional morphology is somewhat surprising. In this chapter, I discuss the fundamental points of disagreement among these theories.1 I distinguish two axes along which theories of inflectional morphology may be situated relative to one another (Stump ). The first is the lexical/inferential axis. In a lexical theory of inflectional morphology, inflectional morphemes are lexically listed, and are therefore subject to the same principles of lexical insertion as ordinary lexical morphemes.2 In a lexical theory, the Latin verb form laudāvistı ̄ ‘you (sg.) have praised’ arises through the insertion of the lexically listed morphemes laud-, ‑ā, -v, and ‑istı ̄ into a particular constituent structure. Inferential theories, by contrast, rely on rules to infer inflectionally complex word forms from more basic stems or from other word forms; inflectional morphemes are not listed in the lexicon, but are the mark of a particular step in the inference of a complex word form. Such inferences may be stem-based or word-based: for example, laudāvistı ̄ might be deduced from more basic stems through a chain of inferences: laud- ! laudā- ! laudāv- ! 
laudāvistı ;̄ alternatively, laudāvistı ̄ might be inferred from the contrasting word form laudāvı ̄ ‘I have praised’, which implies (and is implied by) laudāvistı .̄ In lexical theories, the structure of inflected word forms is defined by independently motivated principles of syntactic structure (either directly by means of phrase structure rules or transformationally, for example through multiple applications of head movement); such theories are “syntactocentric”. In inferential theories, by contrast, inflectional

1 I thank Jenny Audring and two anonymous referees for helpful suggestions during the preparation of this chapter.
2 The term ‘lexical’ as used here should not be confused with the term ‘lexicalist’; see Montermini (Chapter  this volume) for discussion of lexicalism in the latter sense.

markings are not lexically listed, but are defined by rules that allow inflectionally complex word forms to be inferred either from more basic stems or from related word forms; lexical insertion applies not to individual morphemes, but to whole words. Because the inferential rules that are assumed to define fully inflected word forms have no analogue in syntax, such theories entail that morphology is an autonomous grammatical component rather than an expression of syntax; inferential theories are therefore “morphocentric”.

The lexical/inferential axis is orthogonal to the incremental/realizational axis. In an incremental theory, each inflectional morpheme is associated with a particular morphosyntactic content—in the lexicon, if the theory is lexical, and in a rule of inference, if the theory is inferential—and each complex combination of morphemes acquires its morphosyntactic properties cumulatively, through the combination of the morphosyntactic properties of the individual inflectional morphemes of which it is composed. In a theory of this sort, laudāvistī acquires the properties ‘second person’ and ‘singular’ through the lexical insertion of the agreement suffix ‑istī or by means of a rule inferring laudāvistī, either from the perfective stem laudāv- or from a related word form (e.g. laudāvī). Thus, in an incremental approach to inflection, a word form’s morphosyntactic content is supplied in steps.

In a realizational theory, a word form’s association with a particular set of morphosyntactic properties logically precedes the expression of those properties by particular inflectional markings: it is precisely this association that determines the lexical insertion of its affixes (if the theory is lexical) or determines the rules by which it is inferred from a stem or related word form (if the theory is inferential). In such a theory, the association ⟨LAUDĀRE, {2 sg present perfective indicative active}⟩ licenses either the lexical insertion of the morphemes laud-, ‑ā, ‑v, and ‑istī, or the stem-based chain of inferences laud- → laudā- → laudāv- → laudāvistī, or the word-based inference of laudāvistī from (say) laudāvī. Thus, in a realizational approach to inflection, a language’s grammar specifies the sets of properties with which a lexeme L may be associated, and for each such property set σ, the morphology of the language defines the word form realizing the pairing ⟨L, σ⟩.

The distinction between incremental and realizational theories of inflectional morphology cross-cuts the distinction between lexical and inferential theories, defining a typology of four broadly different kinds of theories of inflectional morphology (Stump : –). One way of seeing the differences among these four kinds of theories is by reference to the pretheoretic assumption that inflectional morphology is morphology that expresses the association of a word form w with a morphosyntactic property set σ: each of the four kinds of theories makes different assumptions about how such associations arise. In a lexical-incremental theory (such as that of Lieber ), the association of w with σ arises in syntax, where the form of w and its morphosyntactic content are assembled in parallel by the lexical insertion of w’s root and of the individual morphemes expressing the properties in σ. In a lexical-realizational theory (e.g. Halle and Marantz ), the association of w with σ also arises in syntax, where the functional heads bearing the various properties in σ are gathered under a single lexical node into which w’s root is inserted along with the inflectional morphemes whose lexically specified content matches one or more of the properties in σ. In an inferential-incremental theory (e.g. Steele ), the association of w and σ is defined morphologically: w acquires the properties in σ only as a concomitant of its inference from one or more other forms, whether these be more basic stems or related word forms. In an inferential-realizational theory (e.g. Matthews ; Anderson ; Stump ), the association of w with σ is again defined morphologically. The word form w is seen as

realizing a cell ⟨L, σ⟩ in the inflectional paradigm of a lexeme L. If the realization of this cell by w is not simply stipulated, then the form of w is inferred from ⟨L, σ⟩, either as a function of the stem of L appropriate for expressing σ or as a function of the word form realizing some other particular cell in L’s paradigm.

At a finer level of analysis, each of these four kinds of theories has a heterogeneous range of possible instantiations. For instance, a lexical-realizational theory might define the hierarchical structure of a complex inflected word form by means of phrase-structure rules that apply below the word level; or it might instead derive this structure transformationally, by means of head movement. Thus, the current theoretical landscape in inflectional morphology involves several dimensions of divergence. These dimensions can be pieced apart by considering five general questions:

• What are the basic units in terms of which a language’s inflectional morphology is defined?
• What sorts of structures does a language’s inflectional morphology define?
• What is the relation between concatenative and nonconcatenative inflectional morphology?
• How is the relation between a word form’s morphosyntactic properties and its inflectional exponents defined?
• What distinguishes inflectional morphology from other kinds of morphology?

Different theories of inflectional morphology answer these questions (or endeavor to do so) in different ways.
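As a rough illustration of the inferential-realizational option (a drastic simplification of any actual formalism, with invented rules and macrons omitted), realizing a cell ⟨L, σ⟩ can be modeled as passing the lexeme’s root through a chain of property-conditioned rules:

```python
# Each rule pairs a property-set condition with a form-altering operation;
# a rule applies when its condition is a subset of the cell's property set σ.
RULES = [
    (set(),                     lambda stem: stem + "a"),     # theme vowel: laud- -> lauda-
    ({"perfective"},            lambda stem: stem + "v"),     # perfect stem: lauda- -> laudav-
    ({"2", "sg", "perfective"}, lambda stem: stem + "isti"),  # 2sg perfect ending
]

def realize(root, sigma):
    """Realize the cell <lexeme-of-root, sigma> by chaining applicable rules."""
    form = root
    for condition, operation in RULES:
        if condition <= sigma:                 # subset test on property sets
            form = operation(form)
    return form

print(realize("laud", {"2", "sg", "perfective", "indicative", "active"}))  # -> laudavisti
```

Note that the property set logically precedes the form: the rules consult σ, and no affix contributes content of its own, which is exactly what distinguishes this setup from an incremental one.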

. W   ?

Before proceeding, it will be useful to stake out the domain of discussion. Throughout, I regard inflectional morphology as morphology whose function is to express the association of a lexeme with a particular morphosyntactic property set. Seen in this way, inflectional morphology is logically bound to the notions ‘lexeme’ and ‘morphosyntactic property’.

Like phonemes in phonology, lexemes are a theoretical abstraction expressed by more concrete entities: just as a phoneme is typically expressed by different speech sounds in different phonological contexts, a lexeme is typically expressed by different word forms in different syntactic contexts (as the English lexeme GO is expressed by the word forms go, goes, went, gone, and going in the contexts in ()). A lexeme possesses the characteristics common to the word forms that realize it: their syntactic category, inflection-class membership, shared lexical meaning, and association with a particular stem inventory; a lexeme may likewise be seen as possessing the paradigm of word forms that realize it.

()  a. She must go home.
    b. Whenever she goes home, she takes the bus.
    c. She got up and went home.
    d. She has gone home.
    e. She is going home.

A morphosyntactic property is a grammatical property to which syntax and morphology are both sensitive. In syntax, constituents carry sets of morphosyntactic properties that condition their combination with other constituents. A modifying constituent may have to share certain properties with the constituent that it modifies; a predicate may have to share certain properties of one or more of its arguments; an anaphor may have to share certain properties of its antecedent; and the head of a phrase may require its complement to carry a particular property. In morphology, a lexeme’s word forms differ in content because they carry distinct morphosyntactic property sets and they differ in form because of the different ways in which these property sets are realized morphologically.

A morphosyntactic property is one of a set of contrasting values in some inflectional category:3 for example, the morphosyntactic properties ‘singular’ and ‘plural’ are the contrasting values in the English inflectional category of number. A morphosyntactic property often has a regular semantic interpretation, but this is not invariably so; for instance, gender properties often exhibit a high degree of semantic arbitrariness. The morphological expression of a morphosyntactic property (or set of such properties) is the exponent of that property or property set; thus, the exponents of past tense in the respective verb forms walked and sang are the suffix -ed and the ablaut substitution /ɪ/ → /æ/.

Consider now the divergent approaches to modeling the morphological expression of a lexeme’s association with a particular morphosyntactic property set, with particular attention to the five fundamental issues.
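The two exponents of past tense just mentioned can be rendered as operations rather than things; in this hedged sketch the spelling-level substitution i → a stands in for /ɪ/ → /æ/, and the function names are mine:

```python
def suffix_ed(stem):
    """Concatenative exponent: suffixation of -ed (walk -> walked)."""
    return stem + "ed"

def ablaut_i_to_a(stem):
    """Nonconcatenative exponent: replace the stem's last i (sing -> sang)."""
    i = stem.rfind("i")
    return stem[:i] + "a" + stem[i + 1:]

# Which operation expresses past tense is a lexeme-specific matter.
PAST = {"walk": suffix_ed, "sing": ablaut_i_to_a}

print(PAST["walk"]("walk"))  # -> walked
print(PAST["sing"]("sing"))  # -> sang
```

Both functions express the same property (past tense); only the kind of formal operation differs, which is the sense in which an exponent need not be a segmentable piece of the word.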

4.2 What are the basic units in terms of which a language’s inflectional morphology is defined?

The study of morphology is predicated on the observation that although some word forms are arbitrary pairings of a phonological form with some semantic or grammatical content, other word forms are signs in which the pairing of form with content is not entirely arbitrary, either because they are analyzable into smaller content-expressing formatives or because they are analyzable as arising from simpler signs through the application of one or more content-expressing operations. That is, morphology is based on the observation that word forms may be related to one another. What sorts of units does this relatedness entail?

In discussing the parts of a word form’s structure, it is customary to distinguish segmentable morphological units (e.g. word forms, roots, stems, affixes) from morphological operations (e.g. affixation, ablaut, reduplication). Morphological units and morphological operations both serve to express a particular meaning or mark a particular grammatical

3 I favor the term ‘inflectional category’ over ‘feature’; the latter term tends to be used imprecisely, sometimes referring to true inflectional categories (e.g. tense) and sometimes to specific morphosyntactic properties (e.g. past tense).

function; a morphological unit is said to be minimal if it cannot be segmented into smaller morphological units.4

The relation between morphological units and morphological operations is complex. Some morphological units are invariably associated with some morphological operation, as the past-tense marker ‑ed is invariably introduced by suffixation. But morphological operations do not always introduce morphological units: some change existing morphological units (e.g. the operation that relates the singular noun stems wife-, house- /haʊs/, and wreath- /ɹiθ/ to their plural counterparts wive-, /haʊz-/, and /ɹið-/) while others delete existing units (as in the Uto-Aztecan language Tohono O’odham, where a verb’s perfective forms arise from their imperfective counterparts through the deletion of a final consonant [e.g. ‘shoot’, imperfective singular gatwid, perfective singular gatwi; ‘peel’, imperfective singular ’elpig, perfective singular ’elpi]; see Anderson : , citing Zepeda ).

The proper analysis of morphological units that are associated with morphological operations is a matter of debate. On one view, such morphological units are lexically listed, and their association with a particular morphological operation is represented as a subcategorization restriction comparable to the restriction between a head and its complement in syntax. An alternative view is that such morphological units have no grammatical reality apart from the morphological operation that introduces them, so that the suffixation of ‑ed in walked and the ablaut operation defining the verb form clung have fundamentally the same status in the morphology of English.

Morphological operations are of various kinds; these are discussed in detail in §.. Morphological units are likewise of various kinds. Free morphological units may appear as independent word forms; bound morphological units do not. A bound morphological unit that is introduced by a morphological operation is an affix. Affixes are generally distinguished by the kind of morphological operation that introduces them (as prefixes, suffixes, and infixes); but not all affixes are invariably associated with a single morphological operation. In the inflection of Swahili verbs, for example, relative affixes are prefixed by default (as in (a)), but suffixed in tenseless affirmative verb forms (as in (b)); see Ashton : ff.; Stump b.5

()  a. a–na–vyo–vi–soma
        :.m––:.vi–:.vi–read
        ‘(books that s/he) is reading’
    b. a–vi–taka–vyo
        :.m–:.vi–want–:.vi
        ‘(books that s/he) wants’

Word forms are morphological units that function as minimal free forms in syntax. Any morphological unit to which a morphological operation applies is a stem. (Minimal stems are commonly referred to as roots.) Stems and word forms cannot, in general, be equated; for example, the wive- in wives is a stem but not a word form.

4 Here, I follow tradition in assuming that a morphological segmentation is a division that coincides with a boundary between two contiguous phonological segments. Thus, wives /waɪvz/ is segmentable into /waɪv/ and /z/, but /waɪv/ is not segmentable into /waɪf/ and [+voice]. (This is not, of course, to say that wive- should not be analyzed into these parts, only that this analysis does not conform to the pretheoretic notion of segmentation.)
5 In (), I gloss a given noun class as .x, where x is the noun-class prefix carried by nouns belonging to that class. For example, .vi glosses the noun class to which the plural form vi-tabu ‘books’ belongs.

Nevertheless, there are

morphological units that may function both as word forms and as stems, and this fact makes it necessary to exercise some terminological and conceptual care.

Consider, for example, the morphological unit call. This may function as a word form, as in They call you, and it may also function as a stem, as in They are calling you. Although call may function both as a word form and as a stem, it does different work in these two functions. As a word in the sentence They call you, call expresses tense and mood, and may be seen as realizing a paradigm cell; by contrast, as a stem in the sentence They are calling you, call expresses neither tense nor mood, nor can it be seen as the (full) realization of a paradigm cell. By virtue of their identity in form and their association with the same verbal lexeme CALL, we can refer to call as the same morphological unit in both They call you and They are calling you; but we must nevertheless acknowledge the distinct functions that call serves in these sentences. Is it free or bound? One might say that it is simply free, because at least one of its uses is as a word form; but one could just as well say that it is sometimes free (They call you) and sometimes bound (They are calling you). Either way, one must acknowledge that different instances of call play different grammatical roles, and that in view of this fact, the free/bound distinction—however it is defined—may obscure as much as it reveals.

Such morphological units as affixes, roots, and word forms are uncontroversially assumed in all theories of inflectional morphology. But there is controversy about the internal structure of complex morphological units (which we address in §.) as well as about the status of minimal morphological units. Minimal morphological units are commonly referred to as morphemes, though this term has actually been used in at least two distinct ways, and not always consistently.
In one sense (Bloomfield ), a morpheme is a minimal meaningful form; in the other (Hockett ), it is an abstract unit of grammatical analysis realized by a morph (a minimal meaningful form) or by an array of morphs in complementary distribution. In the former sense, the -s in dogs and the -en in oxen are distinct morphemes; in the latter sense, they are distinct morphs realizing the same morpheme. In either sense, the morpheme is a controversial concept because of certain ancillary assumptions that have come to be associated with it, including those in ().

()  Assumptions about morphemes
    a. A language’s morphemes are lexically listed.
    b. A given word form w can be exhaustively segmented into a sequence of morphemes ⟨m1, . . . , mn⟩ such that the meaning of w is a compositional combination of the meanings of m1, . . . , mn and their hierarchical organization.

Assumption (a) is the defining feature of the lexical conception of inflectional morphology, according to which an inflected word form’s morphemes are assembled in syntax by insertion from the lexicon. For example, (a) implies that the five morphemes of Turkish adam-lar-ım-ız-dan ‘from our men’ [literally: man--...-] are inserted separately into syntactic structure. But there are at least two alternatives to this morpheme-level view of lexical insertion. One is that of lexeme insertion, according to which the lexeme  ‘man’ is inserted into a nominal node bearing the property set {plural ablative possessor:{pl}}, and that this lexeme/property-set pairing is subsequently realized as adamlarımızdan by an autonomous morphological component. Another is that of word-form insertion, according to which the paradigm of  is generated by an autonomous morphology and that the word form adamlarımızdan is inserted

from the {plural ablative possessor:{pl}} cell in this paradigm into a nominal node bearing a nondistinct property set. These latter two views of lexical insertion imply an inferential conception of inflectional morphology—a conception that is compatible with the lexical integrity hypothesis (Bresnan and Mchombo : ), which may be stated as in (), and to which we return in §..

()  The lexical integrity hypothesis
    The structure of a language’s individual word forms is defined separately from its syntax, and syntactic principles are insensitive to a word form’s internal morphology.

Inferential theories of inflectional morphology have tended to abandon any reference to morphemes for at least two reasons. First, ‘morpheme’ refers to things that do not obviously belong together—a fact attested to by the absence of any rule or principle that applies to all of a language’s morphemes (or to all of its morphemes that are disyllabic, or word-final, or phrase-initial, or whatever); for example, there is no principle of English grammar that applies to the morpheme dog and to the plural morpheme shared by dog-s, ox-en, and geese but not to such non-morphemes as dogs, my dog, and the initial stress of Rover. Second, the notion that a word form’s content is a function of its component morphemes (= assumption (b)) is untenable, as we will see in §.. Summarizing, current morphological theories agree on many of the basic units in terms of which they define a language’s inflectional morphology: these include units of content (syntactic categories, morphosyntactic properties) and units of form (affixes, roots, word forms). But certain units are contentious: some accord theoretical status to morphemes as the basic units of content and lexical insertion, while others reject morphemes in favor of lexemes or word forms as the units of lexical insertion, relying on principles other than morpheme composition to determine a word form’s content.
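The competing views of lexical insertion discussed above can be contrasted in miniature for adamlarımızdan. The segmentation comes from the text; the one-cell “paradigm” and the property labels below are toy assumptions introduced only for the sketch.

```python
# (i) Morpheme-level insertion: the five listed morphemes are assembled in syntax.
morphemes = ["adam", "lar", "ım", "ız", "dan"]   # segmentation from the text
assembled = "".join(morphemes)

# (ii)/(iii) Lexeme or word-form insertion: an autonomous morphology pairs the
# lexeme and property set with a whole word form (toy one-cell paradigm).
PARADIGM = {("ADAM", frozenset({"plural", "ablative", "possessor:pl"})): "adamlarımızdan"}

def realize(lexeme, sigma):
    """Return the word form realizing the paradigm cell <lexeme, sigma>."""
    return PARADIGM[(lexeme, frozenset(sigma))]

# Both routes converge on the same surface form; they differ in what syntax sees.
assert assembled == realize("ADAM", {"plural", "ablative", "possessor:pl"}) == "adamlarımızdan"
```

On routes (ii)/(iii) syntax only ever handles the whole word plus its property set, which is why these views are compatible with the lexical integrity hypothesis in () while morpheme-level insertion is not.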

4.3 What sorts of structures does a language’s inflectional morphology define?

Morphological units exhibit two kinds of structure: formal structure (the internal morphology of the individual units themselves) and relational structure (the network of paradigmatic oppositions in which the units participate). Two morphological units may be alike in their formal structure but differ in their relational structure; for instance, the word form call in They call often and the stem call- in They are calling now are identical in their internal structure but different in their relational structure: in They call often, call stands in paradigmatic opposition to other word forms (e.g. called, talk, talked), while in They are calling now, call- stands in paradigmatic opposition only to other stems (e.g. talk‑).

Conversely, two morphological units may be alike in their relational structure but differ in their formal structure: for example, the word forms dreamed and dreamt are identical in their relational structure, but differ formally, since dreamed (like seemed) involves an unmodified stem and the ‑ed suffix, while dreamt (like meant) involves a modified stem (/dɹɛm/) and the ‑t suffix. Theories of inflectional morphology differ widely in the status that they accord to these two kinds of structures.

4.3.1 Amorphousness

A common premise in morphology is that complex stems and word forms have an internal hierarchical structure comparable to that of a complex phrase in syntax. The Latin verb form laudābantur ‘they were being praised’ might be seen as possessing the hierarchical structure in Figure ., the result of several successive instances of head movement. This structure is similar to the structure of phrases in syntax; there is, however, a clear difference. Whereas syntactic principles of movement, ellipsis, and anaphora are sensitive to the internal structure of a complex syntactic expression, morphological principles are generally insensitive to the internal structure of a complex morphological expression (except to the extent that they are sensitive to its internal phonological structure, e.g. the structure in Figure .). Thus, even if the morphological definition of laudābantur involves the steps in Figure ., there is no compelling reason to assume that the verb form itself has any internal hierarchical structure other than the phonological structure in Figure .. This is, in effect, the amorphousness hypothesis (), most extensively justified by Anderson (); cf. also Janda ().

()  The amorphousness hypothesis
    A word’s form is nothing more than its phonological form.

The amorphousness hypothesis is not universally accepted; proponents of lexical theories of inflectional morphology generally reject it, as do proponents of some realizational theories. Various kinds of arguments have been advanced in support of the idea that complex word forms have internal morphological structure (as distinct from “steps” in

Vce Agr Tns

-ur

Agr Tns

V V

Vce

-nt

-ba



laud

 .. The hierarchical structure of laudābantur after successive head movement

Word Foot(w)

Foot(s)

σ(s)

σ(w)

σ(s)

σ(w)

lau



ban

tur

 .. The prosodic structure of laudābantur

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi



  laudlaudālaudābalaudābantlaudābantur

 .. Steps in the morphological definition of laudābantur

their formal definition). Central among these6 is the argument that a word form’s semantic interpretation depends on the bracketing of its internal parts (Pesetsky ; Marantz ; Sproat , ), in accordance with (b). But the notion that the interpretation of word forms is compositionally determined by their word-internal structure is not tenable, for at least three reasons (Stump , : ch. ).

... First reason: a word form’s morphology may underdetermine its content

The English word form cut may be infinitive (I will cut the rope), present indicative (I always cut the rope or untie it), past indicative (I always cut the rope or untied it), imperative (Cut the rope!), participial (I have cut the rope), subjunctive (They will require that he cut the rope), or irrealis (If he cut the rope, the balloon would float away). The only way to reconcile these facts with assumption (b) is to postulate zero morphemes expressing the inflectional categories that cut fails to express overtly in its various uses.

In theories that rely on the heavy postulation of zero morphemes, the boundary between formal structure and relational structure is blurred: zero morphemes make it possible for the dimensions of a word form’s paradigmatic oppositions to be covertly represented as part of its structure. While this may seem like a step in the direction of greater theoretical parsimony, a theory of inflectional morphology that countenances the free postulation of zero morphemes is actually much less restricted than one that does not. A theory of the former sort allows a form such as sang to be analyzed as (i) sing+ø₁, where the zero morpheme ø₁ expresses past tense and triggers the stem readjustment i → a; as (ii) sang+ø₂, where the lexically listed morpheme ø₂ expresses past tense and selects the stem form sang; as (iii) an expression of either sing+ø₁+ø₃ or sing+ø₁+ø₄, where the zero morphemes ø₃ and ø₄ express singular and plural number respectively, so that the combinations ø₁+ø₃ and ø₁+ø₄ are what trigger the suppletion of be+ø₁+ø₃ by was and that of be+ø₁+ø₄ by were but nevertheless allow sing+ø₁+ø₃ and sing+ø₁+ø₄ to share the default form sang; or as (iv) an expression of either sang+ø₂+ø₃ or sang+ø₂+ø₄. The fact that sang may also function as an irrealis form without past time reference requires the further postulation of an irrealis zero morpheme no matter which of (i)–(iv) is assumed. This variety of alternative analyses is, of course, a theoretical artifact.

Moreover, the lexical listing of so many zero morphemes (the inflection of cut, for example, requires zero morphemes for present and past tense, singular and plural number, indicative, irrealis, imperative and subjunctive mood, and past participial and infinitival content) reflects a missed generalization in a theory of this sort, which portrays as sheer lexical accident the fact that, over and over again, default affixes are overridden by zero morphemes, so that time and again, zeroes are stipulated in the lexicon with no overarching account of their unique distributional properties. There is no comparable missed generalization in inferential-realizational theories, in which “zeroes” don’t need to be lexically stipulated, but instead simply arise either because a default rule of concatenative morphology (e.g. ‑ed suffixation) is overridden by a narrower rule of nonconcatenative morphology (e.g. the rule invoking the ablauted alternant sang in the past tense) or because of the simple absence of any applicable rule (as in the past tense of cut).⁷

⁶ Other arguments are (i) that word forms are headed (Lieber ; Williams ; Kiparsky a; Selkirk ); (ii) that transformational operations may convert separate syntactic constituents into single word forms (Sadock , ; Baker ); and (iii) that the application of a morphological rule to a stem form may be sensitive to its internal structure, for example to whether it ends with a “level-” affix (Kiparsky a). None of these arguments is decisive. Concerning (i), see Zwicky (c), Bauer (), and Stump (, : ch. ); concerning (ii), see Mithun (), Di Sciullo and Williams (), and Spencer (). Purported cases of type (iii) tend to involve sensitivity either to a stem’s membership in a particular morphological class or to its phonology.
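A minimal sketch of this inferential-realizational logic in Python (a toy model, not the chapter’s formalism; the rule inventory, function names, and the use of an identity function to model “no applicable overt change” are illustrative assumptions):

```python
# Toy inferential-realizational treatment of English past-tense marking.
# A narrow, lexeme-specific rule overrides the default -ed rule;
# "zero" marking is simply overt change failing to occur, not a zero
# morpheme listed in the lexicon.

# Narrow rules of nonconcatenative morphology (lexeme-indexed).
NARROW_PAST = {
    "sing": lambda stem: "sang",  # ablaut alternant, no affix
    "cut":  lambda stem: stem,    # identity: models the absence of any overt change
}

def realize_past(lexeme: str, stem: str) -> str:
    """Realize {past}: the narrowest applicable rule wins."""
    if lexeme in NARROW_PAST:     # narrower rule overrides the default
        return NARROW_PAST[lexeme](stem)
    return stem + "ed"            # default concatenative rule: -ed suffixation

print(realize_past("walk", "walk"))  # walked
print(realize_past("sing", "sing"))  # sang
print(realize_past("cut", "cut"))    # cut
```

Note that no choice among analyses analogous to (i)–(iv) arises here: each past-tense form has exactly one derivation.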

... Second reason: a word form’s morphology may overrepresent its content

Alongside instances of underdetermination are instances of extended exponence, in which a single piece of content is simultaneously expressed by more than one exponent; in Chhatthare Limbu (Tibeto-Burman; Nepal), for example, a single instance of negation is often expressed by more than one instance of the affixal exponent ‑n, as in (6). In order to reconcile this fact with assumption (b), one would seemingly have to assume either (i) that one of the negative affixes is semantically empty, (ii) that one of the negative affixes is deleted prior to semantic interpretation, or (iii) that one of the negative affixes is inserted after semantic interpretation has already taken place. The inferential-realizational alternative is to assume that the inflectional rule introducing the negative affix ‑n applies in two different affix positions in realizing the property set {past, neg, :{pl}, :{du excl}}.⁸

(6)  si-a-n-lapp-a-chi-ŋa-n
     ‘They did not ask us.’ (Tumbahang : )

⁷ Examples like cut are problematic for inferential-incremental theories, where they necessitate a number of rules that introduce morphosyntactic properties without any phonological effect.
⁸ Examples like (6) are problematic for inferential-incremental theories, where the redundant reapplication of a rule seems to produce the expected phonological increment without any increment in content.
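The two-position realization of negation can be given a similarly hedged sketch (a Python toy; the position classes, the agreement suffix ‑ŋa, and the stem are illustrative placeholders rather than an analysis of Chhatthare Limbu):

```python
# Toy sketch of extended exponence: a single rule of exponence
# (suffixal -n realizing {neg}) is invoked in two different affix
# positions, so one semantic instance of negation surfaces twice.

def add_neg(form: str) -> str:
    """Rule of exponence realizing {neg} with the suffix -n."""
    return form + "-n"

def realize(stem: str, props: set) -> str:
    form = stem
    if "neg" in props:          # inner affix position
        form = add_neg(form)
    form = form + "-ŋa"         # placeholder agreement suffix
    if "neg" in props:          # outer affix position: the same rule reapplies
        form = add_neg(form)
    return form

print(realize("lapp", {"past", "neg"}))  # lapp-n-ŋa-n
print(realize("lapp", {"past"}))         # lapp-ŋa
```

Because one rule is responsible for both tokens of ‑n, no affix need be treated as semantically empty, deleted, or late-inserted.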


... Third reason: a word form’s morphology may misrepresent its content

In some instances, a word form’s interpretation deviates from the interpretation that its morphological structure seemingly entails. In a deponent paradigm, an inflectional pattern “lays aside” (dēpōnit) its expected function to serve a different function. Latin verbs (e.g. PARĀRE ‘prepare’) ordinarily exhibit different patterns of inflection in the active and passive voices; but in the inflection of deponent verbs (e.g. CŌNĀRĪ ‘try’), the inflectional patterns that are usual for passive forms are instead employed for expressing the active voice, and forms expressing the passive voice are lacking (see Table .).

Table .. Imperfective present indicative forms of two Latin verbs

       PARĀRE ‘prepare’           CŌNĀRĪ ‘try’
       Active     Passive         Active      Passive
1sg    parō       paror           cōnor       (none)
2sg    parās      parāris         cōnāris
3sg    parat      parātur         cōnātur
1pl    parāmus    parāmur         cōnāmur
2pl    parātis    parāminī        cōnāminī
3pl    parant     parantur        cōnantur

Despite such evidence, the notion that a word form’s meaning is compositionally computable from its morphological structure is strikingly persistent. One reason for this is that outside the domain of inflection, principles of word formation often seem to define morphological structure and semantic structure in parallel. In the Sanskrit verb form didhārayiṣati ‘s/he wants x to cause y to hold’, the root dhṛ- ‘hold’ is the basis for the derived causative stem dhāraya- ‘cause to hold’, which is in turn the basis for the derived desiderative stem didhārayiṣa- ‘want to cause to hold’. This word form might be assigned the structure in Figure ., in which the asymmetrical c‑command relation between the desiderative morpheme ‑iṣa (which induces prefixal reduplication of the stem) and the causative morpheme ‑ay (which strengthens the root vowel) corresponds to the fact that in this verb form’s interpretation, the desiderative component has wider scope than the causative component.

Figure .. Hierarchical, morpheme-based representation of Sanskrit didhārayiṣati ‘s/he wants x to cause y to hold’: [[di- [[dhār ‑ay]V ‑iṣa]V]V ‑ti]V, where ‑ti ← 3 singular present indicative active; ‑iṣa ← desiderative (induces prefixal reduplication); ‑ay ← causative (induces strengthening of dhṛ to dhār); dhār ← ‘hold’

There are, however, two possible explanations for the compositionality of word forms such as didhārayiṣati. One is that (b) is a synchronic principle of inflectional semantics, and that didhārayiṣati exhibits structural compositionality in the specific sense that it consists of a hierarchical arrangement of nodes such that the content of each node is a function of the content of the nodes that it dominates. The other explanation is that didhārayiṣati exhibits functional compositionality, in the specific sense that if the form of stem B is a function of that of a simpler stem A, then the meaning of B is likewise a function of the simpler meaning of A; thus, just as the desiderative stem form didhārayiṣa‑ is a function of the causative stem form dhāray‑, the meaning of didhārayiṣa- ‘want to cause to hold’ is a function of that of dhāray- ‘cause to hold’. The latter explanation does not presume any hierarchical structure comparable to that of Figure . and is therefore compatible with the amorphousness hypothesis.

Both of these explanations entail a kind of parallelism between morphological form and semantic interpretation. Two important facts about this apparent parallelism should nevertheless be carefully borne in mind (though too often they are not). The first is that word formation sometimes fails to exhibit this parallelism; the second is that inflection very often fails to exhibit it. Consider first the domain of word formation. In Chichewa (Niger-Congo, Bantoid), the causative suffix ‑its precedes the applicative suffix ‑il in both applicativized causatives (as in (a)) and causativized applicatives (as in (b)). Evidence of this sort shows that in some cases, the relative order of two affixes is purely a matter of morphological convention rather than the reflection of any kind of form/content parallelism; in many languages, clusters of such conventions may together constitute an overarching templatic pattern of affix ordering for which there is no obvious semantic justification. Templatic patterns of this kind may in turn interact with (perhaps structurally, perhaps functionally) compositional patterns, which may determine certain permissible deviations from the template; Hyman () shows that the set of such interactions in Chichewa is unusually complex.

()

Chichewa (Hyman )
a. Applicativized causative:
   alenjé a-ku-líl-íts-il-a mwaná ndodo.
   ‘The hunters are [making the child cry] with sticks.’
b. Causativized applicative:
   alenjé a-ku-tákás-its-il-a mkází mthíko.
   ‘The hunters are making the woman [stir with a spoon].’


In the inflectional domain, differences in morphological structure often have no semantic significance. In Swahili, for example, the prefixal exponent of negation in indicative verb forms is ordinarily ha‑, which is peripheral to a verb’s subject-agreement marker (as in (a)); in subjunctive verb forms, by contrast, the prefixal exponent of negation is si‑, to which a verb’s subject-agreement marker is itself peripheral (as in (b)). Despite the difference in affix order, there is no corresponding difference in the semantic scope of negation. ()

a. ha-tu-ta-som-a (indicative)
   ‘we will not read’

b. tu-si-som-e (subjunctive)
   ‘that we may not read’

Other instances of inflection exhibit an outright mismatch between morphological form and semantic content; recall the Latin examples in Table ., in which two contrasting kinds of content are recurrently expressed by the same inflectional morphology. Clearly, a theory of inflection cannot purport to define every word form’s content in a structurally compositional way, from a hierarchical arrangement of morphemes.

This conclusion suggests that it is the lexeme rather than the morpheme that is the fundamental unit of morphological and syntactic analysis—that (i) derivation and compounding create lexemes, (ii) inflectional morphology realizes a lexeme’s association with particular morphosyntactic property sets, and (iii) lexemes (or their realizations) are inserted into terminal nodes in syntax. In a lexeme-based theory, the content of derivatives and compounds may be attributed to functional compositionality, at least in the default case; but in the inflectional domain, issues of compositionality are essentially irrelevant if inflected forms are defined realizationally rather than incrementally.
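The notion of functional compositionality invoked above can be sketched as follows (a Python toy; the operations drastically simplify Sanskrit morphophonology, and the root is given in its strengthened grade purely for convenience):

```python
# Toy sketch of functional compositionality: each stem-forming
# operation maps a (form, meaning) pair to a new (form, meaning) pair,
# so meaning tracks form without any word-internal tree structure.

def causative(stem):
    form, meaning = stem
    return (form + "aya", f"cause to {meaning}")   # dhār-aya-, simplified

def desiderative(stem):
    form, meaning = stem
    base = form.rstrip("a")         # drop stem-final -a before -iṣa (simplified)
    return ("di" + base + "iṣa",    # "di" stands in for prefixal reduplication
            f"want to {meaning}")

root = ("dhār", "hold")
caus = causative(root)              # ("dhāraya", "cause to hold")
desid = desiderative(caus)          # ("didhārayiṣa", "want to cause to hold")
print(desid)
```

The meaning of each derived stem is computed from the meaning of its base, not from a bracketed arrangement of morphemes.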

.. Paradigms

In view of their commitment to the centrality of lexemes, inferential theories of inflection tend to ascribe theoretical importance to inflectional paradigms: the definition of a language’s inflectional morphology is thus equated with the definition of its inflectional paradigms, from which word forms can be assumed to be inserted into terminal nodes in syntax. That is, inferential theories generally acknowledge morphological relations in two dimensions. In the syntagmatic dimension, a complex word form w in the paradigm of lexeme L is related to a more basic stem of L through a series of operations introducing w’s inflectional exponents; in the paradigmatic dimension, a complex word form in the paradigm of L systematically contrasts with other complex word forms in L’s paradigm. Thus, in the inflection of Latin PARĀRE, the word form parābātur ‘it was being prepared’ is systematically related to the stem parā- through a series of operations, as in (); but it is likewise systematically related to other, contrasting word forms in the paradigm of PARĀRE, since its form and content are directly deducible from theirs, as in ().

()  stem parā- → parābā- → parābāt- → parābātur
()  ⟨parābat, {3rd singular imperfect indicative active}⟩ implies
    ⟨parābātur, {3rd singular imperfect indicative passive}⟩
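The implicative relation just illustrated (active parābat implying passive parābātur) can be sketched as a function over whole word forms (a Python toy; the lengthening rule is an illustrative simplification covering the cited first-conjugation forms only):

```python
# Toy paradigmatic implication: the 3sg passive is deduced from the
# corresponding 3sg active form, whatever that form's internal
# morphology happens to be.

LONG = {"a": "ā", "e": "ē", "i": "ī"}   # short -> long vowel

def passive_from_active_3sg(active_form: str) -> str:
    """Map a 3sg active form in -t to the corresponding passive in -tur."""
    assert active_form.endswith("t")
    stem = active_form[:-1]
    # the vowel before -t surfaces long in the passive (parābat -> parābātur)
    stem = stem[:-1] + LONG.get(stem[-1], stem[-1])
    return stem + "tur"

print(passive_from_active_3sg("parābat"))  # parābātur
print(passive_from_active_3sg("amat"))     # amātur ('s/he is loved')
```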

The notion that a language’s inflectional morphology is defined in terms of paradigms is appealing, because many significant generalizations about inflection are generalizations about paradigms. Let the paradigm of a lexeme L be a set of cells, each the pairing ⟨L, σ⟩ of L with a morphosyntactic property set σ. Suppose, hypothetically, that there is a significant generalization relating the realization of ⟨L, σ₁⟩ to that of ⟨L, σ₂⟩ (contrasting cells in the paradigm of a single lexeme L) or relating the realization of ⟨L₁, σ₁⟩ to that of ⟨L₂, σ₂⟩ (contrasting cells in the paradigms of two distinct lexemes L₁, L₂); suppose, in addition, that this cannot be plausibly seen as a generalization about a single rule or exponent, but is instead a generalization about whole word forms that holds true whatever their specific morphology might happen to be. In that case, the generalization is, irreducibly, a generalization about paradigms. Real inflectional systems exhibit many instantiations of these hypothetical states of affairs.

Consider the phenomenon of syncretism. In Sanskrit, the nominative form of a neuter nominal is, in all three numbers, invariably identical to the corresponding accusative form; the paradigm of the neuter a-stem noun PHALA ‘fruit’ in Table . illustrates. This example of syncretism is directional, since in the singular, it is clearly the realization of the nominative cell that is dependent on that of the corresponding accusative cell: the nominative/accusative singular suffix ‑m in the paradigm of PHALA serves only as an accusative singular suffix in the paradigms of masculine a‑stem nouns such as AŚVA ‘horse’. Not all instances of syncretism are directional, however. A Sanskrit nominal’s instrumental, dative, and ablative case forms are invariably syncretized in the dual (see again Table .) and there is no good basis for regarding any of these three forms as dependent on any other.
The same is true of the syncretism of the genitive and locative dual and that of the dative and ablative plural. These are facts about full word forms belonging to the same paradigm, facts which hold true independently of the particular morphology that these forms exhibit. Syncretism is like deponency in the sense that both involve forms that are alike serving functions that are not alike. In the case of syncretism, these are identical forms realizing different cells in the same paradigm; by contrast, saying that a lexeme exhibits deponency is a generalization over morphologically similar word forms realizing different properties in different paradigms.

Table .. Declension of two a-stem nouns in Sanskrit  ‘fruit’ (neuter) Sg Nom Voc Acc Inst Dat Abl Gen Loc

phalam phala phalam phalena phalāya phalāt phalasya phale

Du

Pl

phale

phalāni phalais

phalābhyām phalayos

phalebhyas phalānām phales: u

 ́ ‘horse’ (masculine) Sg as ́vas as ́va as ́vam as ́vena as ́vāya as ́vāt as ́vasya as ́ve

Du as ́vau as ́vābhyām as ́vayos

Pl as ́vās as ́vān as ́vais as ́vebhyas as ́vānām as ́ves: u
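Directional syncretism of the sort just described is often modeled by a rule of referral; a minimal Python sketch (the suffix fragment and feature handling are illustrative assumptions, not a full account of the a-stem declension):

```python
# Toy rule of referral: neuter nominative cells are realized by
# referring to the corresponding accusative cell, so the identity holds
# of whole word forms regardless of their morphology.

SUFFIX = {("nom", "sg"): "s", ("acc", "sg"): "m"}  # tiny a-stem fragment

def decline(stem: str, case: str, num: str, gender: str) -> str:
    if case == "nom" and gender == "neut":
        return decline(stem, "acc", num, gender)   # rule of referral
    return stem + SUFFIX.get((case, num), "")

print(decline("phala", "nom", "sg", "neut"))  # phalam (via the accusative)
print(decline("aśva",  "nom", "sg", "masc"))  # aśvas  (no referral)
print(decline("aśva",  "acc", "sg", "masc"))  # aśvam
```

The dependency is asymmetric: the accusative cell is realized directly, and the neuter nominative inherits whatever realization it gets.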


Suppletion is likewise a property of paradigms. Ordinarily, the word forms in a lexeme’s paradigm are based either on a single stem or on a set of alternating stems whose phonological differences conform to a general pattern of alternation. In some instances, however, one or more stems in a lexeme L’s paradigm differ phonologically from L’s remaining stems in ways that are completely idiosyncratic; the relation among such stems is one of suppletion. In Latin, the verbal lexeme FERRE ‘carry’ has fer- as its default stem but tul- as its perfect stem and lāt- as its supine and past passive participial stem; the partial paradigm in Table . exemplifies this suppletive relation. The fact that the word forms in Table . realize the same lexeme is a fact about the constitution of a paradigm.

Table .. Indicative inflection of Latin FERRE ‘carry’

Active
       Present   Imperfect   Future    Perfect    Pluperfect   Future perfect
1sg    ferō      ferēbam     feram     tulī       tuleram      tulerō
2sg    fers      ferēbās     ferēs     tulistī    tulerās      tuleris
3sg    fert      ferēbat     feret     tulit      tulerat      tulerit
1pl    ferimus   ferēbāmus   ferēmus   tulimus    tulerāmus    tulerimus
2pl    fertis    ferēbātis   ferētis   tulistis   tulerātis    tuleritis
3pl    ferunt    ferēbant    ferent    tulērunt   tulerant     tulerint

Passive
       Present   Imperfect   Future    Perfect      Pluperfect    Future perfect
1sg    feror     ferēbar     ferar     lātus sum    lātus eram    lātus erō
2sg    ferris    ferēbāris   ferēris   lātus es     lātus erās    lātus eris
3sg    fertur    ferēbātur   ferētur   lātus est    lātus erat    lātus erit
1pl    ferimur   ferēbāmur   ferēmur   lātī sumus   lātī erāmus   lātī erimus
2pl    feriminī  ferēbāminī  ferēminī  lātī estis   lātī erātis   lātī eritis
3pl    feruntur  ferēbantur  ferentur  lātī sunt    lātī erant    lātī erunt

The paradigm-based conception of inflection raises some fundamental questions for morphological theory. In pedagogical descriptions of a language’s inflection, it is not unusual to find periphrastic expressions included as part of a lexeme’s inflectional paradigm. In Latin, perfective passive forms are periphrastic (Table .), but it seems natural to include these passive forms as part of the verb’s paradigm, since imperfective passive and perfective active forms (Tables . and .) are all expressed synthetically.

Table .. Perfective present indicative forms of Latin PARĀRE ‘prepare’

       Active       Passive
1sg    parāvī       parātus sum
2sg    parāvistī    parātus es
3sg    parāvit      parātus est
1pl    parāvimus    parātī sumus
2pl    parāvistis   parātī estis
3pl    parāvērunt   parātī sunt

The question therefore arises whether the range of expressions defined by a language’s morphology is not limited to syntactic atoms and their parts, but also includes some syntactically complex expressions. If the province of morphology is extended in this way, then criteria must be found for distinguishing periphrases (syntactic combinations that are in some way defined or determined by the morphology) from simple, nonperiphrastic combinations defined by the syntax. A related set of issues concerns the status of clitics in grammatical theory. Clitic groups such as John’s are in a sense the opposite of periphrases: whereas periphrases appear to be syntactic combinations that are defined by a language’s morphology, clitic groups appear to be morphological combinations the distribution of whose parts is defined by a language’s syntax.

Another important issue raised by the postulation of paradigms concerns the variety of mismatches that exist between the word forms constituting a lexeme’s paradigm and their content. As we have seen, deponency and syncretism are mismatches of this sort: in a deponent paradigm, an inflectional pattern “lays aside” its expected content to express a contrasting content; syncretized members of a paradigm are alike in form but distinct in content. A particularly interesting class of mismatches involves what Aronoff () has called morphomes—properties, categories, and patterns to which a language’s morphology is systematically sensitive but to which all other grammatical components are blind. Consider, for example, the inflection of verbs in Hua (Trans-New-Guinea), described by Haiman (b). Hua verbs inflect for a dozen different moods; for each mood, they exhibit a special inventory of subject-agreement suffixes. Table . shows the interrogative forms of the verbal lexeme HU ‘do’.
Among these forms, ‑ve is the default agreement suffix, appearing wherever it is not overridden by the more narrowly distributed suffix ‑pe. In the dual, ‑ve is preceded by the glottal stop (here represented as ‑’); in the second person singular and the first person plural, ‑ve is overridden by ‑pe. What is the significance of ‑pe? It does not seem to have a single, uniform interpretation; instead, it expresses second-person singular subject agreement in some forms and first-person plural subject agreement in others. This is not simply a case of accidental homophony: second-person singular subject agreement and first-person plural subject agreement are expressed alike in all twelve moods, as Table . shows. These facts suggest that second-person singular forms and first-person plural forms share a morphomic property P that is realized by the suffixes on the bottom row of Table ..

Table .. Interrogative forms of the Hua verb HU ‘do’

Arranged by morphosyntactic content:
       singular   dual      plural
1st    hu-ve      hu-’-ve   hu-pe
2nd    ha-pe      ha-’-ve   ha-ve
3rd    hi-ve      ha-’-ve   ha-ve

Arranged morphomically:
       default    dual      P
1st    hu-ve      hu-’-ve   hu-pe
2nd    ha-ve      ha-’-ve   ha-pe
3rd    hi-ve      ha-’-ve   (none)

Table .. Hua mood/agreement suffixes

Moods: Indicative, Interrogative, Relative, Purposive, Concessive-expectant, Inconsequential, Subordinate, Exclamatory, Apodosis, Protasis, Counterfactual, Assertive, Medial Coordinate

default   -e    -ve    -ma’   -mi’   -va   -mana   -ga   -ma   -mane   -mae   -hipana   -hine
dual      -’e   -’ve   -’ma’  -’mi’  -’va  -’mana  -’ga  -’ma  -’mane  -’mae  -’hipana  -’hine
P         -ne   -pe    -pa’   -pi’   -pa   -pana   -na   -pa   -pane   -pae   -sipana   -sine

Source: Haiman (1980b: 48).

Because it is morphomic, P has no syntactic or semantic significance; it is visible only to the morphology of Hua. Such mismatches between form and content are widespread in inflectional morphology and are especially compelling evidence for the hypothesis that morphology is an autonomous grammatical subsystem.

Stump (, b, a) and Stewart and Stump () argue for a conception of inflection that distinguishes a lexeme L’s content paradigm (whose cells are pairings of L with morphosyntactic property sets with which L is associated in syntax and which contribute to its semantic interpretation) from a stem X’s form paradigm (whose cells are pairings of X with property sets for which a morphological realization is defined). Canonically, each content cell ⟨L, σ⟩ in a lexeme L’s content paradigm shares the realization assigned to the form cell ⟨X, σ⟩ in the form paradigm of L’s stem X. Very often, however, this canonical relation is overridden:

• In instances of syncretism, two content cells correspond to the same form cell, whose realization they both share; in Sanskrit, the cells ⟨PHALA, {nom sg}⟩ and ⟨PHALA, {acc sg}⟩ in the content paradigm of PHALA ‘fruit’ correspond to the same form cell ⟨phala, {acc sg}⟩, whose realization phalam they both share.

• In instances of suppletion, cells in a single content paradigm correspond to cells in distinct form paradigms; in Latin, the cells in the content paradigm of FERRE ‘carry’ correspond to cells in three distinct form paradigms—those of fer-, tul-, and lāt-.

• In instances of deponency, the property set σ of a content cell ⟨L, σ⟩ is distinct from the property set τ of the form cell ⟨X, τ⟩ whose realization it shares; in Latin, the active cell ⟨CŌNĀRĪ, {singular present indicative active}⟩ in the content paradigm of CŌNĀRĪ ‘try’ corresponds to the passive form cell ⟨cōnā, {singular present indicative passive}⟩, whose morphologically passive realization it therefore shares.

• Where content cell ⟨L, σ⟩ corresponds to form cell ⟨X, τ⟩, τ may contain a morphomic property that determines the realization of ⟨X, τ⟩ and hence also of ⟨L, σ⟩; in Hua, the cells ⟨HU, {2nd singular interrogative}⟩ and ⟨HU, {1st plural interrogative}⟩ in the content paradigm of HU ‘do’ correspond to form cells whose property sets contain the morphomic property P, which is realized by the interrogative suffix ‑pe.

Spencer and Stump () discuss another kind of mismatch between content paradigms and form paradigms in Hungarian. See also Round () and O’Neill (a, b) for similar proposals about distinguishing a word form’s morphosyntactic content from the morphomic properties by which that content acquires an inflectional realization.

Summarizing, lexical theories hypothesize that a language’s grammar defines hierarchically organized morphological structures for inflected word forms, and that these structures conform to independently motivated syntactic principles. Inferential theories tend to reject this hypothesis in favor of the amorphousness hypothesis () and to assume that the definition of a language’s inflectional morphology makes essential reference to a paradigmatic dimension of structure that has no direct analogue outside of morphology.⁹
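The content-cell/form-cell correspondences just listed can be sketched for the Hua fragment (a Python toy; the person-indexed stem table and rule details are simplifying assumptions, so the demo is restricted to the forms cited above):

```python
# Toy paradigm linkage: content cells carry morphosyntactic properties;
# the linkage maps 2sg and 1pl to the purely morphomic property P,
# which the realization rules spell out as -pe (default -ve).

def to_form_property(person: int, number: str):
    """Map agreement properties to a form-cell property (P is morphomic)."""
    if (person, number) in {(2, "sg"), (1, "pl")}:
        return "P"
    return (person, number)

STEMS = {1: "hu", 2: "ha", 3: "hi"}   # simplified: person-indexed stems only

def interrogative(person: int, number: str) -> str:
    prop = to_form_property(person, number)
    suffix = "pe" if prop == "P" else "ve"   # -pe realizes P; -ve is default
    glottal = "-'" if number == "du" else ""
    return f"{STEMS[person]}{glottal}-{suffix}"

print(interrogative(1, "sg"))  # hu-ve
print(interrogative(2, "sg"))  # ha-pe  (via morphomic P)
print(interrogative(1, "pl"))  # hu-pe  (via morphomic P)
print(interrogative(1, "du"))  # hu-'-ve
```

P exists only inside the mapping from content to form; nothing in the morphosyntactic representation refers to it.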

. What is the relation between concatenative and nonconcatenative inflection?

A rather stark difference between lexical theories of inflection and inferential theories resides in their conception of the distinction between concatenative and nonconcatenative inflection. To see this, consider first some characteristics of morphological operations. Logically, a concatenative morphological operation concatenates two or more morphological units, while a nonconcatenative morphological operation effects some modification of a morphological unit’s form; but things are not always as clear-cut as this terminological distinction implies, since a concatenative morphological operation may have nonconcatenative correlates (as in opaque + ‑ity → opacity; bláckbird, *blackbírd).

Concatenative morphological operations include prefixation, suffixation, and infixation. The third of these is manifested in a variety of ways; in particular, the place in which an infix

⁹ Nothing excludes a theory in which word forms are seen as (i) hierarchical combinations of morphemes but are, at the same time, (ii) organized into paradigms. Occam’s Razor, however, militates against such a theory, since it is unclear that any evidence necessitates both (i) and (ii).


is positioned within the stem with which it combines may be identifiable according to any of a range of criteria (Yu ). A final concatenative process is compounding, the combination of two stems to produce a compound stem. Like infixation, compounding takes many forms: a given compounding operation may require a stem to appear in a special conjunct form (Franco-Prussian), it may require a stem to be devoid of any inflectional marking (pantleg/*pantsleg) or not (arms negotiator/?arm negotiator), it may require the combining stems to appear in a particular order (e.g. the head-final order of dog house and house dog), and so on.

Nonconcatenative morphological operations include apophony (vowel and consonant gradation), reduplication, suprasegmental modification (including tonal and accentual modification), metathesis, and subtraction; all such operations have the effect of increasing allomorphy—that is, they create alternations between morphological units that have undergone the operation and morphological units that have not. Apophonic alternations may be qualitative (sing ~ sang ~ sung) or quantitative (Sanskrit KARTṚ ‘maker’: accusative singular kartār-am, vocative singular kartar, instrumental singular kartr-ā). Reduplicative morphological operations vary widely in their characteristics: the reduplicative formative may duplicate an entire stem or merely a part of it; it may be subject to specific prosodic requirements; and its reduplicated segments may or may not be exact copies of their models. (For detailed discussion, see Inkelas and Zoll .)

Theories of inflectional morphology offer divergent accounts of the relation between concatenative and nonconcatenative inflection.
In lexical theories, a qualitative distinction is drawn between the two: lexically listed morphemes carry content and may cause fundamentally contentless nonconcatenative morphological operations to modify the shapes of morphemes in particular combinations; that is, contentless nonconcatenative morphology is seen as being triggered by contentive concatenative morphology. This is not obviously right, since some nonconcatenative morphological operations seem not to be associated with particular morphemes; whereas the ablaut alternation of tell → tol‑d might be claimed to be activated by the ‑(e)d suffix, the ablaut alternation of sing → sang is not associated with any overt affix. Such facts have led some proponents of lexical theories to postulate zero affixes that share the capacity of overt affixes to activate particular nonconcatenative operations (Halle and Marantz ). Such zero affixes are, of course, an artifact of the assumption that concatenative and nonconcatenative inflection have different roles in a language’s grammar.

The alternative is to regard morphological operations effecting affixation as having the same status as morphological operations effecting nonconcatenative modifications: on this view, the operation of /ɪ/ → /æ/ ablaut competes with the operation of ‑ed suffixation in realizing the property ‘past tense’, overriding it as the more narrowly applicable of the two operations. (By contrast, the operation of /ɛ/ → /o/ ablaut does not compete with ‑ed suffixation, but instead supplies the past-tense stem of TELL to which ‑ed suffixation applies.) Thus, while lexical theories of inflectional morphology accord distinct grammatical status to concatenative morphology and nonconcatenative operations, inferential theories accord them the same status, and therefore predict that concatenative operations and nonconcatenative operations should be able to do the same kind of work in a language, and indeed to enter into competition with one another.
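The equal-status view can be sketched by listing realization operations, concatenative and nonconcatenative alike, in order of narrowness (a Python toy; the two-member operation list and stem table are illustrative assumptions):

```python
# Toy equal-status model: ablaut and suffixation are both realization
# operations. The tell -> tol- stem rule feeds the default suffix rule,
# while the sing -> sang ablaut rule competes with (and, being
# narrower, overrides) the default suffix rule.

PAST_STEM = {"tell": "tol"}   # stem rule standing in for /ɛ/ -> /o/ ablaut

PAST_OPS = [  # (applicability, operation), narrowest first
    (lambda lex: lex == "sing", lambda lex, stem: "sang"),
    (lambda lex: True,          lambda lex, stem: stem + ("d" if lex in PAST_STEM else "ed")),
]

def past_form(lexeme: str) -> str:
    stem = PAST_STEM.get(lexeme, lexeme)   # stem rules apply before realization
    for applies, op in PAST_OPS:
        if applies(lexeme):
            return op(lexeme, stem)

print(past_form("tell"))  # told
print(past_form("sing"))  # sang
print(past_form("walk"))  # walked
```

Nothing in the model distinguishes a "zero affix" from the mere non-application of an operation, as the inferential view predicts.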
OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi

There is a great deal of evidence that this is so. For example, some Arabic nouns have concatenative “sound” plurals (ḥayawān ‘animal’, plural ḥayawān-āt), while others have nonconcatenative “broken” plurals, one of whose patterns is exemplified in Table ..

Table .. Broken plurals of five Arabic nouns

  Consonantism   Singular pattern        Plural C1iC2āC3
  k-l-b          C1aC2C3   (kalb)        kilāb    ‘dog’
  j-m-l          C1aC2aC3  (jamal)       jimāl    ‘camel’
  ẓ-l-l          C1iC2C3   (ẓill)        ẓilāl    ‘shadow’
  r-m-ḥ          C1uC2C3   (rumḥ)        rimāḥ    ‘spear’
  r-j-l          C1aC2uC3  (rajul)       rijāl    ‘man’

And on the natural assumption that ‑ed is the default past-tense suffix in English (cf. drinked as an overregularization or speech error), one could argue that it competes with, and as a default is overridden by, the apophonic expression of past tense in drank.

Lexical theories and inferential theories entail rather different accounts of a range of morphological phenomena. Lexical theories attribute template-like affix sequences either to the nesting of functional categories headed by those affixes in a hierarchical representation or to constraints on the linearization of those affixes; inferential theories instead impose an order of application on the rules introducing inflectional affixes. Accordingly, instances of variable affix ordering such as the freely varying order of Chintang verbal prefixes exemplified in () are also treated differently: in lexical theories, this variability must be seen as the effect of either variable nesting of functional categories or unconstrained affix linearization; in inferential theories, it reflects the unconstrained ordering of the rules introducing prefixal inflections.

In lexical theories, two affixes that are in paradigmatic opposition compete for insertion into the same node in a hierarchical representation; in inferential theories, the rules introducing those affixes compete for the same position in the sequence of rule applications defining a word form’s inflectional morphology. A key difference relates to zero affixes: lexical theories distinguish between the presence of a zero affix and the absence of any affix; in inferential theories, by contrast, morphological representations are never distinguished in this way.

()

Free variation in the order of verbal prefixes in Chintang [Sino-Tibetan; Nepal] (Bickel et al. )
[NB: ns = nonsingular, A = actor, P = primary object.]

u-kha-ma-cop-yokt-e.
u-ma-kha-cop-yokt-e.
kha-u-ma-cop-yokt-e.      A-P-NEG-see-NEG-PST
ma-u-kha-cop-yokt-e.      ‘They didn’t see us.’
kha-ma-u-cop-yokt-e.
ma-kha-u-cop-yokt-e.
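On the inferential view, the six attested orders follow if the three prefix-introducing realization rules are left mutually unordered. A minimal Python sketch of that idea, in which the rule inventory and function names are hypothetical illustrations rather than an analysis of Chintang:

```python
from itertools import permutations

# Hypothetical sketch: three realization rules, each prefixing one exponent
# (actor agreement u-, object agreement kha-, negation ma-). Leaving their
# relative order unconstrained yields all six attested surface orders.
PREFIX_RULES = ["u", "kha", "ma"]

def realize(stem, rule_order):
    """Apply the prefixal rules innermost-first, in the given order."""
    form = stem
    for prefix in rule_order:
        form = prefix + "-" + form
    return form

stem = "cop-yokt-e"
forms = {realize(stem, order) for order in permutations(PREFIX_RULES)}
for f in sorted(forms):
    print(f)
```

Each of the six possible rule orderings yields a distinct surface form, which is exactly the free variation shown in the example.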


Summarizing, lexical theories privilege concatenative morphology as the core of a language’s inflectional morphology, hypothesizing that an inflected word form’s content is determined by its component morphemes (including, where necessary, zero morphemes) and that nonconcatenative morphology is contentless and secondary, in the sense that it is triggered by concatenative morphology. Inferential theories, by contrast, deny any significant difference between concatenative and nonconcatenative inflection, attributing both to operations by which complex inflected words are inferred from more basic stems.
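The competition described above between default ‑ed suffixation and a narrower apophonic operation can be given a toy implementation in which concatenative and nonconcatenative operations have the same status and the most narrowly applicable one wins. This is a sketch under simplifying assumptions: the rule inventory, the lexeme lists, and the orthographic stand-in for the /ɪ/ to /æ/ ablaut are illustrative only.

```python
# Toy inferential-realizational sketch: suffixation and ablaut are
# operations of the same kind, competing to realize 'past tense';
# the most narrowly applicable operation overrides the default.

def ed_suffixation(stem):
    return stem + "ed"

def i_to_a_ablaut(stem):            # orthographic proxy for /ɪ/ -> /æ/
    return stem.replace("i", "a", 1)

# Each rule: (set of lexemes it is restricted to, or None for the default),
# listed narrowest-first.
PAST_RULES = [
    ({"sing", "drink"}, i_to_a_ablaut),   # narrow: named lexemes only
    (None, ed_suffixation),               # default -ed suffixation
]

def past(stem):
    applicable = [op for domain, op in PAST_RULES
                  if domain is None or stem in domain]
    return applicable[0](stem)            # narrowest applicable rule wins

print(past("walk"))   # walked  (default applies)
print(past("drink"))  # drank   (ablaut overrides the default)
```

The design choice mirrors the text: nothing distinguishes the ablaut rule from the suffixation rule except its narrower domain, so no zero affix is needed to trigger it.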

. How are a word form’s morphosyntactic properties related to its exponents?

..................................................................................................................................
In lexical theories of inflection, the relation between a word form’s morphosyntactic properties and its (concatenative) inflectional exponents is specified in the individual exponents’ lexical entries. In inferential theories, the relation between a word form’s morphosyntactic properties and its (concatenative or nonconcatenative) inflectional exponents is specified by rules—rules that introduce properties and exponents concomitantly (in incremental theories) or rules that specify a word form’s exponents as an expression of the property set with which it is associated (in realizational theories).

Two kinds of realization rules are distinguished in inferential-realizational theories of inflection. A rule of exponence associates an individual exponent with a particular morphosyntactic content; thus, rules of exponence relate stems to complex word forms in the syntagmatic dimension, as in (). By contrast, an implicative rule relates the form and content of one complex word form to those of one or more other complex word forms, as in (). A distinction is sometimes drawn (Blevins ; O’Neill ) between ‘constructive’ theories of inflection (which rely on rules of exponence) and ‘abstractive’ theories of inflection (which rely on implicative rules), but this is an artificial distinction, since a theory could of course incorporate both rules of exponence and implicative rules; indeed, rules of exponence sit side by side with rules of referral in many theories (Zwicky c; Stump ; Brown and Hippisley ), and rules of referral are one kind of implicative rule.
Moreover, Stump and Finkel  argue that an implicative definition of an inflectional system can always be seen as a theorem of an exponence-based definition; that is, from a complete and accurate definition of the exponence relations exhibited by an inflectional system’s word forms, ordinary rules of logical deduction determine a complete and accurate definition of the implicative relations among those word forms. Ultimately, the notion that implicative rules obviate the need to postulate rules of exponence is a non sequitur, since implicative relations also exist among rules of exponence (or equivalently, among the exponents that they define), and these are not always reducible to implicative relations among full word forms. One can profitably argue about whether a given inflectional pattern is more insightfully described by rules of exponence or by implicative rules,


but the notion that one must dispense with one class of rules in favor of the other fails to take account of all of the dimensions of inflectional systematicity.

An important issue concerning the relation between a word form’s morphosyntactic properties and its inflectional exponents concerns the ordering of those exponents. Are there universal principles of affix order, or is the ordering of affixes an accident of each language’s history? The claim that the order of inflectional affixes is fully determined by semantic criteria may be true of individual languages or language families (cf. Rice ), but it is not universally true. The order of subject- and object-agreement inflections is variable in Chintang but is fixed in many other languages (as e.g. in Swahili); cf. (), (). Moreover, one pattern of affix ordering may be rigidly adhered to in one language, while its opposite is rigidly maintained in another; compare Latin amā-ba-m [love--] ‘I loved’ with Welsh Romany kamá-v-as [love--] ‘I loved’. Still, the diachronic morphologization of certain recurring syntactic collocations statistically favors certain kinds of orders (Bybee : ff.).

Summarizing, theories of inflectional morphology vary widely in their conception of the association between a word form’s morphosyntactic properties and its exponents. In lexical theories, these associations are lexically listed; in inferential theories, they are specified by rules for the definition of complex word forms. In incremental theories, complex word forms acquire their morphosyntactic properties as a concomitant of acquiring their exponents; in realizational theories, a complex word form’s association with a particular property set logically precedes its inflectional realization.

. What distinguishes inflection from word formation?

..................................................................................................................................
Traditionally, inflectional morphology is distinguished from word formation (derivation and compounding; see Lieber (Chapter  this volume) concerning special issues relating to derivation). Logically, inflection defines word forms realizing a lexeme’s association with particular morphosyntactic property sets, while word formation defines complex lexemes as arising from other, simpler lexemes. But questions arise about whether a given piece of morphology is inflectional or derivational; for instance, should nicely be seen as realizing the association of the adjectival lexeme NICE with the property set {manner adverb}, or should it be seen as realizing an adverbial lexeme NICELY derived from NICE?

A number of criteria are traditionally invoked to justify the classification of a particular piece of morphology as inflectional or derivational, but not all of these criteria are probative (Stump , d; Booij ). The most reliable distinction between inflection and derivation rests on another distinction, namely that between morphosyntactic properties (which are imposed on word forms in certain syntactic contexts) and lexemic relations that are fundamentally independent of syntax. For instance, a noun in the syntactic context Those ___ are angry must be associated with the morphosyntactic property ‘plural’, but the ‘agent noun’ relation is simply a lexicosemantic relation between


lexemes such as BAKE and BAKER. There is no syntactic context10 that imposes the property ‘agent noun’; that is, any syntactic context in which baker can be used (e.g. a ___ of bread) likewise allows non-agent nouns (loaf, lack, history, connoisseur) to be used.

Morphological theories vary with respect to the status they accord to the distinction between inflection and derivation. Some have proposed that inflection and derivation occupy a continuum, arguing that grammatical theory draws no sharp distinction between them (Lieber : ; Di Sciullo and Williams : ff.; Bochner : ff.). Others have proposed that derivation and inflection occupy different places in the architecture of grammar, the former being defined over the lexicon (independently of syntax) and the latter as post-syntactic operations on sentence structures (Perlmutter ). This so-called Split Morphology Hypothesis elevates the peripherality of inflection relative to derivation to the status of a theoretical principle; but as has been widely observed, inflection does not invariably follow derivation (Bochner ; Rice ; Booij ; Stump , ).

In realizational theories of inflection, a possible hypothesis is that derivation differs from inflection because it is not realizational; that is, one might assume that while inflectional morphology realizes paradigm cells, derivation builds up complex lexemes in an incremental fashion, adding lexemic properties such as ‘agent noun’ as a concomitant of adding the morphology realizing those properties. This is not, however, a necessary assumption: one might instead assume that alongside its inflectional paradigm, a lexeme has a derivational paradigm whose forms are also defined realizationally (Beard ; Bauer ; Booij ; Stump : –).
Thus, one might assume that the verbal lexeme PRODUCE has the paradigms in Table .: an inflectional paradigm whose cells have word forms as their realizations and a derivational paradigm whose cells have lexemes as their realizations (each of which in turn has its own paradigms). This approach to derivation presents a number of challenges, because derivational paradigms exhibit more defectiveness and more semantic idiosyncrasy than ordinary inflectional paradigms. At the same time, this approach affords a natural account of the blocking

Table .. Paradigms of the verbal lexeme  Cell

Realization

Inflectional paradigm

⟨, {inf/sbjv/default pres ind}⟩ ⟨, { sg pres ind}⟩ ⟨, {past/irrealis/past ptcp}⟩ ⟨, {pres ptcp}⟩

produce produces produced producing

Derivational paradigm

⟨, agent noun⟩ ⟨, patient noun⟩ ⟨, action noun⟩ ⟨, agentive adjective⟩ ⟨, potential adjective⟩

    

10 This does not mean that no syntactic context imposes the thematic role of ‘agent’. A context such as [DP the N] purposely broke the law imposes that thematic role on DP; but the N node does not have to be occupied by a derived agent noun.


relations observed among derived forms; for instance, it allows the blocking of *STEALER by THIEF to be seen as the pre-emption of a derivational paradigm cell by THIEF.

Whether or not this approach to derivation proves viable, it is true that various phenomena tend to obscure the boundary between inflection and word formation. One fact about this boundary is crucially important to morphological theory: inflection and word formation are not classes of morphological markings, but rather different uses of morphological markings. The significance of this fact is that, in principle, the same morphological resources may be pressed into the service of either inflection or word formation. In Breton, the singulative suffix ‑enn serves to derive count nouns with individuated reference: derivatives of this sort may arise from mass nouns or from count nouns or adjectives with a related meaning (Table ..). But plural collective nouns also generally have singulative counterparts in ‑enn, as in Table ..

Table .. Uses of -enn to derive count nouns with individuated reference in Breton a. From mass nouns douar geot kafe kolo

‘earth’ ‘grass’ ‘coffee’ ‘straw’

douarenn geotenn kafeenn koloenn

‘parcel of land; terrier’ ‘blade of grass’ ‘coffee bean’ ‘piece of straw’

b. From count nouns boutez c’hoant lagad prezeg

‘shoe’ ‘wish’ ‘eye’ ‘preaching’

botezenn c’hoantenn lagadenn prezegenn

‘a kick’ ‘birthmark’ ‘eyelet’ ‘sermon’

c. From adjectives bas koant lous uhel

‘shallow’ ‘pretty’ ‘dirty’ ‘high’

basenn koantenn lousenn uhelenn

‘shoal’ ‘pretty girl’ ‘slovenly women’ ‘elevated ground’

Table .. Singulatives of plural collective nouns in Breton Collective noun buzhug gwez kol logod sivi

‘earthworms’ ‘trees’ ‘cabbages’ ‘mice’ ‘strawberries’

Singulative noun buzhugenn gwezenn kolenn logodenn sivienn


In Breton syntax, the distribution of a plural collective and its singulative (e.g. buzhug ‘earthworms’, singulative buzhugenn ‘earthworm’) is indistinguishable from the distribution of an ordinary plural and its singular (e.g. paotred ‘boys’, singular paotr); that is, stems in ‑enn serve in the singular inflection of plural collective nouns. Thus, it does not make sense to classify ‑enn as either derivational or inflectional: in some instances, it marks derivatives, while in other instances, it marks the singular stem in the inflection of a plural collective noun. This suggests that the operation of ‑enn suffixation is independent of the inflection/derivation distinction but may be invoked both in the definition of a lexeme’s inflectional paradigm and in the definition of a particular kind of derivative.
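That independence can be made concrete in a small sketch in which one and the same ‑enn operation is invoked both by an inflectional rule (the singular of a plural collective) and by a derivational rule (a singulative count noun from a mass noun). The function names are hypothetical, and the sketch deliberately abstracts away from the stem alternations seen in the tables (e.g. boutez ~ botezenn).

```python
# One morphological operation, two uses: the same -enn suffixation is
# invoked by an inflectional rule and by a derivational rule.

def enn_suffixation(stem):
    return stem + "enn"

def inflect_singular(collective_stem):
    """Inflectional use: realize the singular cell of a plural collective."""
    return enn_suffixation(collective_stem)

def derive_singulative(mass_stem):
    """Derivational use: define a count noun with individuated reference."""
    return enn_suffixation(mass_stem)

print(inflect_singular("buzhug"))   # buzhugenn 'earthworm' (sg of 'earthworms')
print(derive_singulative("geot"))   # geotenn 'blade of grass' (from 'grass')
```

The point of the design is that nothing in the operation itself is marked as inflectional or derivational; only the rules that call it differ.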

. C: I   ,  ,    ?

..................................................................................................................................
Initially, this question might seem oxymoronic: given that inflectional morphology expresses a lexeme’s association with a morphosyntactic property set and that such associations arise in syntactic contexts, how could inflectional morphology be defined separately from syntax? But the fact that inflection expresses associations that arise syntactically only means that syntax and inflectional morphology share some of their theoretical vocabulary. Both are sensitive to properties such as ‘singular’ and ‘plural’ (appropriately called ‘morphosyntactic properties’). But the relation between a word form’s morphosyntactic properties and its inflectional exponents may, of course, be conditioned by purely morphological categories to which syntax, semantics, and phonology are simply insensitive.

Latin AGRICOLA ‘farmer’, DOMINUS ‘master’, and DUX ‘chief’ are all masculine, but the ablative singular of AGRICOLA is formed through the suffixation of ‑ā, that of DOMINUS through the suffixation of ‑ō, and that of DUX through the suffixation of ‑e; this difference is a consequence of the fact that AGRICOLA belongs to the first declension, DOMINUS to the second, and DUX to the third—a fact that has no implications whatever for the syntax, semantics, or phonology of these nouns. Similarly, some Sanskrit adjectives have both strong and weak stems (e.g. BHÁGAVANT ‘blessed’: strong stem bhágavant-, weak stem bhágavat-), and the distinction between those word forms based on the strong stem (e.g. masculine accusative singular bhágavant-am, masculine nominative plural bhágavant-as) and those based on the weak stem (e.g. masculine accusative plural bhágavat-as) has no direct correlate anywhere in the domains of syntax, semantics, and phonology.
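The way a purely morphological class feature conditions exponence can be sketched as a two-step lookup: a stem's declension class, invisible to syntax, semantics, and phonology, selects the ablative-singular suffix. The stem forms and data structures below are illustrative simplifications of the Latin examples just given.

```python
# Purely morphological conditioning: the ablative-singular exponent is
# selected by declension class, a category with no syntactic, semantic,
# or phonological correlate.

DECLENSION = {"agricol": 1, "domin": 2, "duc": 3}   # stem -> class
ABL_SG_SUFFIX = {1: "ā", 2: "ō", 3: "e"}            # class -> exponent

def ablative_singular(stem):
    return stem + ABL_SG_SUFFIX[DECLENSION[stem]]

print(ablative_singular("agricol"))  # agricolā
print(ablative_singular("domin"))    # dominō
print(ablative_singular("duc"))      # duce
```

Nothing outside the morphology consults the class feature: it does its work entirely inside the stem-to-exponent mapping.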
The fact that a language’s inflectional allomorphy may be conditioned by such purely morphological categories as ‘first declension’ or ‘strong stem’ is prima facie evidence against syntactocentric theories of inflection: if a language’s inflectional morphology were defined by its syntax, one would not be led to expect word-internal syntax to be conditioned by categories to which words’ external syntax is blind. Still, a proponent of syntactocentric inflectional morphology might argue that the formal devices necessary for the definition of a language’s syntax are equally necessary for the


definition of its morphology. One hypothesis is that inflectional affixes are syntactic constituents whose distribution is determined by principles of phrase structure and operations of movement or insertion. According to one familiar analysis, the sentence Did John leave? has an underlying representation (a), which then undergoes head movement (to produce (b)) and do-support (to produce (c)); a late spelling-out rule then converts do + -ed to did. ()

a. [CP [C ø] [IP John [I′ [I ‑ed] [VP leave]]]]
b. [CP [C [I ‑ed]] [IP John [I′ [I t] [VP leave]]]]
c. [CP [C [I do + ‑ed]] [IP John [I′ [I t] [VP leave]]]]

This analysis presumes a formative interface between syntax and morphology (Zwicky ); according to this presumption, a language’s syntax and morphology are defined over the same set of formatives, whether free or bound. The alternative is a feature interface, in which a language’s syntax and morphology are defined over a shared inventory of grammatical features; on this view, morphological formatives are introduced by morphological rules, independently of the syntactic rules that determine the distribution of the grammatical features that they realize. Of the two, only a feature interface is compatible with the lexical integrity hypothesis ().

Certain phenomena might be claimed to require a formative interface. Coordinate structures exhibiting right-node raising are generally ungrammatical unless the morphosyntactic properties of the “raised” node are compatible with both conjuncts: thus, () is ungrammatical because neither *have be nor *might been is possible. But if the “raised” node can be reconciled with both conjuncts as an effect of syncretism, then feature conflict is in some instances tolerated; for example, the syncretism of RUN’s past participial and infinitive forms makes () acceptable.

() *I already have, and John certainly might, been/be nominated.

() I already have, and John certainly might, run for office.

But the acceptability of () does not necessarily reflect the sensitivity of syntax to morphological syncretism: it may instead reflect a condition that allows conflicts with respect to certain morphosyntactic properties (e.g. ‘past participle’ vs ‘infinitive’) to be resolved by phonological ambiguity (Pullum and Zwicky ), in which case it does not strictly refer to morphology at all. Other claimed instances of syntactic sensitivity to words’ morphology are similarly inconclusive. While one can easily imagine what unmistakable counterevidence to the lexical integrity hypothesis would look like, real instances are conspicuously lacking.

. Further reading

Amorphousness: Anderson (), Janda ()
Clitics: Anderson (), Spencer and Luís (a)
Content vs form: Matthews (), Stewart and Stump (), Stump (a)
Default inflection: Albright and Hayes (), Brown and Hippisley ()
Defectiveness: Baerman, Brown, and Corbett (eds) ()
Deponency: Baerman, Corbett, Brown, and Hippisley (eds) ()
Lexical integrity: Bresnan and Mchombo ()
Morphemes: Matthews (), Blevins ()
Morphomes: Aronoff (), Round (), Stump (b)
Paradigms: Stump (), (c)
Periphrasis: Börjars, Vincent, and Chapman (), Sadler and Spencer (), Ackerman and Stump (), Ackerman, Stump, and Webelhuth (), Spencer (a), Chumakina and Corbett (eds) ()
Properties of inflection: Booij ()
Suppletion: Veselinova ()
Syncretism: Stump (a), Baerman, Brown, and Corbett ()

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi

PART II
........................................................................................................................

MORPHOLOGICAL THEORIES
........................................................................................................................

CHAPTER
......................................................................................................................

STRUCTURALISM
......................................................................................................................

. Background

..................................................................................................................................
The formative portion of modern linguistic activity and theorizing is often, for convenience’s sake, brought together under the term Structuralism, a term which hides at least as much as it reveals. The time period in question clearly includes the s through the s, but with sources stretching back earlier (e.g. Boas ; Bloomfield a; Sapir  []), and effects evident to the present day, although in the implicit (i.e. rarely directly credited) role of accepted basic procedures, such as phonemic analysis.

To be positive first about the use of the term Structuralism, the work of those scholars who launched and refined what has become the autonomous discipline of linguistics (as distinct from allied and older disciplines like anthropology and philology) embraces the notion that languages fundamentally work as systems, as complex mechanisms, in ways that vary in complexity (cf. de Saussure  []: §., ). In that spirit, the constituent units and patterns that researchers identified in the languages they studied were described in relative independence from those (mainly major (Indo-)European) languages which were already known and established in scholarly discourse, in order to bring out the systemic coherence of each given language.

A more problematic use of the term Structuralism, however, consists in the false impression of unanimity that arises in any attempt to summarize or condense the important and groundbreaking work of the period.1 The diversity of opinion and method is all the greater if it is considered that the same term was used to refer to parallel contemporary schools and developments in both Europe and the United States during the mid-twentieth century.
The resulting flattened picture of what was a lively, if geographically and politically fragmented, debate is exacerbated in the context of a modern bias in linguistics education and publishing today. This bias may all too often result in only passing or relatively

1 The broader development of Structuralist linguistics is both fascinating and useful for understanding what came later, both inside Linguistics and in relations with its neighboring disciplines. For intellectual sources and critical history, the reader is referred to Matthews (, ), Andresen (), Hymes and Fought (), Vachek (), Lepschy (), among others.


uncontroversial contextual references to Structuralist scholars and insights, which has the effect of marginalizing their work as basic, and thus superseded, and/or passé. Given their position in the history of the enterprise, it should not be surprising that much of the terminology from this foundational era populates everyday linguistic discourse today, especially—though certainly not exclusively—in introductory-level courses, but in a way that can leave the coiners of these terms and methods a merely virtual presence.

It is not the case that linguistic scholars were operating in a vacuum in the early twentieth century. They were not obliged to invent all their own operational norms, for they had at least the existing models of historical, philological, and anthropological studies of language. The linguists who are counted among the Structuralist movement took on the larger view of creating a science of living language (Bloomfield ; Sapir ), one that had to be able to encompass any and all human languages, not just the culturally privileged languages, and the latter only in their canonical literary incarnations. The approach that grew up among these scholars was informed by all these existing models, but in fact replicated none of them. They set aside the primary importance of diachronic change to focus first on synchronic structural description, although most if not all of these scholars had trained in historical linguistics, and many had already made notable contributions in diachronic Indo-European studies.

The Structuralists strained at the conceptual and terminological distinctions established in the grammatical discourse of Latin and Ancient Greek studies. Despite the weight of tradition and the potential advantage to be gained from redeploying those ready-made descriptive tools, the conceptual framework did not comfortably match the patterns observed in languages found outside that domain, for example, American Indian languages.
In this connection, they adopted from anthropology, notably through the influence of Franz Boas (), an insistence on analyzing previously unrecorded or inconsistently recorded indigenous languages, describing each new addition to the language database as a system on its own terms (e.g. Whorf : ). This ran a risk of terminal relativism, in principle, but it was foremost an antidote to false universalism.2

Among European contributions to Structuralist linguistics is the work of Louis Hjelmslev and the Copenhagen School. Fully cognizant of the multidisciplinary history of language research, Hjelmslev sought to bracket out a uniquely linguistic system for scientific description that did not depend on the established methods and categories of the humanities and social sciences, wherein language was seen more as a means to other insights and not as an end in itself (Hjelmslev  []: –). Hjelmslev’s name for this line of research, in light of its revolutionary orientation, was deliberately unique: Glossematics. While risking a duplication of effort in “discovering” the already-known, Glossematics was committed to ascertaining language as a self-contained yet multi-dimensional system of elements, functions, and deducible interrelations among these. By releasing linguistic analysis from not only the concerns, but also the available explanations, of adjacent disciplines, Hjelmslev’s rather radical lingui-centrism was apparently a step too far for many tastes, even among avowed Structuralists.

From these intellectual sources, but with a newly conceived and animated mission—namely, that of describing living languages on empirical grounds, according to the observable practices of speakers and/or writers—Structuralist approaches to linguistic documentation and analysis developed as an explicitly scientific endeavor.

2 Regardless of the descriptivist ethic of taking each language “on its own terms”, however, traditional models for classical and certain modern European languages retained an entrenched privileged position. Although Structuralist techniques for description had solidified to the point that they could be applied correspondingly to some of the more familiar languages such as English (Fries ; Bloch ) and French (Trager ), the resistant reception that emanated from even convinced descriptivists bespoke a lingering double-standard as to which “lower-stakes” languages were appropriate subjects for such bottom-up scientific analysis (cf. Sapir  []: ).

.. Structuralism and morphology

The linguists in the United States who have come to be called American Structuralists3 by some (see Hymes and Fought : –, fn.  with respect to the issue of identifying and naming a “school” of American linguistics) focused on the concrete goals of their research, first and foremost the description of the structure of living, natural human languages. In this context they themselves tended broadly to refer to their approach first as descriptive linguistics, then as structural linguistics. The training that these scholars had experienced within the historical linguistic enterprise, when read against the grain, served to emphasize the significance that synchronic studies hold for the conduct of diachronic analysis: the latter depends on the former, but synchronic analyses may be performed in their own right, whether or not they are put to use in a diachronic investigation.

Much of the linguistic data gathered in this mode by American structural linguists was derived from field interviews, and indeed training for field research was a significant part of linguistic training in the first half of the twentieth century. Ethnographic and elicitation skills obtained by anthropologists were transferable to, and refined within, the purely linguistic context, and many field-collected grammars were published in the model sanctioned by Boas, outlined most specifically with regard to morphology in his introduction to the Handbook of American Indian Languages (: –). One of the key figures in American linguistics, Edward Sapir, contributed to volume  of the Handbook an extensive grammar of the Takelma language, in line with the model prescribed by Boas.

The debate concerning the identification and representation of the internal structure and composition of words is highlighted in the contrasting approaches of Sapir and Leonard Bloomfield.
These two linguists were responsible for much of the early direction of what became the discipline of linguistics in the United States. Sapir ( []) and Bloomfield () each wrote an acknowledged classic textbook surveying the field, and not coincidentally, both books are entitled Language. In Sapir’s text, he speaks of the range of  (morphological)  that a particular language makes use of ( []: –) and the distinct array of   that may be associated with one or another process ( []: –). In this way, Sapir recommends the study of “linguistic form . . . as types of patterning, apart from the associated functions” ( []: ), and thus the picture is one of independent dimensions of the signification relationship, 3

3 As a means of capturing the historical significance of this group of scholars in the history of American linguistics, witness the presence of the following individuals on the roster of presidents of the Linguistic Society of America (LSA): Franz Boas [], Edward Sapir [], Leonard Bloomfield [], Charles C. Fries [], Bernard Bloch [], Zellig Harris [], Roman Jakobson [], George L. Trager [], Charles Hockett [], and Eugene A. Nida [].

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi




rather than the more popular (because concrete) unitary sign-like morpheme. Sapir also develops detailed theoretical work concerning competing morphological typologies, using conceptual and processual criteria for characterizing language types ( []: –). In his discussion of morphology in Language, however, Bloomfield pays scant attention to schemes proposing morphological language types (: –), and instead proceeds directly to the isolation of free and bound forms.4 In couching his discussion of morphological analysis in terms of immediate constituency, arrangements, and constructions, he introduces a hierarchical dimension above and beyond the linear sequencing of meaningful units (morphemes). Bloomfield does not shy away, however, from aspects of morphology that do not fit neatly into the model of uniform, segmentable, meaningful units: he accounts for shape alternations of morphemes by deriving each from a uniform basic alternant (ideally, but not always, an actually occurring shape of the morpheme in question), using an adapted phonological rule system (: –). Where morphological content is rendered without recourse to any identifiable affix, including segment-level substitution (man → men), subtractive morphemes (French platte [plat] ‘flat (f.)’ → plat [pla] ‘flat (m.)’), and zero-alternants (put → put (past)), Bloomfield speaks of such processes as “difficulties” for the method of analysis into immediate constituents, and proposes limited generalizations keyed to lexical classes or named morphemes, with the logical extreme of direct stipulation of unpredictable behavior in the lexicon (see §..).
Zellig Harris produced the Structuralist procedural manual (Methods in) Structural Linguistics (), which takes as a fundamental assumption that all grammatical terms and statements are to be defined relative to the criterion of relative distribution (: ), using substitution techniques that identify the phonological elements, discern the distributional relationships among them, and then repeat the cycle, mutatis mutandis, at the morphological level, broadly conceived (: ). The result of this painstaking procedure is a highly formalized, detailed statement of the grammar of the language under consideration (see §.). Harris proposed an approach to morpheme alternants (), an alternative to the additive-only morpheme, such that adding phonemes, “adding” zero, dropping phonemes, or altering existing phonemes at the subsegmental level could all have morphological function. Any of these morpheme alternants is to be recognized in comparison to other alternants, allowing, for example, the “placement” of a zero-alternant that would be assumed to pattern with one or more particular overt morpheme alternants (: ). Alternants found to be in complementary distribution could be grouped into single morpheme units. The intention of the proposed revisions to the morphemic method is to include atypical, but undeniably morphological, elements such as reduplication (: , ), up to and including partial suppletion (: ). In light of his subsequent work, Harris would want to accomplish at least the discovery of alternants without reference to associated meanings (§..), but in the () article, shared meaning among alternants of the same morpheme unit is presumed. Harris separately proposed an analysis of agreement markers as belonging to so-called discontinuous morphemes (), morphemes that appear in sets, distributed over syntactically linked words. Beginning with gender and

4 Bloomfield’s chapter  is entitled “Morphologic Types” (: –), but the discussion is entirely about methods and patterns of compounding and derivation, together with the morphemic status of the forms involved, not at all about language typology as discussed by Sapir.


number agreement on the determiner and/or adjective dependents of nouns in cases where the identical phoneme string appears on each (: ), he proceeds to the less straightforward (but still regular) formal mismatching that comes with distinct declension patterns (: ). From this perspective, alternant-sets with similar or different constituent markers and, orthogonally, with more or fewer instances in the set (depending on the complexity of the syntactic structure in which the agreement holds) are analyzed as being distributed in complementary environments, and as such, they can legitimately belong to single morpheme units, given a constant grammatical meaning (: –).

Charles Hockett’s major contributions to Structuralist morphological theory include a survey of phenomena that challenge the segmentable, unitary morpheme (), in which he offers theoretical refinements to better incorporate the existence of morphophonemic alternation (: ), empty morphs and portmanteau morphs (: –), as well as zero morphs (: ). Hockett also published ‘Two Models of Grammatical Description’ (), which contrasts the dominant model in Structuralist morphemics, the so-called Item and Arrangement (IA) model, with an apparently more dynamic approach patterned more after suggestions in Sapir ( []), the Item and Process (IP) model. The IA model not only had the advantage in number of contemporary users; it was also more fully formalized, owing to its use of the principle of immediate constituency. Rather than take this accidental fact of history as destiny, however, Hockett () attempts to develop a formalism in IP terms that could place it on an equal footing with IA. This work of revealing actual, rather than fortuitous, distinctions between theoretical frameworks deserves recognition.
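The grouping of alternants in complementary distribution into single morpheme units, central to Harris's method as described above, can be sketched as a toy procedure. Everything below is illustrative and not from the Structuralist literature: the greedy merging strategy, the encoding of environments as final stem sounds, and the simplified English plural data are invented stand-ins for Harris's actual discovery procedure.

```python
# A toy sketch of the Structuralist grouping step: alternants whose
# attested environments never overlap (complementary distribution)
# are candidates for membership in a single morpheme unit.

def complementary(envs_a, envs_b):
    """True if two sets of environments never overlap."""
    return not (set(envs_a) & set(envs_b))

def group_alternants(distribution):
    """Greedily merge alternants into candidate morpheme units.

    distribution maps each alternant to the set of environments
    in which it is attested; returns sorted alternant groups.
    """
    units = []  # each entry: [set of alternants, union of environments]
    for alt, envs in distribution.items():
        for unit in units:
            if complementary(unit[1], envs):
                unit[0].add(alt)
                unit[1].update(envs)
                break
        else:
            units.append([{alt}, set(envs)])
    return sorted(sorted(alts) for alts, _ in units)

# English plural allomorphs, with environments standing in for the
# final sound of the stem (a crude proxy for the real conditioning).
plural = {
    "-s":  {"p", "t", "k"},       # cats, books
    "-z":  {"b", "d", "g", "V"},  # dogs, days
    "-ez": {"s", "z", "sh"},      # horses, bushes
}
print(group_alternants(plural))  # [['-ez', '-s', '-z']]: one unit
```

Two alternants that share even one environment remain in separate units, mirroring the requirement that grouping be licensed only by complementary distribution; the greedy order-dependence is a simplification of this sketch, not a claim about the method itself.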
Eugene Nida published his textbook Morphology (), offering theoretical and practical guidance in descriptive morphology, including substantial materials for the practice of field linguistics. In a  article, Nida took stock of contemporary morphological theorizing and called into question what he saw as a “rather disturbing” trend: the over-eagerness among analysts to set up zero morphs given any analogical pretext, and to deny morphemic status to overt “empty” morphs for which they could not conceive a statable meaning (: –; cf. Hjelmslev : ). Nida suggests that although it might provide a more manageable description for the linguist to fragment meanings into discrete morphemes, the resulting grammar would be fundamentally at odds with anything that might occur to the native user of the language (: –). Nida offers a thorough set of principles designed to decide whether a proposed set of alternants is to be legitimately grouped into a single morpheme unit, in consideration not only of distribution, but also of phonological and semantic similarity. The principles provide a check, for example, on remote underlying representations that permit too-easy lumping, or, on the other hand, on splitting among homophonous morphs with other than 100 percent identical meanings.

Looking more broadly, the separation experienced by the Structuralist linguistic communities in the United States and Europe was recognized, and in time, attempts were made at least to lessen the degree to which members of the distinct constituencies were “talking past each other” (see Fries : –). Roman Jakobson stands out among the linguists of the period in question for his trans-Atlantic presence in both the Prague Circle and American linguistic practice. The brief attention he receives here is to highlight the degree to which his morphological work stands in contrast to the distribution-focused descriptivist practice.
A significant contribution of Jakobson’s in this area is the extension of markedness theory from phonology to morphology. In phonology, markedness had


correlated with the presence of a particular feature, and unmarkedness with its absence. In morphology, by contrast, unmarkedness functions more like underspecification, in that some positive assertion is being made about the marked member (a particular gender specification, for example), whereas the corresponding value for the unmarked member that opposes it is unspecified for the value in question (Jakobson : –; Vachek : –; Andrews : ). This pertains more to the grammatical or lexical meaning of morphological elements than to their formal constitution, but it does indeed bear on their relative distribution (overlapping, rather than complementary).

Vachek’s (: –) survey of morphological contributions from the Prague Circle is admittedly thin beyond the level of  (: –), and this relative lack of attention is in part attributed to the attitudes of the Circle’s founder, V. Mathesius, who is said to have held morphological analysis to be of little relevance (: ). Further contributions that Vachek singles out for special attention include the typological theory of V. Skalička, who refers to the familiar distinction between agglutinative and inflectional languages, and a scheme of “morphological exponents” put forward by B. Trnka, ranging in size from (quasi-)phonological segmental substitutions with morphological value to compound words. Both these projects resonate with some of the above-mentioned typological investigations presented in Sapir’s Language, and it is perhaps in this regard that Sapir’s particular sensibility and orientation can be noted, as distinct from those of the other American Structuralists.

Both Sapir and Bloomfield portrayed linguistics as an explicitly scientific pursuit, applicable ultimately to all languages, and as such, both men sought to leverage the analytic precision that had emerged in diachronic study into a method for analyzing and describing synchronic structures.
The synchronic method launched under the Structuralist banner started with phonemics (see Sapir ), and indeed the phonemic method is a significant and enduring achievement. The pivot from inherently meaningless units of sound to the analysis of morphological structure and systems was based on the analogical attempt to map allomorphs into morphemes as allophones had been mapped into phonemes. The complications added in the shift of levels provoked much of the discussion that follows, and are echoed in decisions and methods in nearly every succeeding chapter in the present part of this Handbook.

. The architecture of grammar

..................................................................................................................................

Perhaps the best guide to the sort of hierarchical architecture that the Structuralists assumed is the organizing structure informing the respective texts of Bloomfield () and Harris (). It is important to note that the purpose of these models was the design of a linguist’s descriptive grammar, not a psychological model of language use. Bloomfield brackets off the domain of “pure” phonetics for study by physical scientists, and the study of semantics as lying beyond “our knowledge of the world we live in” (: –). Between these ends of meaningless form and formless meaning, Bloomfield posits the following layers or levels of linguistic structure:


1. Phonemics, segmental and suprasegmental, in language-specific primary and secondary uses.
2. Morphemics, bound and free, prototypically segmentable and linearly contiguous, but also available as alternations/substitutions, suppletion, subtraction (minus-features), and zero-alternants.
3. Syntax, involving those “constructions in which none of the immediate constituents is a bound form” (: ), defined schematically with reference to functional positions.

Between (1) and (2) is space for the linguist’s activity of positing morphophonemic structure, abstract sound structure that allows for more compact characterization of alternations linking morphologically related forms (§..). Between (2) and (3) are “border-line cases . . . of compound words and phrase-words” (: ), categories that form words out of more than one free element (§..).

Harris’s () schema for grammatical description, fully enumerated, comprises the following lists of units, alternants, and environments (–):

1. Segment-Phoneme List
2. Phoneme Distribution List
3. Automatic Morphophonemic List
4. Non-automatic Morphophonemic List
5. Classified Morpheme List
6. Morpheme Sequence List
7. Component and Construction List
8. Sentence List.5

Harris considered that the macro-list above would stand as an adequate synchronic grammar of a language, at least at the purely structural level.

.. Where is morphology: autonomous component or not?

The distribution-based method of unit and pattern discovery underlies the Structuralist approach to describing the different levels of linguistic structure. The levels have their own units, patterns, and independent workings, and are therefore in at least that sense autonomous, but they operate in ways that inevitably have an impact on other levels. Bloomfield (: ) and Bloch and Trager (: ) identify morphology and syntax as the twin components of grammar, but the published discussion of morphology as opposed to syntax during the Structuralist period shows a marked imbalance in favor of morphological description and theorizing, as evidenced, for example, by the proportion of each of the books mentioned in §.. (see chapter ranges listed in §., ‘Further reading’).

5 Of course, the Sentence List is not a(n infinite) compilation of concrete sentences, but rather of the basic utterance structures, defined in terms of position classes and constructions, from which all other sentence-structures are built.


. Basic units

..................................................................................................................................

Hockett’s () article captured the fundamental theoretical split between a morphological method that performed operations on basic elements in a way that resembled phonological processes (Item-and-Process) and a morphological method that built complexes out of discrete units (Item-and-Arrangement), consigning the working out of sound-structural alterations to separate mechanisms. The struggle over the ultimate direction of Structuralist morphology was evident in the literature. While the classic textbooks of Bloomfield, Sapir, and Harris are important resources for morphology within a broader Structuralist overview, there is a series of articles, published primarily in the journal Language and collected (among many other articles of the period) in Joos (), that serves collectively to demonstrate the push and pull of specifically morphological theorizing in the United States during the s and s. In this dialogue there is constant reference to the guidance laid down in Bloomfield (), but in order to translate the foundational concepts into a reliable analytical method, these linguists interrogate the definitions, and the revisions proposed thereto, both logically and empirically.

.. The representation of morphological processes

There exist several catalogues of morphological processes in the Structuralist literature, featuring most prominently the process of affixation. Since affixes are concrete additions to a base or stem, they have a particular prototypical status in morphology, and so they represent the sort of item that one tends to cite when asked for an example of a morpheme (English re-, ‑ness). A language with any morphology to speak of is predicted to have at least some affixes, with other processes being possible but not obligatory; furthermore, the choice from among the other process types is not predictable.

The question becomes what to do with the non-affixal processes. There are debates about whether morphemes must be made up of contiguous phonemes, whether phoneme substitutions or stress-shifts are legitimate as morphemes, and whether meaning change without phonemic change is evidence of a zero-affix (and if so, where?). Reasonable people continue to disagree on these and other similar issues (see, e.g. Stump, Chapter  this volume). The insistence among Structuralists on a distribution-only method is in fact most amenable to slot-and-filler discourse: “Any sentence, phrase, or complex word can be described as consisting of such-and-such morphemes in such-and-such an order” (Bloch : –). Structuralists worked continuously on the matter of determining the units of word structure, some seeking to include as morphemes any process that could signal a particular lexical or grammatical meaning in the same way that an affix might, while others with a greater tolerance for abstraction (e.g. Bloch) set up as general practice segmentable units that matched one-to-one with distributional differences.

.. The basic units of morphological analysis

For Boas, “[s]ince all speech is intended to serve for the communication of ideas, the natural unit of expression is the sentence” (: ). Further subdivisions constitute artificial


interventions, and their products, such as words, are not basic elements but rather epiphenomenal. Of course, once the constructed nature of the subunits is made clear, it is nevertheless the job of the linguist to perform the investigation in which conventional phonetic groupings and their internal and external patternings may be described. In introducing some of the phenomena that are particular to American Indian languages, especially to an audience more familiar with Indo-European languages and traditional grammatical categories, Boas calls into question the very concept of the word (: ), down to the level of ascertaining which portions of a given phonetic string are indeed independent and which subordinate to other units.

In leading up to a discussion of the distinction between stem and affix, Boas alludes to parts of sentences that convey the material contents of a sentence (e.g. the subject and predicate) and those that serve a modifying function with respect to those contents (e.g. adverbials). The analogy to word structure likens the stem of a word to the “material content” and affixes to modifiers, but in Algonquian, for example, he suggests that the apparent affixes on verb stems are so numerous “that it would be entirely arbitrary to designate the one group as words modified by the other group, or vice versa” (: ).

Hjelmslev ( []: ) likewise prescribes starting with large and unanalyzed linguistic texts as wholes, together with a recursive identification procedure ( []: –) that must operate in parallel to bring out all units of a definable sort (i.e. substitution or commutation classes), and the dependencies of different types that map between elements of form and elements of substance. The implications of this procedure are admittedly large, but they depend on the empirical discoveries of the analyst rather than the application of a priori categories that may not have validity for any particular language.
Without question, this agnosticism can be disorienting, but it does indeed follow from Hjelmslev’s fundamental separation of form and meaning, taken over from de Saussure. Rather than morphemes and words being inherent units with signifier and signified as aspects, the Glossematic approach to morphology is form-oriented. In a very clear instance of this thinking, Hjelmslev (: ) offers the example of the French imperfect indicative form, which has two distinct uses at the level of meaning, the past imperfect and the present counterfactual (“irréal” supposé), neither use predictable from the other on semantic grounds, and so neither justified to stand as “basic”, with the other as “derived” from it. The semiological link between the form and each use (substance) is a distinct function in Hjelmslev’s terminology, and any apparent connection between the two uses is a deducible relation to be registered in an exhaustive description of French. Practically an epiphenomenon, such a relation is certainly not the cause of anything in the synchronic grammar.6 In contrast, an analysis using the classical morpheme-unit would engender, as has been noted above, a number of descriptive difficulties that are avoided on this account.

Bloomfield () emphasizes the word, but since he is working with the “principle of immediate constituents”, he cannot leave words unanalyzed: the morpheme is his stock-in-trade. His definition of morpheme, “a linguistic form which bears no partial phonetic-semantic resemblance to any other form” (), is generally accepted as basic in the Structuralist morphological literature, although there is sporadic inquiry as to the exact interpretation of its constituent notions. A linguistic form is a cover term for meaningful units both larger and smaller than the word, and among morphemes, both free and bound. The notion of assessing

6 This form-only grouping is akin to Matthews’s () parasitic or Priscianic formation, or Aronoff’s () morphomic level.


semantic resemblance in a framework that eschews, at least rhetorically, the direct study of semantics is potentially problematic (§..). Nevertheless, it is Bloomfield’s contention that “every complex form is made up entirely of morphemes” (: ) and that “any morpheme can be fully described (apart from its meaning) as a set of one or more phonemes in a certain arrangement” (: , parenthetical in original). The latter principle would seem to rule out zero-morphs, but Bloomfield traces the “apparently artificial but in practice eminently serviceable” structural zero to the Hindu grammarians (: ), and uses it as needed in morphological analyses from that point on. In Bloch, the potential abuse of covert elements is checked by the following condition: “One of the alternants of a given morpheme may be zero; but no morpheme has zero as its only alternant” (: ). Thus, one may posit a zero-plural on sheep because plurality is usually overtly marked on English nouns, but one is not necessarily licensed to create by further analogy an always-zero singular morpheme.
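Bloch's condition on zero alternants amounts to a simple well-formedness check, and can be sketched in a few lines. The encoding is my own illustration, not Bloch's: the zero alternant is represented as an empty string, and the data are the familiar English plural facts.

```python
# A sketch of Bloch's constraint on zero alternants: a morpheme may
# include zero among its alternants, but never as its only alternant.

ZERO = ""  # the zero alternant, modeled here as the empty string

def bloch_licit(alternants):
    """True if a proposed alternant set respects Bloch's condition."""
    if ZERO not in alternants:
        return True                # no zero involved: nothing to check
    return any(a != ZERO for a in alternants)  # zero needs overt company

# The English plural: overt -s/-z/-ez plus a zero alternant (sheep).
print(bloch_licit(["-s", "-z", "-ez", ZERO]))  # True

# A hypothetical always-zero singular morpheme is ruled out.
print(bloch_licit([ZERO]))  # False
```

The check captures exactly the asymmetry in the text: the zero-plural on sheep is licensed by the overt plural alternants, while an always-zero singular morpheme has no overt company and fails.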

. T -  

..................................................................................................................................

Visible within the typical presentation mode of Structuralist theorizing is the traditional division between word-formation (including both derivation and compounding) and inflection, but these terms are not put forward as assumed, self-evident, and distinct components. The distinction between those grammatical categories that must be expressed and those that may be expressed is offered as a dividing-line by Boas (: ), but the specific set of grammatical categories that pertain to one side as opposed to the other is to be described on a language-by-language basis, as a result of the process of formal description and distributional analysis, rather than on universal grounds. Bloomfield () states that the inflection/word-formation distinction is characteristic of many languages, but that “this distinction cannot always be carried out” (–).

.. The treatment of word-formation

When addressing word-formation more generally, Bloomfield proposes the following categorization of words (: ), based on their immediate constituents:

A. Secondary words, containing free forms
1. Compound words, containing more than one free form (e.g. sidewalk, patchwork)
2. Derived secondary words, containing one free form (derivational walker, patchy, as well as inflectional glasses, landed)
B. Primary words, not containing a free form
1. Derived primary words, containing more than one bound form (e.g. di-gress, pro-gress, trans-gress, but *gress; see also Italian cas-a/cas-e ‘house/houses’, but *cas)
2. Morpheme-words, consisting of a single (free) morpheme (e.g. walk, patch).7

7 The intention of “containing a free form” here must imply additional content, i.e. at least one more (bound) morpheme. Thus, words in class B.2 do not “contain” free morphemes but rather “consist of” exactly one free morpheme. The phrasing is less than ideal, perhaps, but it is not inconsistent.
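This categorization can be rendered as a small decision procedure over the freedom of a word's immediate constituents. The encoding below, with words given as lists of (form, is_free) pairs, is an invented illustration of the scheme, not an implementation from the Structuralist literature.

```python
# A sketch of Bloomfield-style word classes, computed from whether a
# word's immediate constituents are free or bound forms.

def classify(constituents):
    """Assign a word class to a list of (form, is_free) constituents."""
    free = sum(1 for _, is_free in constituents if is_free)
    bound = len(constituents) - free
    if len(constituents) == 1 and free == 1:
        return "morpheme-word"            # walk, patch
    if free >= 2:
        return "compound word"            # sidewalk, patchwork
    if free == 1 and bound >= 1:
        return "derived secondary word"   # walker, glasses
    return "derived primary word"         # di-gress, trans-gress

print(classify([("side", True), ("walk", True)]))   # compound word
print(classify([("walk", True), ("er", False)]))    # derived secondary word
print(classify([("di", False), ("gress", False)]))  # derived primary word
print(classify([("patch", True)]))                  # morpheme-word
```

Because classification runs over immediate constituents only, the sketch also reproduces the point made in the text that follows: a word like gentlemanly, whose last-added constituent pair is a free form plus a suffix, comes out as a secondary derivative despite its compound pedigree.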


The order of morpheme addition determines the classification, by virtue of the strategy of immediate constituency: doorknobs and gentlemanly, although compound in their respective pedigrees, as it were, would be classed as (de-compound) secondary derivatives in view of the last-added suffixes (: ). Bloomfield places  (or so-called -) in a sub-class of secondary derivatives along with the results of stem-changing operations such as geese, rode, and stood, calling the lot secondary morpheme-words (: –). He maintains that the ‘zero’ is distributed as a feature, not as a morpheme per se, bound or otherwise, although he does allude to a zero-alternant of certain “normal” affixes (: ), which might easily give rise to the more concrete affixal interpretation within the broader IA analytical discourse.

To describe a compound, Bloch and Trager (: ) require the identification of the lexical class of the whole compound and of each of its component parts, in order to evaluate whether one of the components acts as the head of the compound, and what grammatical relationship holds between the parts. Taking further inspiration from the Hindu grammarians, Bloomfield (: –) articulates a taxonomy of compounds keyed to Sanskrit models of endocentric (headed) compounding, including copulative (actor-director {A and B}; Sanskrit dvandva) and determinative (night school {A modifies B} or {B modifies A}; Skt. tatpuruṣa) types, and also exocentric (headless) compounding (graybeard ‘old man’ {individual having/characterized by ‘AB’}; Skt. bahuvrīhi). Compounds are cross-categorized by Bloomfield as syntactic or asyntactic, depending on whether the order of the constituent elements is like or unlike a possible order in which they could appear if they were in a garden-variety syntactic construction of the language in question.
In truly syntactic compounds, the main way to be sure of morphological rather than syntactic composition is the presence of the marks used by the language to distinguish compounds; in English, this is so-called compound stress, as in gréen hòuse.

.. The treatment of inflection

Bloomfield attempts to define inflection not with primary reference to grammatical (vs. lexical) categories, but rather by referring to “layers” of morphology: an inflectional marker “usually” imposes partial or total closure on a form, meaning that an inflected word “can figure as a constituent in no [larger] morphologic constructions or else only in certain inflectional constructions” (: ). His second attribute of inflection, as distinct from word-formation, is the parallel grouping of corresponding forms within word-classes, conventionally termed paradigms. Paradigmatic relations among forms allow not only for quick learning and the predictability of new or previously unknown forms on the basis of one given form; they also permit the representation of partially or fully suppletive forms as belonging with their basic forms in spite of phonological dissimilarity. Where a form is missing, Bloomfield suggests that the linguist create a suitable hypothetical form for descriptive purposes. In a logical but controversial move, Bloomfield endorses the position that the inflectional paradigm for a class should be defined with respect to the most highly differentiated example, with the corollary assumption that simpler paradigms in the class exemplify systematic, perhaps even massive, homophony (: ).

Harris’s () discussion of discontinuous morphemes and the alternants thereof is focused on the shapes and distribution of agreement markers, and thus on inflection, but


the interest is in the behavior itself, namely that the “occurrences . . . always appear together” whether or not the corresponding inflections are phonemically identical, so that it could simplify the description if the discontinuous morpheme were counted as a single unit, distributed within a defined syntactic construction (: ). This method seems not to have been taken up by many linguists, but it is a forerunner of a range of feature-percolation, feature-sharing, and copying maneuvers in use today.

. Interfaces

..................................................................................................................................

Interface theorizing as such is a relatively late phenomenon, and so one is obliged to search for relevant traces in the literature that surface while linguists were engaged in making other points. Harris wrote two articles, ‘From Morpheme to Utterance’ () and ‘From Phoneme to Morpheme’ (), one on either side of his major textbook. In the former, it is Harris’s contention that structure may be identified via repeated application of the substitution technique, in which first single morphemes, and then morpheme sequences, are substituted into syntactic frames in order to determine which position classes exist in a given language, and which morphemes and sequences are structurally equivalent. By working progressively upward in size, questions of the internal versus external structure of constituents are handled as a by-product of the method, and the result is a picture of the morphological and syntactic structure. In light of his further work, and of questions raised in the years leading up to the publication of the  article, it became necessary to account for the knowledge of which morphemes existed in the language, a point merely assumed as the starting point of the first article (see §..). Whether this progression entails or avoids the issue of interfaces in grammar is open to interpretation.
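The frame-substitution idea can be sketched as a toy procedure: items attested in the same set of syntactic frames fall into one position class. The miniature frames and vocabulary below are invented, and a real application would require the recursive, size-increasing cycle over morpheme sequences that Harris describes, not just single items.

```python
# A toy sketch of frame substitution: group items into position
# classes by the set of frames each is attested to fill.

def position_classes(attested):
    """Group items by their frame signature.

    attested: list of (item, frame) pairs.
    Returns sorted lists of items sharing an identical frame set.
    """
    signatures = {}
    for item, frame in attested:
        signatures.setdefault(item, set()).add(frame)
    classes = {}
    for item, frames in signatures.items():
        classes.setdefault(frozenset(frames), []).append(item)
    return sorted(sorted(members) for members in classes.values())

# Invented miniature attestation data: '__' marks the open slot.
attested = [
    ("dog", "the __ runs"), ("cat", "the __ runs"),
    ("dog", "a __ sleeps"), ("cat", "a __ sleeps"),
    ("runs", "the dog __"),
]
print(position_classes(attested))  # [['cat', 'dog'], ['runs']]
```

Here dog and cat share a frame signature and so fall into one position class, while runs, attested only in a different frame, forms its own; structural equivalence is read directly off the distribution, with no appeal to meaning.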

.. Morphology–lexicon interface

Sapir ( []: ) warns that “[t]he linguistic student should never make the mistake of identifying a language with its dictionary.” The encouragement is always to look beyond the size and etymological pedigree of the word list to the insights that the words in the lexicon provide, through their usage patterns, with respect to the structural tendencies of the language in question.

Bloomfield’s conceptualization of the constitution of the lexicon changes within the very pages of Language. At first the lexicon is comprehensive, and not limited to full words only: “The total stock of morphemes in a language is its lexicon” (: ). As the morphological discussion develops, however, the lexicon becomes less of a wondrous word-hoard and more of a cluttered closet: “The lexicon is an appendix to the grammar, a list of basic irregularities” (: ). Included in this patchwork of the unpredictable are presumably meanings, grammatical categories, and ultimately anything arbitrary. In this way, it is perhaps more populated than the definition implies at first, but the intention to concentrate on the regular and the general is clear. The borderline between words and phrases in terms of lexical listing is troubled by de-compound secondary derived words, for example, old-maidish, gentlemanly (Bloomfield

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi





: ), noted above, and also so-called -, for example, jack-in-the-pulpit, devil-may-care () which, although they follow existing syntactic patterns, are frozen and unmodifiable and therefore words in their own right, not phrases.

.. Morphology–phonology interface Trubetzkoy (: –) surveyed what he felt to be unscientific use of mor(pho)phonemes from antiquity through Indo-Europeanist theory. He takes the important step of teasing apart three distinct senses of the term (): (1) the study of the phonological structure of the morphemes; (2) the study of combinatory sound changes that take place in the morphemes in morpheme combinations; [and] (3) the study of sound alternation series that fulfill a morphological function (Trubetzkoy : ). The synchronic focus of this list is most appropriate for Structuralist use, and if one considers the segregation of units and patterns in a descriptive grammar, these three types of phenomena pertain to different domains of linguistic structure, proceeding from primarily phonological to primarily morphological. Trubetzkoy closes his remarks with a hopeful comment that a serious investment by the community of linguists in the study of mor(pho)phonology could improve the results of language typology, because “by reason of its central position in the grammatical system it is best qualified to furnish a comprehensive characterization of the peculiarities of each language” (: ). With his background in Sanskrit, Bloomfield describes sandhi processes, special sound patterns that correlate with morpheme and word boundaries (: ). His treatment includes contracted (enclitic) forms of auxiliary verbs, negation, and conjunctions in English, French liaison (consonant linking) with a following vowel-initial word, and the initial consonant mutations of the Celtic languages (: –). Each of these three phenomena has a distinct function in the grammar, and the conditioning environments are not easily comparable if considered in detail. They all do, however, crucially involve sound patterns at juncture points, and so fall under the purview of sandhi.

.. Morphology–syntax interface Once again, Harris’s () discussion of agreement in terms of discontinuous morphemes associated with syntactic constructions is relevant. The mechanism of association and/or distribution is not entirely clear, but it would seem to be most readily handled within the “Classified Morpheme List” as combinations of specified alternants and specified environments (Harris : , fn. ; ). Bloomfield’s brief discussion of clitics in French and English is focused primarily on the elements’ lack of phonological independence; questions of syntactic placement are not addressed (: ).




 

.. Morphology–semantics interface The issue of linguistic meaning in Structuralist linguistic analysis is contentious. Bloch and Trager () state plainly that “meanings are hard to define at best” () and that creating even approximate definitions “lies outside the scope of linguistic method . . . , which is concerned solely with the linguistic symbols themselves” (). In the interest of keeping the description of a language independent of hypothetical, mentalistic, and not directly observable phenomena, Bloomfield () sought to set aside considerations of meaning in the determination of linguistic units and their respective distributions. Far from denying the existence or relevance of meaning in the consideration of language as a whole, Bloomfield held that linguists were likely to import their own internalized, culturally specific, and perhaps even idiosyncratic meanings into the categorization of units in the language to be described (: –). A scientific linguistics demands empirical observation, and so Bloomfield chose to represent the general communicative exchange, for expository reasons, in terms of Behaviorist stimulus–response chains (: –). In disciplinary humility, and indeed in practicality, Bloomfield endorses the ban on linguists’ psychologizing the experience of speakers and hearers: “In the division of scientific labor, the linguist deals only with the speech signal. . . . The findings of the linguist . . . will be all the more valuable for the psychologist if they are not distorted by any prepossessions about psychology” (: ). Simple glosses of morphemes may be adduced for convenience, but ultimately any references to linguistic meaning are made as short-cuts, and not as crucial points in a proper analysis. 
What a morpheme is said to mean as far as descriptive practice is concerned is couched as an assumption, namely that particular linguistic forms are considered, within particular speech communities, as having (in some sense) “a constant and specific meaning” (: ). Bloomfield saw the statement of meanings as “the weak point in language study” (: ), and therefore recommended as a principle that “linguistic study must always start from the phonetic form and not the meaning” (: ). Harris () acknowledges that his strict distributional analysis may be seen as an alternative to a potentially more time-efficient breaking up of utterances on an intuitive, meaning basis (: , , fn. ), but he adds that the distribution-only method can provide decisions on problem cases such as the immediate constituency of flight with respect to flee + t rather than fly + t, given sight as connected to see, that is, cases for which “meaning considerations might not be decisive” (: , fn. ). If one method (distribution) is thought to be sufficient in itself, whereas an alternate method (meaning) requires recourse to the former method to decide problem cases, then the former is to be preferred on scientific grounds, no matter how onerous or tedious in practice are the “cumbersome but explicit procedures”, as Harris would have them (: ). Harris, like Bloomfield, laments the resistance of semantics to formal treatment, at least at the time of his writing, and so sets aside the analysis of meaning in language for a later time (: –, , fn. ; cf. Lenci : –). Returning to Bloomfield (), however, an occasionally cited but little-exploited area in his proposed linguistic structure did in fact address units of meaning. 
The meaning aspect of a morpheme (smallest unit of lexical form) is a sememe, a hypothetical “constant and definite unit of meaning, different from all other meanings, including all other sememes, in the language” (: ). The meaning associated with a tagmeme (smallest unit of grammatical form) is an episememe, that is, the significance conventionally associated with particular orders, intonational patterns, phonetic modifications of forms, or selection






of (co-)constituents from particular form-classes (e.g. morphosyntactic subclasses under government or agreement) (: –). In this way, a fully analyzed complex form can be said to correspond to a complex meaning based on sememic and episememic contributions of the constituent forms, but this all remains at the abstract and unformalized level, filling boxes in a typology of technical terminology. In all fairness, however, it is systematic, and it is more than Bloomfield had foreshadowed at the outset of his book. Nida, in his capacity as field linguist, was less pessimistic about the value of statements about linguistic meaning. In Nida (: ), he responds to those who say that linguistics was not yet ready to undertake semantic study with three distinct lines of argument: (1) meaning is effectively the “elephant in the room” for language study, and it is incumbent upon linguists to do something about it; (2) the very definition of morpheme as “a minimal unit of phonetic-semantic distinctiveness” obliges the structural analyst to consider semantics; and (3) the fact that communication works is prima facie evidence that there is something “there” to be described. In the interest of breaking through the silence as to linguistic meaning, Nida takes up and elaborates the semantic units raised in Bloomfield (), seeking in the concrete ethnological observations and informant work in the field-milieu the evidence needed to discern definitions of morphemes, and in turn, of complex words. The relevant distribution from Nida’s perspective is what he calls the  , comprising both the linguistic and non-linguistic contexts of use (: ; cf. Firth ). Nida’s full typology of terms is perhaps daunting, but the framework is a model of internal consistency. A seme is any minimal feature of meaning, to be related as allosemes under a sememe, on analogy with pairs like phone/phoneme and morph/morpheme.
A  is a seme derived from the linguistic context, while an  is a seme based on the ethnological context. The subordinate notions here are correspondingly  and , relatable to  and , respectively. These are then integrated with , the conventional meanings that grammatical constructions and configurations may bear (spawning  and ), and ultimately, given that a single linguistic form may be associated with an episeme and one or more linguisemes and/or ethnosemes, Nida posits the  (et cetera) to render such composites of the basic meaning units. While the proliferation of terms becomes unwieldy, perhaps, the semes offer coherent solutions to morphemic problems: since semes may be overtly or covertly expressed on the level of form, in cases where an aspect of meaning has no overt expression, one may speak of a covert seme without appealing to a zero or phonetically null morpheme. Furthermore, this approach allows for lexical semantics to be derived from the union of sememes, as opposed to something like the intersection, or common denominator, of sememes, a potential concept which may or may not in fact be uniquely identifiable, and which in any case is something less than the meaning of the word or morpheme (: ). Despite the attractively subtle, principled distinctions that such a framework might offer, it was the concrete appeal of the sign-like classical morpheme that won the day as the unit of choice in Structuralist morphology (and beyond), despite its empirical shortcomings.8 8 Martinet ( []: –) proposed a concept similar to the morpheme, but sought to bridge the issue of free versus bound distribution. His concept of  is a minimal sign, a basic unit such that each word is composed of one or more monemes. 
If one wishes to introduce the dimension of lexical listing, Martinet suggests that lexeme for roots/bases can oppose morpheme for affixes, but warns against the loaded use of semanteme for root elements, since it might be taken to imply that non-roots are without semantic value.




 

. O 

.................................................................................................................................. Historical and comparative linguistics clearly served as a foundation from which the Structuralist movement grew, and against which background it imagined a broader domain, of momentary “slices” across the flow of language change. The growth in formal study of sociolinguistics and psycholinguistics, however, is to a great degree subsequent to the height of Structuralist activity, but in light of this sequence, many of the assumptions concerning linguistic structure in these subfields were informed by the results of Structuralist inquiry. While computer-aided methods that permit the compilation and analysis of rich linguistic corpora may seem a distant shore in comparison to the limited technology available to descriptive linguists in the mid twentieth century, there was nevertheless a demonstrated interest in mathematical and statistical methods, and the starting points of both machine translation and probabilistic modeling were already in view. The present section surveys these affiliated threads in turn.

.. Variation within and across languages Bloomfield () addresses the issue of variation associated with social distinctions of region, social status, and gender, as well as language contact-induced alterations synchronically in his discussion of speech communities (–), and diachronically in connection with the wave theory of Indo-European sub-family contact effects (–) and also with the investigation of dialect geography (–). Sapir ( []) wondered at the array of morphological types of languages, especially in light of his multi-dimensional scheme that ranged from concrete to abstract on both the formal and conceptual levels (). The range of variability apparently seemed daunting: “We know in advance that it is impossible to set up a limited number of types that would do full justice to the peculiarities of the thousands of languages and dialects spoken on the surface of the earth” ( []: ). With that in mind, one might look to Joos’s () summary of the descriptive linguistic assumption that “languages could differ from each other without limit and in unpredictable ways” (). Joos attributes this (controversially; see Hymes and Fought : –) to Boas, but Sapir ( []) himself forestalls a slide into anything-goes chaos: “Does the difficulty of classification prove the uselessness of the task? I do not think so. It would be too easy to relieve ourselves of the burden of constructive thinking and to take the standpoint that each language has its unique history, therefore its unique structure” (). Description and comparison must proceed, and theorizing too, in dialogue with these. From the perspective of Harris (), interested in developing comprehensive descriptions of synchronic language states, variation was a “problem” to be acknowledged, but to be held at the margins by means of diacritic marks, keyed to footnote annotations (, fn. ). 
Segregation of wholly dialect utterances from a corpus would be legitimate for descriptive purposes, as would similar handling of utterances that show language-mixing (: ). Differences of style, however, were thought to be subject to a “great degree of structural






identity . . . within a dialect” and so were “generally disregarded” (: ), that is, not held out on a pre-emptive basis for special treatment or comment. In a sense, Hjelmslev ( []: –) explicitly idealized the object of description by fixing a -, a closed ‘text’ that was repeatedly and cyclically analyzed in the pursuit of language-specific  and logical/distributional  among said classes. By not importing a priori notions and categories, the so-called  structure of a given language-state might be exhaustively ascertained and appropriately described.9

.. Language change Although ostensibly and strategically branching away from the limitations and assumptions of the Neogrammarian framework, the Structuralist linguists did not negate the relation of the synchronic to the diachronic; the dependence of accurate diachronic work on high-quality synchronic descriptions was in fact emphasized as evidence in favor of the descriptivist movement (Bloomfield : ). Nida (: ) affirms the significance of detailed synchronic descriptions for both historical and comparative work. As stated previously, many of the prime figures in the Structuralist movement were trained and published historical linguists. Jakobson (: –; : –) and Trnka et al. (: ) represent the Prague view as holding—contrary to de Saussure’s opposition of static synchrony and dynamic diachrony—that a language remains a system throughout its diachronic changes. Whereas Jakobson () portrays the dynamic of regular sound change and analogic counter-change in the following terms: “The grammatical and the phonemic structures mutually readjust each other. The relative internal autonomy of both patterns does not exclude their perpetual interaction and interdependence” (: ), Trnka et al. highlight the functional aspect of the co-operation between analogical change in morphology and regular sound change with respect to maintaining the communicative capacity of a language over the long term (–).

.. Frequency and productivity Bloomfield () considers frequency in general, diachronic terms, outlining the factors favoring or inhibiting the frequency of particular forms, for example, homonym avoidance, use by prestigious groups, and the operation of culturally based taboo terms (and their synonyms and/or homonyms) (–). The direct correlation of frequency to the retention of irregular morphology is likewise diachronic (: ), but the overarching question of relative frequency between and among rival forms is an ongoing issue in any synchronic moment. Productivity as a morphological concept is not theorized in detail across the Structuralist literature, but rather is something that is fluid and generally underdocumented (: –). It is registered in fluctuations in frequency over time, and it can be evaluated at any given time, albeit indirectly, through tests among native speakers about relative degrees of acceptability (Nida : ). In this period, however, the work of George Zipf (: –) laid out a statistical account for differential productivity of words (and by extension, morphological patterns) with

See Togeby () for the application of the Glossematic methods to (a state of) modern French.




 

reference to inverse correlations between corpus-frequency and both size (number of phonemes or syllables) and semantic complexity among the words available in any given language in a given time and region. Zipf observed, within and across a number of genetically distinct languages and a selection of corpus genres and sizes, a strong preference for reusing a small group of phonologically and semantically “simple” words at a rate much higher than chance would predict (e.g. : ). Amidst the more formalist climate of the time in the United States, Zipf distinguished himself by invoking thoroughly functional explanations, couching his discussion in terms of a broad “ecological” tendency in human behavior: namely, to rely most heavily on a compact, flexible set of familiar “tools” (here, words) and basic “permutations” thereof (constructions) for everyday activities. Membership in the “handy” set of linguistic tools is of course subject to cross-linguistic differences and diachronic changes, but is constrained at the systemic level by the inertia of habit and convention (: –). Hockett () criticized Harris’s () distributional enterprise as being insufficiently informed by mathematical principles and as remaining too dependent on meaning distinctions in the discovery of morphemes. Partly in response to such a reception, Harris elaborated a revised procedure as outlined in his aforementioned article ‘From Phoneme to Morpheme’ (), aimed at the automatic parsing of morphemes via a rigorous consideration of phonemic transitional probabilities, and thus his method crucially depends on frequency patterns in a transcribed corpus. He does not theorize the “why” of any particular distribution, but does indeed posit that formal patterning alone can be sufficient to reveal the morphological structure of utterances, without recourse to meaning (: –). 
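The successor-count idea behind ‘From Phoneme to Morpheme’ can be sketched in a few lines; the orthographic toy corpus and the fixed threshold below are stand-ins for Harris's phonemic transcriptions and peak-detection criteria, so this is an illustration of the spirit of the method rather than his actual procedure:

```python
from collections import defaultdict

# Toy corpus in ordinary spelling; Harris worked over phonemic transcriptions.
corpus = ["walks", "walked", "walking", "talks", "talked", "talking"]

# For every attested prefix, collect the distinct symbols that can follow it.
succ = defaultdict(set)
for w in corpus:
    for i in range(len(w) + 1):
        succ[w[:i]].add(w[i] if i < len(w) else "#")  # "#" marks word end

def segment(word, threshold=3):
    """Cut after any prefix whose successor count reaches the threshold."""
    cuts = [i for i in range(1, len(word)) if len(succ[word[:i]]) >= threshold]
    pieces, prev = [], 0
    for c in cuts + [len(word)]:
        pieces.append(word[prev:c])
        prev = c
    return pieces

print(segment("walked"))   # ['walk', 'ed']
print(segment("talking"))  # ['talk', 'ing']
```

After "walk" the corpus attests three continuations (s, e, i), so a boundary is hypothesized there; inside the stem, only one continuation is ever possible, so no cut is made.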
In a distinct context, Jakobson (: –) brings out the issue that phonemic patterns can be subtly keyed not just to syllable and word structure, but to lexical strata, syntactic category, or grammatical environment, so that Harris’s hoped-for automatic algorithm would instead seem to require a very subtle implementation. Nida (: , fn. ) briefly introduces frequency within distribution patterning and native speaker polling as a measure of whether homophonous elements are to be described as semantically related synchronically, independent of etymology or of non-native speakers’ intuitions about which forms are “legitimately” to be connected. Trnka et al. (: ) see frequency studies as having heuristic value for testing qualitative claims, and furthermore suggest that linguistic typology is a sub-branch of quantitative linguistic analysis, rather than seeing frequency measures as one of the tools that a typologist may draw upon.
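Zipf's rank–frequency tendency discussed above is easy to illustrate. The counts below are invented to approximate a Zipfian profile, in which frequency falls off roughly as the inverse of rank, so that rank times frequency stays roughly constant:

```python
# Toy word counts, invented to approximate a Zipfian distribution.
counts = {"the": 1000, "of": 500, "and": 333, "to": 250, "in": 200}

ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
for rank, (word, freq) in enumerate(ranked, start=1):
    # Under Zipf's law, rank * freq is approximately constant.
    print(f"{rank}  {word:<5} {freq:>5}  {rank * freq}")
```

On a real corpus one would compute `counts` from running text; the same near-constant product shows up across languages, genres, and corpus sizes, which is what Zipf took as evidence of a general "least effort" tendency.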

.. The role and relevance of experimental methods Bloomfield () respected the more scientific threads in psychology, as witnessed by his tendency to couch discussions of basic mental activities in Behaviorist terms of stimuli and responses, but he is the first to admit that psychological research is best left in the hands of psychologists and not imported into the linguist’s model-building (vii, ). That said, Bloomfield reported with interest the laboratory results showing the semantic (at least) priming of the word five by the word four, but this was considered in the diachronic context of phonological contamination of the respective initials (PIE *p in ‘five’ versus *kw in ‘four’), rather than as an endorsement of a turn toward the experimental paradigm (: –). In






that same passage, Bloomfield seemed unconvinced that slips of the tongue are of any particular interest or subject to scientific study (: ). From his later vantage point, Fries (: –) reported the rise in interest among linguists and psychologists with respect to psycholinguistic investigations through the s and into the s as a developing conversation across the disciplinary divide.

. Conclusion

.................................................................................................................................. The Structuralist movement provided the foundation for many of the common analytical concepts and methods used in contemporary linguistics. The unfortunate fact is that their ideas circulate today without attribution, as if they were common sense. Martinet ( []) already noted that “the principles of phonological analysis have long become common property” (–), indicative of the withering visibility of the scale and significance of earlier contributions. The key strengths of Structuralist morphological theory are its explicit and systematic analytical procedure, its non-reliance on intangible factors in making categorizations, and its willingness to reimagine traditional conclusions. The controversy over the adequacy of the morpheme as a concept versus its practical convenience for description continues to bear directly on live arguments today. J. R. Firth, the leading figure in the London School during the period in question, took note of examples of foundational figures in language research who were slipping into disregard, and he called upon his contemporaries to join him in adopting a more humble and respectful stance toward the efforts of linguistic forebears: “ . . . our antecedents are older and better than we think” (: , citing Abercrombie ).10 There is no wholesale return to the Structuralist theory. It is nevertheless with us at nearly every turn.11 Perhaps the best outcome for the morphological work of the Structuralist movement would involve both a greater public familiarization with the thinking of some of the initiators of the discipline of modern linguistics and a rediscovery and re-examination of the vast amount of data that has over time filtered down (in partial and diluted form) into (often uncredited) data sets. 
10 When Firth approached morphology, he did so in Structuralist fashion, noting that “nominative” case, for example, would necessarily have different “meanings” in different languages by virtue of how many other case values, and which ones, were present in the respective systems (: ). Firth’s own interests, however, apparently tended elsewhere: “morphology as a distinct branch of descriptive linguistics has perhaps been overrated, owing to its very different place and value in historical linguistics” (: ).

11 Firth () stood out in his unrelenting support for the development of a semantic-pragmatic dimension based in suitably characterized “contexts of situations” alongside what he felt to be the unnecessarily limited scope of Structuralist analysis (: –, –; cf. Firth : ). Although Firth (: ) recognized the daunting prospect of abstracting and validating any usable taxonomy of such contexts, he refused to simply set semantic analysis out of bounds as American Structuralists in particular had done. With the continuing development of computational resources and the growth of corpus linguistics, however, some of the labor-intensive data collection and distributional analysis has proven not merely tractable, but has in fact yielded leading tools in corpus-based methodology, whether or not patterns discovered in such studies may reliably, or even provisionally, be taken as representative of cognitive structures (see, e.g., Lenci  on corpus-based distributional lexical semantics).




 

. Further reading

Bloch and Trager (: ch. ). This compact text situates the basics of morphological analysis between phonemics (i.e. all of phonology) and syntax. The former provides the basic methodology for identifying formal units, and the latter serves to mark the cut-off point, when the constructions involve only free forms (regardless of internal morphological complexity). For reasons of accessibility and space, the focus is heavily on English data ().

Bloomfield (: chs –). The classic textbook provides wide typological coverage, and is a source for many of the illustrative examples used to this day in introductory texts and courses in linguistics. The recommended reading begins with the chapter entitled “Syntax” because it discusses morphosyntactic issues of case government and marking, and of person/number/gender agreement. “Morphology” provides examples of formal operations (affixation, reduplication, prosodic shift, etc.) and an overview of inflectional and lexical relationships that may be signaled morphologically. In particular, the setting up of (sometimes abstract) underlying forms for use in the derivation of surface-forms is worked through concisely for readers. In “Morphologic Types”, Bloomfield surveys several advanced topics, including compounding, allomorphy, phonaesthemes, root structure constraints, and classifying markers.

Harris (: chs –). The initial chapter here prescribes a procedure that “divides each utterance into the morphemes which it contains” (). Framed first as derivable from the overall distribution of phonemes in a language, different patterns, clustering, and interactions with suprasegmentals are thought to reveal the extent of the morphemic units. Issues of acquisition, frequency, and semantics are correspondingly forestalled, although these are mentioned as being of interest (in chapter appendices) as potential checks on one’s analyses.
Techniques are given for arriving at the optimal balance between too many and too few morphemes and morpheme-classes, owing to allomorphy and morphophonology. Based then on the identification of substitution classes and sequencing constraints, Harris demonstrates the development of morpheme template diagrams.

Nida (). The singular focus of this text is amplified in its broad purpose: not only is it an introduction to linguistic morphology both conceptually and analytically, it also serves as a guide to developing (or even practicing) field linguists who are working on previously unanalyzed languages. Nida alludes to approaches beyond American Structuralism (vi), but notes the practical, pedagogical intent of the book as a limit to variety. The rich array of data is particularly valuable by way of illustration and as teaching material (–). Also to be recommended is the meta-level progression of stages in working with data, from field-based collection and elicitation to detailed analysis of the data collected. As a mentor for students, Nida presents and critiques model tables of contents used in several leading published descriptive grammars (e.g. Sapir on Southern Paiute, Harris on Modern Hebrew, Hoijer on Chiricahua Apache, Hall on French, and Hockett on Potawatomi).

Sapir ( []: chs –). Chapters  and  share the title “Form in Language”, but they diverge in subtitle: namely “Grammatical Processes” (formal operations) and “Grammatical Concepts” (lexical and grammatical relations), respectively. The concept “morpheme” is not indexed, and “affixation” is but one grammatical process among many. Indeed, for a consistent picture of an IP approach to morphology, these chapters are essential. Chapter  first reviews established morphological types, then interrogates them carefully to arrive at something more coherent and insightful than (rare or non-existent) extreme types with a broad, variegated middle.
While not conclusive even by his own estimation, Sapir’s process and discussion are provocative.


  ......................................................................................................................

   ......................................................................................................................

  

T chapter covers the history of morphological theory within Generative Grammar from its emergence until the revival of the mid s. The seminal publications in this period are Chomsky (), Lees (), and Chomsky (). Most work on morphology in a generative framework in this period can be seen as an elaboration of or reaction to these works. The emergence of Generative Grammar can be seen from at least three different perspectives. At the start, the most prominent perspective was that of the formalism. For many linguists at the time, the most striking feature of Chomsky () was that it introduced the new mechanisms of generative rewrite rules and transformations. In Matthews’s () account, this was a central aspect of the Chomskyan revolution and in his analysis the fact that the mechanisms had already been introduced before in the context of Post-Bloomfieldian linguistics weakens the case for calling the rise of Generative Grammar a revolution. A second perspective is more sociological. Newmeyer () highlights this aspect, noting the fact that Generative Grammar was very popular among students in the s and these students graduated in time to occupy the new posts that became available in many American linguistic departments from the late s. Newmeyer argues that the fact that Post-Bloomfieldian linguists were increasingly outnumbered by a dynamic group of young generative linguists in linguistic departments contributed significantly to the sense of revolution and renewal. Ten Hacken () represents a third perspective, concentrating on the contrast between the view of the nature of language and of linguistic theory between Post-Bloomfieldians and Chomskyan Generative Grammar. In this perspective, the revolutionary innovation was the idea that linguistic theory should describe a speaker’s competence rather than the regularities of language use as evidenced in a corpus. 
While Newmeyer () and Matthews () also mention this aspect, they do not give it the same weight. In studying the treatment of morphology in the early stages of Generative Grammar, it is important to consider both the first and the last of the perspectives mentioned above, that is, the use of certain mechanisms to account for morphological phenomena and the interpretation of these accounts in the general context of a theory of language. The chapter starts with the proposals for morphology that were part of the original presentation of the

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi

new framework of Generative Grammar (§.). In §., we will consider some developments in Generative Grammar that changed the framework in ways that made the earlier analyses unstatable. There were two types of reaction to these developments. One was based on Chomsky’s () Lexicalist Hypothesis, presented in §.. Two elaborations of this approach are the subject of §.. An alternative route was chosen by Generative Semantics, treated in §.. Finally, §. formulates some generalizations over these approaches and §. discusses their legacy.

. Morphology in early Generative Grammar

.................................................................................................................................. In Post-Bloomfieldian linguistics, two types of description were distinguished, phonological and morphological. Thus, Harris () divides his guide for linguistic analysis into preliminaries, phonology, and morphology. This means that the label morphology includes what we tend to call syntax. Chomsky () also concentrates on this domain, but refers to it as syntax. An important difference between the Post-Bloomfieldian and the Chomskyan approach is that the former builds up the linguistic analysis from the smallest units, whereas Chomsky uses rewrite rules starting with S.1 Therefore Chomsky () can be said to integrate morphology into syntax rather than the reverse. As an illustration of his system of Transformational Generative Grammar, Chomsky () gives a grammar of a fragment of English. In the first instance, Chomsky (: ) gives rewrite rules such as the ones in (). ()

a. Sentence → NP + VP
b. NP → T + N
c. VP → Verb + NP
d. T → the
e. N → man, ball, etc.

The rules in () are of two types: (a–c) create structure, whereas (d–e) introduce lexical material. In the earliest stages of Generative Grammar, there is no lexicon, so that lexical items are generated by rewrite rules. In the elaboration of Verb in (c), Chomsky (: ) introduces rules such as the ones in () and (). ()

a. Verb → Aux + V
b. Aux → C (M) (have + en) (be + ing) (be + en)

()

a. Af + v → v + Af #
b. + → # except in the context v — Af

1 In the context of formal grammars, S is the start symbol. In grammars for natural languages, it was reinterpreted as sentence.




The rules in () are of the same type as in (). In (b), C stands for agreement and tense, so it can become the singular ‑s, the plural Ø or the past tense ending, and M stands for modals.2 The brackets indicate optional elements. The last three of these introduce the auxiliary and the corresponding ending for the perfect, the progressive, and the passive, respectively. In this way, morphemes that logically belong together are generated together. However, they are in the wrong order, because the affix that depends on the preceding have or be has to surface as a suffix to the verb following have or be. The rules in () are transformations. Instead of creating additional structure, they change the structure generated before their application. In (a), Af stands for any affix and v for any verb (V, M, have or be). This rule reverses the affix and the verb, so that they are put in the right order. Inflectional affixes are generated by (b) together with the auxiliary they are a logical unit with, and then suffixed to the following verb (‘affix hopping’) by (a). Rule (b) introduces word boundaries #. The default assumption is that any morpheme boundary + is also a word boundary, but this is not the case when the + occurs between a verb and a following affix. Whereas Chomsky’s () example of affix hopping can be seen as a prototypical case of inflectional morphology, Lees () proposes an analysis of nominalization, which can be seen as a typical case of word formation. In present-day linguistics, nominalization is used to refer to a word formation process and to the resulting nouns (e.g. Bauer : –). Lees (), however, uses nominalization as the name for the processes applied in (). ()

a. What he did was obvious.
b. He’s the seller of the car.

In (a), there is no noun in the entire sentence, but Lees (: –) analyses What he did as a phrase that is turned into an NP in order to become the subject of the sentence. This is what he calls nominalization and it is brought about by a series of transformations. In (b), seller is an agent noun, so that it would qualify as a nominalization in the modern sense. However, Lees’s (: –) analysis does not derive the noun seller from the verb sell, but the sentence (b) from (). () He’s selling the car. In the absence of a lexicon, it should not be surprising that there is no nominalization process producing seller. Base rules such as () and () produce a sequence of morphemes with their grouping into constituents. This is the Deep Structure.3 Transformations turn these morphemes into surface sentences. It would in principle be possible to generate sell and ‑er as adjacent morphemes at Deep Structure, but then we would lose the generalization that sell has an object, the car, both in (b) and in (). The only way to state the Chomsky (:  fn.) excludes first and second person from consideration. The term Deep Structure is introduced by Chomsky (: ) for the structure at the interface between the rewrite rules of the base and the transformations. Before the introduction of recursion in the base, the structure at this interface was not a conventional tree structure, but the interface existed in the sense that the rewrite rules were not interspersed with transformations. Therefore, using Deep Structure in this sense in reference to work dated before  is not unjustified. 2 3


relationship is to introduce ‑er in a transformation. This transformation can then at the same time account for such changes as the introduction of of in (b). Lees () is not interested in forming the word seller on the basis of sell, but in generating sentences in which seller occurs and relating them to the underlying sentences with the verb sell. As he states it, his aim is to provide ‘a description of the individual and, in part, ordered, rules which would be required in an English grammar to ensure the proper generation of all grammatical sentences characterized by various kinds of nominal-expression components’ (: ). Apart from nominalizations of the types illustrated in (), Lees () also gives an account of English compounds. The approach to compounds can be illustrated by the derivation of assembly plant. Lees (: ) gives the sequence in () to show how this compound emerges as the result of the application of transformations to Deep Structure sentences. ()

a. (i) The plant is for Na.
   (ii) The plant assembles autos.
b. The plant is for the plant’s assembly of autos.
c. . . . plant which is for the plant’s assembly of autos . . .
d. . . . plant for the plant’s assembly of autos . . .
e. . . . plant for the plant’s+Pron assembly of autos . . .
f. . . . plant for the assembly of autos . . .
g. . . . plant for auto assembly . . .
h. . . . auto assembly plant . . .
j. . . . assembly plant . . .

At first sight, the derivation in () seems incredibly roundabout, as it first introduces material (e.g. which is in (c)), which is subsequently deleted. However, much of the derivation can be explained as a consequence of three considerations. The first is that the meaning of the final result is expressed at Deep Structure. This explains that in (a, ii) autos is introduced, even though it is not formally represented in (j). However, an assembly plant is a factory for assembling cars, not photocopiers or coffee machines. Therefore, this element of the meaning must be introduced in Deep Structure. A second consideration is that the Deep Structure consists only of simple sentences, so-called kernel sentences. This is why (a) consists of two elements, one expressing the purpose of the factory and the other the action. The first transformation combines these two sentences into one. Finally, individual transformations should be minimal in scope so that they can be reused in different derivations. Most of the intermediate steps in the derivation are either possible Surface Structures or can be the basis for a different derivation path leading to an alternative Surface Structure with (more or less) the same meaning. The meaning of a sentence is based on the interpretation of its Deep Structure. In this way, the grammar accounts for the similarity of the meanings of different Surface Structures derived from the same Deep Structure. In the first period of Generative Grammar, morphology was not recognized as a separate domain. The treatment of inflection by Chomsky () and of word formation by Lees () is part of the treatment of sentences. Although (b) is a rule introducing word boundaries, it does so after all morphemes have been put in the right place. Base rules and transformations do not refer to words. They generate sentences consisting (ultimately) of morphemes.
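The affix-hopping transformation discussed above can be sketched in a few lines of Python. This is a minimal illustration only: the category assignments and the deep-structure morpheme string below are invented for the example, not taken from the chapter.

```python
# A sketch of affix hopping: the base generates each affix next to the
# auxiliary it logically belongs with, and a transformation then moves
# the affix behind the following verbal element.
# The category sets below are illustrative assumptions.

AFFIXES = {"C", "en", "ing"}           # Af: tense/agreement and participial affixes
VERBALS = {"have", "be", "read", "M"}  # v: any verbal element (V, M, have, be)

def affix_hop(morphemes):
    """Apply 'Af + v -> v + Af' left to right over a morpheme string."""
    out = list(morphemes)
    i = 0
    while i < len(out) - 1:
        if out[i] in AFFIXES and out[i + 1] in VERBALS:
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2  # the swapped pair is now in surface order
        else:
            i += 1
    return out

# Deep-structure order for 'the man has been reading':
deep = ["the", "man", "C", "have", "en", "be", "ing", "read"]
print(affix_hop(deep))
# ['the', 'man', 'have', 'C', 'be', 'en', 'read', 'ing']
```

Each affix ends up suffixed to the verb that follows it, so that have + C, be + en, and read + ing surface as has, been, and reading respectively.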




. The introduction of the lexicon and constraints on transformations

.................................................................................................................................. In the early s, two main developments can be identified that changed the attitude to morphology, viz. the introduction of the lexicon and the formulation of general constraints on transformations. Both of these were triggered by insights that were largely independent of morphological considerations. The concept of a grammar consisting of rewrite rules was originally developed in the context of mathematical linguistics. Chomsky () summarizes the state of the art at the time, to which his own research had contributed significant parts. In mathematical linguistics, the alphabet of terminal symbols is usually {0, 1} or {a, b}, that is, a language is defined over a vocabulary of two items. An example is the grammar for the language consisting of all strings aⁿbⁿ (n ≥ 1), which can be written as consisting of the two rules S → ab and S → aSb. Rewrite rules contain terminal and non-terminal symbols and the grammar ultimately replaces all non-terminal symbols so that the string consists of terminal symbols only. Applying these formal grammars to natural languages, Chomsky () adopts a system that generates terminal symbols by means of rewrite rules such as (d–e). However, the vocabulary of natural language is not only much larger, but also much more complex than that of formal languages typically studied in mathematical linguistics. The rule in (e) rewrites N, but ends in ‘etc.’, because a complete list of nouns in English would be very large. A single rule of this type is also inadequate, because there are various classes of nouns, whose distinctive properties have a direct impact on their syntactic behaviour. The feature [± count], for example, determines whether indefiniteness is expressed with an indefinite article or a bare noun, as illustrated in ().

()

a. Anna has a car.
b. *Anna has car.
c. Anna has money.
d. *Anna has a money.

Similarly, the feature [± human] determines, for instance, whether the relative pronoun should be who or which, as illustrated in ().

()

a. Anna has an uncle who lives in Spain.
b. Anna has a house which was built in .

The natural development of a rule such as (e) would then be to have rewrite rules that generate the different values of these features. However, the problem is that all combinations of [± count, ± human] occur, as illustrated in Figure .. If we now use rewrite rules that impose a hierarchy between the two features, we will no longer be able to make all necessary generalizations. Suppose we take the approach in ().




            [+ count]   [– count]
[+ human]   uncle       people
[– human]   car         money

Figure .. Cross-classification of [± count] and [± human]

()

a. b. c. d.

N ! N [+ count], N [– count] N [+ count] ! N [+ count, + human], N [+ count, – human] N [– count] ! N [– count, + human], N [– count, – human] N [+ count, + human] ! uncle, friend, woman, etc.

In (a), we get the right division of classes to account for the generalization in (), but then the classes we need to state the generalization in () do not exist. We have to combine parts of what is generated in (b) and (c). There is no solution to this problem within a system of rewrite rules. If we reverse the order of the hierarchy, () is fine but we can no longer state the generalization for (). If we avoid the hierarchy by rewriting N directly as N [ human] alongside (a), we can no longer state rules such as (d). Chomsky () solves this problem by introducing the lexicon. As summarized in Chomsky (: ), he adopts a system in which the base component that generates the Deep Structure consists of a categorial component (rewrite rules) and a lexicon. Transformations then produce the Surface Structure. Deep Structure is the basis for semantic interpretation and Surface Structure for phonological interpretation. The problem that gave rise to the introduction of the lexicon is independent of morphology. With the lexicon as a new component alongside rewrite rules and transformations, the distribution of tasks between these components had to be reconsidered. Another issue that emerged in the early s is the power of transformations. Chomsky () discusses the mathematical class of natural language grammars and concludes that ‘English is not a finite state language’ (: ). It is always assumed that the base rules are a context-free grammar, but transformations increase the power of the grammar. In the early stages of Generative Grammar, it is not determined by how much they should increase the power. The derivational sequence in () shows a variety of operations performed by transformations. In (b) two kernel sentences are combined. In (c) the overall syntactic category is changed. In (d), which is is deleted. In (f ) the plant’s is deleted. In (g) and (h) the order of elements is changed. In (h) for is deleted. 
Finally, in (j), auto is deleted. The complexity of the derivation is motivated by the fact that the Deep Structure at the start of it reflects the meaning, whereas the Surface Structure at the end is what is pronounced. For the hearer, the Surface Structure corresponds most directly to the input and the task is to reconstruct the Deep Structure in order to understand the expression. It is natural, therefore, to impose conditions on transformations in such a way that this task is possible. According to Newmeyer (: ), Chomsky (: –) is the first proposal to restrict the operation of transformations in such a way that no material is deleted that cannot be reconstructed. Katz and Postal (: –) elaborate this more explicitly as the




recoverability condition on deletion or substitution operations in transformations. It is not a problem, for instance, that the plant is deleted in (f ), because there remains an identical word plant from which it can be recovered. In the case of the deletion of for in (h), it is a grammatical element that cannot be varied. Therefore, it can be named in the relevant transformation. However, there is no way auto can be retrieved from (j). Its deletion violates the recoverability condition. Although the effects of the recoverability constraint can be illustrated on the basis of the derivation of a compound, the constraint is not particularly oriented towards morphology. Katz and Postal (: ) mention the case of the imperative, where you and will are deleted. Other relevant phenomena include relative clauses and pronouns. The introduction of the lexicon and the recoverability constraint on transformations had a strong impact on morphological theory. The recoverability constraint made the statement of transformation rules such as those Lees () used for compounds impossible. The lexicon provided a new mechanism that opens possibilities for alternative theories to be stated.
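The cross-classification problem that motivated the lexicon can be made concrete with a small sketch. If each lexical entry carries an unordered set of features (the entries and feature names below are illustrative, using the nouns from Figure ..), both the [± count]-based classes and the [± human]-based classes can be picked out directly, with no hierarchy imposed between the two features:

```python
# A sketch of a lexicon with unordered feature sets. Unlike rewrite
# rules, which force one feature to dominate the other, feature sets
# let us select nouns by either feature independently.

LEXICON = {
    "uncle":  {"count": "+", "human": "+"},
    "people": {"count": "-", "human": "+"},
    "car":    {"count": "+", "human": "-"},
    "money":  {"count": "-", "human": "-"},
}

def nouns(**features):
    """All nouns whose entries match the given feature values."""
    return sorted(n for n, f in LEXICON.items()
                  if all(f[k] == v for k, v in features.items()))

print(nouns(count="+"))   # article choice: ['car', 'uncle']
print(nouns(human="+"))   # who vs. which:  ['people', 'uncle']
```

Both groupings are available at once, which is exactly what a single hierarchy of rewrite rules cannot provide.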

. T L H

.................................................................................................................................. In the light of the developments outlined in §., Chomsky () proposes a new approach. The title Remarks on Nominalization clearly refers to Lees’s () title The Grammar of English Nominalizations. Chomsky () introduces a range of new ideas, including X-bar theory, but as far as morphology is concerned the main focus is on nominalizations, interpreted here in the same broad sense we encountered in §. for Lees (). Chomsky () addresses the distribution of tasks between the lexicon, the base rules, and the transformational component. As examples of nominalizations, he compares the constructions in (). ()

a. John has refused the offer.
b. John’s refusing the offer
c. John’s refusal of the offer

A general observation is that the more regular a formation is, the stronger the argument to treat it by means of a rule (transformation). Conversely, any irregularities point to a lexical treatment. Comparing the constructions exemplified by (b) and (c), Chomsky (: ) notes that gerundive nominals such as (b) ‘can be formed fairly freely’ from any subject-predicate expression, have a predictable meaning, and do not have a full NP structure. For the latter, it should be noted that refusing has a direct object and John’s cannot be replaced by an article. Nominalizations of the type of (c), however, are subject to a range of idiosyncratic restrictions. An example of such restrictions is (). ()

a. John amused the children with his stories.
b. John’s amusing the children with his stories
c. *John’s amusement of the children with his stories
d. John’s amusement at the children’s antics


Although amusement exists, it cannot be used in the regular way, as illustrated in (c). As shown by (d), it is constructed with at and refers to the feeling the children evoke in John rather than to what John does to the children. The possibility of modification by an adjective, the impossibility of a direct object, and the range of possible determiners all point to an analysis of nominalizations of this type as Deep Structure NPs. Chomsky (: –) concludes that ‘the lexicalist hypothesis explains a variety of facts’ about nominalizations of the type in (c). He proposes that an item such as refuse appears in the lexicon without categorial specification and that ‘[f ]airly idiosyncratic morphological rules will determine the phonological form’ of the nominalization (: ). In his conclusion, Chomsky (: ) arrives at the idea that ‘derived nominals should have the form of base sentences, whereas gerundive nominals may in general have the form of transforms.’ Note that nominal here is not a noun such as refusal, but the entire expression in (c). The implication is that all of the elements in (c) are in the lexicon, so that they can be inserted at Deep Structure. The next question is how far the domain of items treated like this reaches. Chomsky (: ) gives the examples in (). ()

a. The book is readable.
b. the book’s readability

It would be conceivable to derive readable in (a) by a transformation from read. However, if nominalizations are generally in the lexicon, as the discussion of (c) suggests, readability in (b) must also be in the lexicon. If readability is in the lexicon, we have to assume that readable is also in the lexicon. Therefore, by accepting the lexicalist hypothesis for derived nominals, we are bound to accept it also for a range of other derivational constructions.4 As Chomsky (: ) notes, it is hardly possible to come up with any positive evidence in favour of a transformational approach for (). As a limiting case he mentions the construction in (). ()

John’s refusing of the offer

Compared to (b) and (c), () is a mixed form with some properties shared with the former, others with the latter. Chomsky (: ) concludes that for these cases it is much less obvious that they fall into the domain of the lexicalist hypothesis. When we compare the approach to morphology by Chomsky () with the earlier proposals of Chomsky () and Lees (), we see that a number of choices have not changed. The focus is still not on morphology. Instead, the question is how to account for sentences. In the study of sentences, words are given a less prominent place than morphemes. This is obvious in the way nominalization is not considered from the point of view of how nouns such as refusal are formed, but rather of how phrasal expressions such as (c) arise. The main difference is in the use of the lexicon. Chomsky () argues for a significant role of the lexicon in the account of nominalization. In the context of the earlier works, no lexicon was assumed. The argument is based on a comparison of the lexicalist hypothesis with 4 Following Chomsky (), I use lower case lexicalist hypothesis when discussing the informal use of this expression. Only in later discussion did this become the Lexicalist Hypothesis as a leading principle of morphology.




a transformationalist hypothesis and on a comparison of gerundive nominals with nominalizations in the narrower sense of (c). The conclusion is that a treatment in the lexicon is to be preferred for much of what we would now call word formation morphology. However, gerundive nominals are still treated transformationally. This suggests that inflectional morphology is not meant to be in the scope of the lexicalist hypothesis.

. Elaborations of the Lexicalist Hypothesis

.................................................................................................................................. Chomsky () is rather vague as to how the lexicon is organized and how the treatment of word formation is encoded in the lexicon. His lexicalist hypothesis is a rather informal statement. Subsequently, it turned into the Lexicalist Hypothesis, which, formulated in different ways, dominated much of the discussion of generative morphology in the s and s (cf. also Montermini, Chapter  this volume). Two theories that elaborate on Chomsky’s proposal are Halle () and Jackendoff (). We will consider each of these in turn. Halle () outlines a general model of the lexicon that is perhaps more interesting for the questions it asks and the framework in which they are asked than for the answers it proposes. First of all, the title refers to ‘a Theory of Word Formation’. This takes us away from the perspective of generating expressions and establishes the word and its internal structure as an object of study. It is remarkable that this change of perspective is not at all highlighted in the text of the article. Halle is also the first to introduce Word Formation Rules (WFRs) as a separate type of rules in the context of Generative Grammar. Another noteworthy point is Halle’s (: ) explicit treatment of inflectional morphology along the same lines as derivational morphology. This means that word formation in his system should not be opposed to inflection. In Halle’s () model, the first component of the lexicon is a list of morphemes. WFRs are then invoked to combine these into words. He notes three types of idiosyncrasy in word formation (: –). These idiosyncrasies motivate a separate treatment of word formation, different from syntax, and require a special mechanism. 
One type is semantic, for example when refusal is the act of refusing, but the most prominent sense of proposal refers rather to what is proposed than to the act of proposing it. The second type is phonological, for example serenity has a shortened stressed vowel, but obesity does not. The third type of idiosyncrasy concerns the choice of morphemes, for example refusal (*refusion) but confusion (*confusal). In order to account for such idiosyncrasies, Halle (: ) proposes a model where there is a Filter between the output of WFRs and the words that are listed in a Dictionary. Words in the Dictionary can be inserted into syntax. The idea is that the Filter assigns a feature [– lexical insertion] to words such as *refusion that are possible according to the morpheme list and WFRs, but cannot be used in syntax. Because of the discussion of word formation and the use of WFRs, Halle () makes a much more modern impression than Chomsky (). However, a problem with Halle’s proposal is that the components of his model are not sufficiently constrained. This is particularly obvious in the case of the Filter. The Filter is supposed to assign a feature [± lexical insertion] to all possible words, that is, words that can be generated by WFRs.


It is hard to see how the Filter can be a finite device. Halle () does not introduce any constraints on WFRs that would make their output a finite set. The Filter takes the output of WFRs as its input and assigns the insertion feature individually to each possible word. While it may be difficult to determine the size of the lexicon, the infinity of the Filter is of a different order of magnitude. The size of the Filter is proportional to the set of possible words, based on the recursive application of all WFRs to all lexicon entries. Jackendoff () elaborates Chomsky’s () ideas about the lexicon in quite a different way. Whereas Halle () separates the List of Morphemes from the Dictionary of insertable words, Jackendoff () assumes a single place where simple and complex lexical entries are stored. This determines a rather different approach to WFRs. In Halle’s system, WFRs connect the List of Morphemes and the Dictionary. Jackendoff proposes that WFRs are redundancy rules, developing a notion mentioned by Chomsky (: ). In this way, Jackendoff can dispense with the Filter, which in Halle’s system undermines the finiteness of the grammatical description. Jackendoff (: –) distinguishes what he calls a Full Entry Theory from an Impoverished Entry Theory (see also Jackendoff and Audring, Chapter  this volume). In an Impoverished Entry Theory, such as Halle’s () model, only the information that cannot be found in rules is encoded in a lexical entry. This approach is deeply rooted in American linguistic thought. Bloomfield (: ) expresses it as ‘The lexicon is really an appendix to the grammar, a list of basic irregularities.’ It is also adopted in Chomsky and Halle (). In opposition to this view, Jackendoff proposes a Full Entry Theory, in which both simplex and complex words stored in the lexicon have full entries. Instead of not including information that can be retrieved from a rule in a lexical entry (i.e. 
in Halle’s List of Morphemes), Jackendoff develops a system that does not fully count such information in determining the load of storing the entry. In Jackendoff ’s model of the lexicon, refuse and refusal are both fully specified lexical entries. As such, Jackendoff goes counter to Chomsky’s (: –) proposal that refuse is underspecified for syntactic category. Similarly, confuse and confusion are fully specified lexical entries. Halle’s () problem of excluding *refusion and *confusal, for which he introduced the Filter, is therefore solved trivially. In Jackendoff ’s system they are excluded because they are not in the lexicon. Jackendoff ’s problem is to account for the fact that refusal is related to refuse in a fairly regular, predictable way. As WFRs are redundancy rules in Jackendoff ’s model, the rule for ‑al is in the lexicon in addition to the entries for refuse and refusal. In the first instance, adding a rule increases redundancy. However, Jackendoff (: –) introduces an ingenious system for calculating the cost of specifying information. When the entry for refusal refers to the entry for refuse, the information about refusal that can be predicted from refuse does not cost any extra. Similarly, when the entry for refusal refers to the rule for affixing ‑al, the cost of the entry for refusal is reduced by what can be predicted on the basis of the application of the affixation rule. The cost of the entry for refusal is then determined by the factors in (). ()

a. The cost of referring to refuse.
b. The cost of referring to the rule for ‑al.
c. The cost of specifying any idiosyncratic information.

The cost in (b) depends on how general the rule is. The more general the application of a rule, the less costly is the reference to it. Thus, a reference to the rule for ‑able, as in




readable, is less costly than (b), because in the case of ‑al there are several alternative suffixes for the same meaning and the choice between them is often idiosyncratic, as the example of confusion shows. In the case of highly regular morphology, for example the English nominal plural, the cost approaches zero. An important difference between Halle’s and Jackendoff ’s theories of the lexicon is that Halle assumes that refuse and ‑al are both included in the List of Morphemes, whereas Jackendoff (: ) only includes words in the lexicon. This means that refuse and refusal are lexical entries, but ‑al is not. It is only part of a redundancy rule. This approach to lexical entries means that a word can be analysed even when not all of its components are accounted for separately. Jackendoff (: –) gives the example of perdition. In the entry for this word, the rule of ‑ion nominalization can be referred to without making it necessary to introduce a hypothetical *perdite. Compared to (), for perdition, there will be nothing corresponding to (a), but there is more of (c). Still, the reference to a rule for ‑ion will be cost-effective, if enough information about perdition can be predicted by this rule. Halle () does not raise the question whether refuse should be further decomposed. Jackendoff (: –) discusses parallels between such English verbs as transmit, permit, admit, which can be decomposed into elements that correspond to a Latin prefix and verb. In the same way, refuse could be related to confuse. In Jackendoff ’s model, if refuse is analysed in this way, there will not be entries for re‑ or for ‑fuse, but a set of redundancy rules stating that verbs can consist of two components from a particular list. There is no semantic reason for such an analysis, but it is still possible to add a redundancy rule. 
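Jackendoff’s cost accounting can be made concrete with a toy model. The scoring scheme and all the numbers below are invented purely for illustration; the chapter gives no quantities, only the principle that a reference to a more general rule is cheaper than a reference to an idiosyncratic one.

```python
# A toy sketch of Full Entry Theory cost accounting. An entry's cost is
# the sum of: a reference to its base entry, a reference to a word
# formation (redundancy) rule, and whatever remains idiosyncratic.

def entry_cost(base_ref, rule_generality, idiosyncratic_bits):
    """Illustrative cost of storing a complex lexical entry.

    base_ref: cost of pointing at the base entry (for a word like
        perdition, with no base *perdite, this is 0 and the unpredicted
        content shows up as extra idiosyncratic information instead).
    rule_generality: 0.0 (idiosyncratic) .. 1.0 (fully regular); the
        more general the rule, the cheaper the reference to it.
    idiosyncratic_bits: information no rule or base entry predicts.
    """
    rule_ref = 1.0 - rule_generality
    return base_ref + rule_ref + idiosyncratic_bits

# 'refusal': base 'refuse' exists, but '-al' competes with '-ion' etc.,
# so the rule reference is relatively costly.
refusal = entry_cost(base_ref=0.1, rule_generality=0.6, idiosyncratic_bits=0.2)
# 'readable': '-able' is highly general, so its rule reference is cheap.
readable = entry_cost(base_ref=0.1, rule_generality=0.9, idiosyncratic_bits=0.1)
assert readable < refusal
```

The comparison mirrors the text: the entry for readable is cheaper than that for refusal because the ‑able rule is more general than the ‑al rule, and for fully regular morphology the rule-reference cost approaches zero.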
The question is whether stating the redundancy rule and referring to it will reduce the cost of storing the entries of the lexicon. For verbs with ‑mit, the regular correlation with nouns in ‑mission may make a reference to such a rule cost-effective. In the case of refuse and confuse, there is no corresponding correlation of this type, so that it is questionable whether it is worth having such a rule. In principle, each speaker decides this (unconsciously, of course) independently. Halle and Jackendoff developed models of the lexicon that each stand at the basis of a tradition in Generative Grammar. For Halle, rules of morphology combine morphemes into words that can be inserted into a syntactic structure. For Jackendoff, morphology relates entries in the lexicon by means of redundancy rules. The legacies of these works are discussed in more detail by Montermini (Chapter  this volume); for recent advances in Jackendoff ’s theory see Jackendoff and Audring, Chapter  this volume.

. Transformations

.................................................................................................................................. Chomsky’s (1970) Lexicalist Hypothesis was formulated in a context of controversial discussion about the proper architecture of a system of grammar. The main point of discussion was the relation between syntax and semantics. In Chomsky’s (1965) model, Deep Structure, a syntactic representation resulting from the application of the rewrite rules of the base component and lexical insertion, is the basis for semantic interpretation. The alternative view, defended by George Lakoff, James McCawley, and others, tended to incorporate more and more semantic information into the Deep Structure. At some point, the idea of the Deep Structure as the basis for the transformations leading to the Surface Structure as well as for the interpretation rules leading to the semantic interpretation of the

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi



EARLY GENERATIVE GRAMMAR

sentence was given up. Instead, a semantic structure was assumed as the basis for the transformations leading to the Surface Structure. In principle this meant that the flow of derivation between the semantic representation and Deep Structure was reversed. In practice, Deep Structure was abandoned. This is the basis for Generative Semantics. An example of a model that reflects these considerations is the one presented by McCawley (1968). In this model, the semantic representation is a tree structure that is manipulated by a sequence of transformations until the Surface Structure is reached. McCawley (1968) gives a famous derivation of kill that involves the stages in (). ()

a. [CAUSE x [BECOME [NOT [ALIVE y]]]]
b. [CAUSE x [BECOME [[NOT ALIVE] y]]]
c. [CAUSE x [[BECOME [NOT ALIVE]] y]]
d. [[CAUSE [BECOME [NOT ALIVE]]] x y]
e. [kill x y]

McCawley gives the sentence to be derived in () as ‘x killed y’, but he does not discuss the past tense. He uses capitalized English words to represent semantic predicates. The deep structure in (a) is the point of departure for a series of raising transformations that bring together the four semantic predicates into one constituent in (d). The next step is a lexical insertion transformation replacing this constituent by the word kill in (e). For morphologically complex words, the individual morphemes are inserted, as in the lexical insertion transformation (). ()

a. [cause [become [red]]] ⇒
b. [Ø [en [red]]]

In (a) the structure is given at the point where lexical insertion takes place. In (b) the individual morphemes are inserted into the structure and ‘a later suffixation rule would put en after red ’ (: ). For nominalizations, there is a special transformation preparing the elements in the structure for lexical insertion. Some crucial steps in a derivational sequence are represented in ().5 ()

a. ι x (x invented the wheel)
b. [NP x [S [V Invent] [NP x] [NP the wheel]]] ⇒
c. [NP x [Invent Agent] [NP the wheel]]
d. The inventor of the wheel

The operator ι in (a) selects an entity in a semantic representation as the focus. Its function is similar to what Langacker () and Jackendoff (a) call profiling. The process and result readings of invention can be derived from structures similar to (a), but with x identified with the verb or with the object. The intermediate structures (b) and (c) are the input and output of the nominalization transformation. Lexical insertion will ultimately result in (d).

⁵ The tree structures in () correspond to the ones in McCawley’s apparently corrected version.
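Returning to the kill derivation, its final step can be made concrete. Assuming McCawley’s four predicates CAUSE, BECOME, NOT, and ALIVE, lexical insertion can be sketched as a bottom-up tree replacement; the tuple encoding and function names below are our own, not McCawley’s notation.

```python
# Lexical insertion as bottom-up replacement: once predicate raising has
# collected the predicate complex into one constituent, that constituent
# is swapped for a lexical item.

LEXICON = {("CAUSE", ("BECOME", ("NOT", "ALIVE"))): "kill"}

def lexical_insertion(tree):
    """Recursively rebuild the tree, replacing any constituent
    that matches a lexicon key by the corresponding word."""
    if isinstance(tree, tuple):
        tree = tuple(lexical_insertion(t) for t in tree)
        return LEXICON.get(tree, tree)
    return tree

# stage (d): [[CAUSE [BECOME [NOT ALIVE]]] x y]
stage_d = (("CAUSE", ("BECOME", ("NOT", "ALIVE"))), "x", "y")
stage_e = lexical_insertion(stage_d)  # ('kill', 'x', 'y')
```

The same mechanism would leave a sub-constituent like (NOT, ALIVE) untouched if no lexical item matched it, which is why the raising transformations must first assemble the full predicate complex.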




One consequence of the treatment of lexical insertion in Generative Semantics is that morphologically complex words have the same underlying structure as morphologically simplex synonyms. Thus, thief and stealer result from different lexical insertion transformations applied to the same structure. Postal (1969) makes the argument that this assumption is supported by the parallel constraints on anaphoric reference, as illustrated in () and (). ()

a. Max’s parentsᵢ,ⱼ are dead and he deeply misses themᵢ,ⱼ
b. *Max is an orphan and he deeply misses them

()

a. the girl with long legsᵢ,ⱼ wants to insure themᵢ,ⱼ
b. *the long-legged girl wants to insure them

In (a) and in (a) we have sentences with them and an antecedent in a syntactic relationship. In the corresponding (b) and (b), the antecedent is embedded in a lexical structure. This leads to ungrammaticality both in (b), where the antecedent is not visible at all, and in (b), where it is embedded in a complex word. Postal interprets the fact that it does not make a difference whether or not the morpheme is phonologically realized as evidence for a similar semantic structure for morphologically simplex words with complex meanings (e.g. orphan) and morphologically complex words (e.g. long-legged). The arguments against unrestricted transformations mentioned in §. also played a role in Generative Semantics. However, the condition that material that is deleted in a transformation should be recoverable was not interpreted in the same way as Chomsky did. An example is Lees’s () discussion of compounding. Whereas Lees () had assumed that the Deep Structure of compounds could involve any predicates that would subsequently be deleted (cf. §.), Lees () tries to reduce the range of predicates that can be deleted. For compounds such as car thief, no special provision is necessary, because, as noted above, a verb that can also be lexicalized as steal is present in the semantic structure. For other compounds, Lees (: –) proposes some thirteen so-called generalized verbs. Such generalized verbs are represented by an open-ended sequence of English verbs, for example ‘repel, prevent, reject, forestall, suppress, remove, . . . ’ (: ) for the predicate used in headache pill, fire engine, lightning rod, and mosquito net. Levi () brings this development to its logical conclusion. She presents a theory in which Recoverably Deletable Predicates (RDPs) take the place of Lees’s () generalized verbs. There are nine RDPs, three of which (, , ) can be used in both orders, for example tear gas and onion tears for . 
The other six (USE, BE, IN, FOR, FROM, ABOUT) are not reversible in this way. If a predicate is available in the Deep Structure, four nominalization operations can be applied: ACT (e.g. cell division), PRODUCT (e.g. faculty decision), AGENT (e.g. mail sorter), and PATIENT (e.g. designer creation). In delimiting the scope of discussion, Levi’s theory clearly shows her allegiance to Generative Semantics. This can be illustrated with the examples in (). ()

a. financial report
b. avian sanctuary
c. city planner
d. car thief


For relational adjectives such as financial in (a), Levi (1978) assumes the same underlying semantic structure as for the corresponding nouns. This means that (a) has the same semantic structure as finance report. This is reminiscent of Chomsky’s (1970) suggestion that refuse and refusal are both products of the same lexical entry. For Levi, the relation between relational adjectives and corresponding nouns does not depend on any formal similarity. Thus, avian in (b) is inserted in a semantic structure that can also give rise to the insertion of the noun bird. Similarly, planner in (c) and thief in (d) have parallel semantic structures, each with a verb that is used in encoding the meaning of the full expression. Levi (1978) gives a number of examples of derivations, that is, paths from a semantic structure to a surface form. Example () lists the main steps in her derivation of thermal stress. ()

a. [NP [N stress] [S CAUSE heat stress]]
b. [NP [N stress] [S cause [by heat] [of stress]]]
c. [NP [N stress] [S be caused [of stress] [by heat]]]
d. [NP [N stress] [S be [of stress] [ADJ heat caused]]]
e. [NP [N stress] [S which [S be [ADJ heat caused]]]]
f. [NP [N stress] [ADJ heat caused]]
g. [NP [ADJ heat caused] [N stress]]
h. [N [N heat] [N stress]]
i. [N [ADJ thermal] [N stress]]

The pattern in () is in many respects similar to the one in (), which represents Lees’s () derivation of assembly plant. One difference is that (a) is already an NP, because the concept of kernel sentences had been abolished. The transition to (b) marks two steps in Levi’s derivation, lexical insertion of cause and expression of cases by prepositions. In (c), a passive transformation has been carried out. The steps of inserting a relative pronoun in (e) and preposing the modifier in (g) are also found in (c) and (h), respectively. In (h) the RDP is deleted. The considerations for the individual steps are similar to Lees’s in (), but recoverability of deleted material has been added. The interpretation of the recoverability constraint adopted by Levi () implies that compounds are systematically ambiguous. A compound such as field mouse has twelve possible interpretations, corresponding to the nine RDPs, three of which are reversible. A compound such as police intervention has the nominalization interpretations in addition to the twelve interpretations based on RDPs. Some of these are not particularly plausible, for example  in relation to field mouse, but they are rather discarded on the basis of world knowledge than on the basis of grammatical considerations. The availability of a verb in police intervention does not automatically mean that the verb is interpreted as yielding the predicate. We can imagine ‘intervention  the police’ as a plausible interpretation. The ambiguity is only avoided if the compound has been lexicalized with a particular meaning. Such meanings are usually much more specific. Levi (: ) gives the example of ball park, where  is the relevant RDP, but the meaning component that it is designed to play baseball in is the result of lexicalization. 
Transformational approaches such as those pursued in Generative Semantics can be seen as a direct continuation of the theories advanced in the earliest stages of Generative Grammar.




Compared to the lexicalist approaches of §., they have been much less successful. Their influence should not be underestimated, however. For a long time, introductions to morphology typically started with an argument why transformational approaches along the lines of Generative Semantics were wrong. The discussion between proponents of the two approaches has contributed significantly to the clarification of the aims and conditions of a good theory.

. General trends

.................................................................................................................................. In early Generative Grammar, there were various competing theories with mutually incompatible assumptions. It is therefore not possible to give, for instance, the early generative approach to productivity. Moreover, many issues that seem to be core issues now only arose after the period treated in this chapter. Nevertheless it is worth describing some general trends and widely shared assumptions. Most of the theories discussed here do not make a rigorous distinction between morphology and syntax. Before the lexicon was introduced, morphemes were the terminal symbols of a rewrite grammar. With the introduction of the lexicon, there was for the first time the possibility of describing words as at the same time having specific properties and having an internal structure. Relations between words could now be specified in the lexicon rather than in transformations. This raised the question of how the generation of tree structures would communicate with the lexicon. In Halle’s (1973) model, the Dictionary contains the items that can be inserted into syntactic trees. This can be seen as a basis for an autonomous morphological component in the sense that syntax does not look at the List of Morphemes or the mapping between this list and the Dictionary. In Jackendoff’s (1975) model, the lexicon is a speaker’s mental storage of entries that can be inserted into a syntactic structure. These entries can be complex, but this does not matter for syntax. Word formation rules and syntactic rules can generate new expressions and serve as redundancy rules. In Generative Semantics, lexical insertion is the replacement of (combinations of) semantic primitives by lexical material. The discussion of morphology in this period is largely centred on word formation.
An influential proposal for inflection was Chomsky’s (1957) account of affix hopping, but afterwards there were few explicit theories of inflection. Chomsky (1970) suggested that the gerund should not be treated in the same way as nominalizations of the type refusal, but he does not elaborate how this difference should be expressed. The absence of explicit discussion means that linguists continued to work from different implicit assumptions. On one hand, it could be assumed that most if not all of what affected word formation also affected inflection. On the other, it could be assumed that inflection continued to be in the domain of syntax, broadly along the lines of Chomsky’s (1957) affix hopping account. The question of productivity can be said to be at the very basis of Generative Grammar. The idea of a rule system stems from the observation that new expressions of the language can be generated as needed. The more specific question of lexical productivity could only be formulated after the lexicon was added as a component. Differences in productivity were at the basis of Chomsky’s (1970) argument why nominalization should be dealt with in the lexicon, but gerunds in syntax. However, this is not a point about morphological


productivity, a concept that would only be discussed later. Similarly, the concept of blocking was only implicitly present. Halle (1973) uses his Filter as a blocking mechanism, but it is rather a crude solution. In Jackendoff’s (1975) Full Entry Theory, the idea is that the retrieval of a stored entry is faster than the formation of a complex word from its components, so that the presence of an entry in the lexicon explains the blocking of the formation of a competing word with the same meaning.

. Evaluation and legacy

.................................................................................................................................. In this chapter we have seen a range of theories, developed as elaborations of or reactions to each other. From the current perspective, the influence they had on later theories is more important than the explanations they offered at the time they were proposed. Still it is worth considering their strengths and weaknesses as well as their legacy. The earliest generative theories were important as demonstrations of how the new framework worked. Chomsky’s (1957) treatment of inflection was part of a presentation of the framework. Lees (1960) was one of the first more extensive studies in Generative Grammar. Therefore, they were used as a basis for further theoretical development. As explained in more detail in ten Hacken (), although Lees’s (1960) theory used far too powerful rules, it was only its formulation and discussion that showed the need for constraints on the formal power of transformations. Chomsky (1970) was a central publication in the development of Generative Grammar. In part this is because of X-bar theory, which was more important in syntax than in morphology. The Lexicalist Hypothesis, however, was at the basis of the revival of morphological research in Generative Grammar from the mid 1970s. Halle (1973) and Jackendoff (1975) develop Chomsky’s (1970) proposals in different directions. Both are above all programmatic, concentrating on the framework rather than on data analysis. They paved the way for PhD dissertations such as the ones by Aronoff (1976) (see also Montermini (Chapter  this volume) for details) and Siegel (1974), which provided more detailed analyses of a wider range of data. The opposition between Halle (1973) and Jackendoff (1975) should not obscure one important similarity. Both assume that there are WFRs that operate in some way on lexical entries.
The concept of WFR is an innovation of the early 1970s. In earlier writings, the focus had been on generating phrases and sentences by combining morphemes. This can be observed even in Chomsky (). The main difference between Halle’s and Jackendoff’s approaches is that Halle keeps morphemes as the basic units, whereas Jackendoff adopts the Full Entry Theory, with words rather than morphemes as units. In the analysis of a word such as unreasonable, Halle would have three basic units, un‑, reason, and ‑able, whereas Jackendoff assumes entries for reason, reasonable, and unreasonable. In Halle’s theory, WFRs combine morphemes; in Jackendoff’s theory they relate words. Both approaches underlie a range of later theories, which demonstrates their perceived strength. The transformational approaches developed in the framework of Generative Semantics can be seen as the most direct descendants of the earlier theories. That they were no longer pursued after the 1970s owes at least as much to the fact that Generative Semantics was abandoned as a framework as to inherent weaknesses. There has been a lot of discussion




about the mix of social, empirical, and theoretical causes of this development; cf. Newmeyer (), Huck and Goldsmith’s (1995) alternative analysis, and Newmeyer’s () reaction to it. The approach of setting up a limited set of RDPs is arguably the most straightforward reaction to the constraint of recoverability of material deleted in a transformation. However, as noted by ten Hacken (), any such set runs into the problems that RDPs tend to overlap, leave gaps, and be too vague. Against this background, Allen (1978) proposed the Variable R condition, which leaves the relation between the components of a compound to be specified on the basis of the meaning of the components rather than a set of RDPs linked to the compounding rule. Both approaches provided inspiration for later development. As argued by ten Hacken (), Lieber’s (a) theory of compounding can be seen as building on Allen’s (1978) basic idea that the relation of the components of a compound is a result of the semantics of the components (cf. Lieber, Chapter  this volume), whereas Jackendoff’s (a) proposal assigns an important role to what he calls ‘basic functions’. Crucial, however, is that Lieber’s and Jackendoff’s proposals are much more refined than their ancestors, showing the progress in theoretical development in the intervening decades. The main objection applied quite generally to all theories presented in this chapter is that they are too powerful. The central issue in a transformational approach is to constrain the power of transformations. In Halle’s (1973) system, it is the Filter that is in need of constraints, in the first instance in order to make it a finite device. Jackendoff’s (1975) model was at the basis of what ultimately developed into the Parallel Architecture, cf. Jackendoff (b). This model has often been criticized as imposing too few constraints on possible analyses.
Both approaches are at the basis of frameworks that are in vigorous health. The morpheme-based approach adopted in Halle (1973) has led to Halle and Marantz’s (1993) Distributed Morphology (see Siddiqi, Chapter  this volume). Jackendoff’s (1975) word-based approach led him to develop a theory of lexical semantics first (Jackendoff 1983, 1990), which then incorporated morphology (Jackendoff ; Jackendoff and Audring, Chapter  this volume). When we evaluate the development of generative morphology in its first two decades, we observe that the advances were significant. By the end of this period, many of the most basic problems had been identified and some general constraints on the framework had been formulated. This then cleared the ground for such proposals as Siegel’s Level Ordering and Aronoff’s blocking, which belong to a more mature phase of Generative Grammar.


  ......................................................................................................................

                     :           ......................................................................................................................

 

. Introduction

.................................................................................................................................. The labels ‘Lexicalism’ and ‘Lexicalist Morphology’ refer to a set of approaches that share, as a common ground, the idea that an independent module—commonly referred to as the lexicon, or simply as morphology—is the locus of word-internal (inflectional or derivational) phenomena, which should be accounted for by means of specific principles. In particular, these labels encompass the various morphological models that have been developed following Chomsky’s work on syntax (Chomsky 1965, 1970) and on phonology (Chomsky and Halle 1968).1 The present-day debate in morphology is still influenced, at least in part, by ideas which surfaced at the end of the 1960s within generative syntax and were subsequently elaborated by morphologists. Chomsky’s seminal Remarks on Nominalization (Chomsky 1970, henceforth Remarks), in particular, is among the papers most cited and commented upon by morphologists over the last four decades, although it does not contain any specific proposal to account for morphological or lexical facts, but rather suggests that a boundary between syntax and morphology should be drawn and outlines a procedure for doing so.2

1 A terminological note is due here. Although the terms ‘Lexicalism’ and ‘Lexicalist’ contain reference to the lexicon, and some authors tend to consider ‘Lexicalist Morphology’ and ‘Lexical Morphology’ as synonyms (see Scalise 1984), the latter is a generic and polysemous term that covers a larger set of concepts (including what is today more commonly referred to as ‘derivational morphology’). ‘Lexicalism’ and ‘Lexicalist Morphology’, on the other hand, have been more or less institutionalized with the meaning indicated. It should be noted, moreover, that the ‘lexicon’ which is the base of ‘Lexicalism’ is the module of grammar where morphological phenomena take place and not simply the list of words of a language (see §.. for the ambiguity of the term ‘lexicon’ in the generative tradition). (I thank an anonymous reviewer for pushing me to think about this issue.)
2 For detailed readings of Remarks see for instance Roeper () or the twenty-year long debate between Marantz () and Aronoff (, a, b).




In this respect, the emergence and development of Lexicalism constitutes a historically determined phenomenon. Yet it is not easily characterized within the landscape of linguistic studies. Some authors consider it, at least in its first developments, a ‘well defined framework’ (Scalise and Guevara 2005), although it does not truly constitute a school or a theory. In fact, the label ‘Lexicalism’ encompasses a wide range of works on many aspects of morphology which, in their majority, subscribe to the Lexicalist Hypothesis (LH) suggested by Chomsky in Remarks. According to this hypothesis, the properties of (some) complex words cannot be accounted for by the same principles that hold for syntactic constructions. Thus, a primary distinction between word-external (syntax) and word-internal (morphology) grammar is advocated, with the lexicon as the locus of the principles responsible for word-internal structure. Some other properties, not yet formalized in Remarks, can be considered common to work in Lexicalist Morphology. One is to conceive the relations between simple(r) and complex words in terms of rules; another is the idea that words (whether simple or complex) may have or acquire idiosyncratic properties that are declaratively expressed in the lexicon. In the line of generative approaches to syntax, in the majority of works inspired, directly or indirectly, by Remarks, rules have a clearly derivational nature, whereas morphological units (words) are listed in the lexicon.3 Finally, several approaches that can be gathered under the label ‘Lexicalism’ share the view that words are relevant units at some level of linguistic analysis, and contribute to the theoretical characterization of words as lexemes, that is, as lexically stored abstract units provided with (at least) formal, semantic, and syntactic properties (e.g. French Adj GRAND ‘big’), embodied in real linguistic utterances by a set of inflected word forms (e.g.
[ɡʁã] M.SG; [ɡʁã] M.PL; [ɡʁãd] F.SG; [ɡʁãd] F.PL) (see §..). In §. I present the historical background of Lexicalist Morphology, starting from Halle’s (1973) and Jackendoff’s (1975) seminal papers, and discuss the developments undergone by the lexicalist paradigm in the following decades. Section . discusses some key concepts of Lexicalist Morphology, particularly those present at the outset, with the aim of assessing the extent to which these notions are still valuable and operational for present-day approaches. Section ., finally, contains some concluding remarks, and proposes an overall assessment of Lexicalist Morphology, its history, and its legacy.
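The notion of lexeme described here can be pictured as a simple record: an abstract unit with formal, semantic, and syntactic properties, realized by a set of inflected word forms. The following minimal sketch uses the French adjective for ‘big’; the dictionary layout is our own illustration, not a formalism from the literature.

```python
# A lexeme as a record: category and meaning belong to the abstract unit;
# the inflected word forms realize it in utterances.

GRAND = {
    "category": "Adj",
    "meaning": "big",
    "forms": {               # (gender, number) -> phonological form
        ("m", "sg"): "ɡʁã",
        ("m", "pl"): "ɡʁã",
        ("f", "sg"): "ɡʁãd",
        ("f", "pl"): "ɡʁãd",
    },
}

GRAND["forms"][("f", "sg")]  # 'ɡʁãd'
```

The record makes the key point of the lexeme concept visible: syncretism (here, the identical masculine singular and plural forms) lives among the realizations, while the lexeme itself remains a single stored unit.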

. Historical development

..................................................................................................................................

.. Early generativism

Early works within the generative paradigm were strongly influenced, at least in their conception of morphological facts, by the post-Bloomfieldian structuralist approach to

3 It should be noted that post-Chomskyan approaches do not cover the full range of theories to which the label ‘generative’ can be assigned. Theories such as Head-Driven Phrase Structure Grammar (HPSG, see Pollard and Sag 1994) and some versions of Construction Grammar (e.g. Kay ; Fillmore ; Michaelis ), both of which include principles intended to handle word-internal facts, share with Chomskyan approaches the idea that the main task of linguistics is to predict the set of possible linguistic expressions (and only those) together with their structural description; the crucial difference, however, is that these theories are based on declarative principles rather than on derivational rules.


language.4 More specifically, early generativists borrowed from structuralism the idea that the lexicon is a list of irregularities whose only function is to provide units (morphemes) to be handled by syntactic rules, and that only syntax is responsible for the concatenation of units into larger structures, both below and above the word level. In classical treatments such as Chomsky (1957) or Lees (1960) the construction of complex words, as well as sentences, was realized by means of transformations, the main theoretical device of early generative theories, operating on morpheme sequences. In turn, allomorphy and other kinds of variation observed in derived and inflected words, such as in the pair destroy/destruction, were analysed in terms of unitary base forms, possibly modified by phonological rules (see Chomsky ; Chomsky and Halle 1968). Up to the end of the 1960s, therefore, it was assumed that syntax regulated the structure of complex words, while phonology was responsible for their surface forms, so, as Anderson (1992) puts it, ‘[w]ith neither morpheme distribution nor allomorphy to account for . . . morphologists could safely go to the beach’.

.. The Lexicalist Hypothesis

Chomsky’s LH, first mentioned as an option in Remarks, constituted a major turning point for the re-emergence of morphology in the early 1970s. As some authors, such as Anderson (1992) and Aronoff (a), point out, it was not a particular interest in morphological matters that inspired Chomsky to formulate the LH, but rather the desire to reduce the excessive power transformations had acquired, especially in generative semantics. Moreover, the LH, in its original formulation, does not take the form of a strong claim. Chomsky presents it in the context of an empirical choice between a strict transformationalist position and a lexicalist position, that is, the view according to which some derived words may have properties which are not derived by transformations. These words may be directly inserted into syntactic structure by means of lexical insertion (see Chomsky 1970). The lexicon and its structure are clearly not the central focus of Chomsky’s treatment. Concerning the formal properties of derived words, he confines himself to observing that ‘[f]airly idiosyncratic morphological rules will determine the phonological form of refuse, destroy, etc., when these items appear in the noun position’ (see refusal vs. destruction) (Chomsky 1970). Concerning their semantic and categorial properties, however, he acknowledges that ‘it may be necessary to extend the theory of the lexicon to permit some internal computation’ (Chomsky 1970). Without going any further into the matter, he suggests that ‘[i]nsofar as there are regularities . . . , these can be expressed by redundancy rules in the lexicon’ (Chomsky 1970). It can hardly be said that these scant observations constitute the basis of a theory of the lexicon. Remarks is best known for its role in the emergence of an autonomous morphological component, and in the elaboration of a theory of (derivational) morphology.
However, as observed by Aronoff (a), what has made the LH so popular and influential for morphology right up to the present day is probably the interpretation it was given by some authors (including Aronoff himself) rather than its original formulation.

4 For a more detailed discussion of the topics addressed in this and the following section, see ten Hacken (Chapter  this volume).




.. Early lexicalist models (Halle 1973; Jackendoff 1975)

In the years that followed the publication of Remarks several contributions to morphology appeared which adhered more or less explicitly to the LH. Some of these were influential in setting the agenda of morphology for subsequent decades and are at the origin of a number of distinctions which are still valid today. The first of these contributions is Halle (1973). Although explicitly programmatic, Halle’s paper represents the first attempt to establish a theory of morphology within a lexicalist approach and is of great importance for the later development of the domain. The main contributions of Halle’s proposal concern the analysis of lexical relations in terms of rules, the identification of such relations as a separate block within the lexicon, and the observation that not all words permitted by morphology are equally attested, thus setting up a distinction between the potential and actual words of a language. It should be noted that Halle’s Word Formation Rules (WFRs) include both derivational and inflectional morphology. Moreover, although Halle’s conception of the List of Morphemes remains quite close to the traditional structuralist view, that is, a list of items with idiosyncratic properties, he introduces the idea that the units of the lexicon are not simply sound–meaning pairings, but must be provided with some extra information, for example about their category and their combinatorial properties (Halle 1973). In an article conceived almost simultaneously and published in 1975, Jackendoff proposes a treatment quite different from Halle’s, although it has the same goal, namely a theory of the lexicon within the new perspective opened up by Chomsky’s LH (Jackendoff 1975). More clearly than Halle, Jackendoff establishes the kind of information contained in a lexical entry and manipulated by a WFR, that is, formal (phonological), syntactic, and semantic properties (see §..).
Moreover, he advocates a lexicon which consists of fully specified words rather than of morphemes. Thus, Jackendoff’s approach is clearly word-based (or, in today’s terms, lexeme-based), which represents an important difference from Halle’s. Indeed, these two papers can be considered the first to illustrate a major divide which has spanned the whole history of morphology over the past forty years and remains relevant today: morpheme- vs. word-based approaches. Jackendoff also introduces a distinction between impoverished-entry and full-entry views of the lexicon (see §..), which became an important topic of discussion in lexical theory some decades later, notably in connection with psycholinguistic and cognitive issues. A final important difference between Halle’s rules and Jackendoff’s is that, in the latter model, morphemes tend to be viewed as the formal manifestation of processes rather than as objects (i.e. signs). Such a view has the wider consequence of obliterating (or at least calling into serious question) the rules–representations divide (see §..). According to the format of rules he presents, Jackendoff’s (: –) approach is clearly of an Item-and-Process type, where affixes are merely formal exponents of word-to-word correspondences, rather than units that need to be stored in the lexicon by themselves. Although on publication Jackendoff’s paper did not elicit as much interest as Halle’s, it provides a much more detailed analysis of precise cases of word formation, including apparently less central issues such as compounding or the treatment of idioms. To summarize, it adopts a view of the lexicon and of morphological rules which is closer to that implemented in several present-day theories of morphology (see Jackendoff and Audring, Chapter  this volume; Masini and Audring, Chapter  this volume).

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi



 

.. Derivation in the lexicalist framework

The most detailed and complete elaboration of a theory of word formation within the lexicalist paradigm is contained in Aronoff’s PhD dissertation, published in  as Word Formation in Generative Grammar (Aronoff , henceforth WFGG). Although several of the issues tackled by Aronoff and the solutions he proposed have since been criticized or revised, his work remains a key reference in Lexicalism. Aronoff’s main concern in WFGG was to provide a characterization of the principles governing derivational morphology within the architecture sketched by Chomsky in Remarks. His model is thus a modular one, in which ‘morphology is isolated and removed from the syntax’ and derivational phenomena are ‘dealt with in an expanded lexicon, by a separate component of the grammar’ (Aronoff : ). Moreover, Aronoff explicitly developed his model to deal with derivational facts, as opposed to inflectional ones. This distinction, although traditional, had been obliterated by Bloomfieldian and generative linguists, including Halle () and Jackendoff (), for whom word formation encompassed both derivational and inflectional phenomena. The two fundamental aspects in which WFGG was most influential for subsequent models of morphology are (a) the identification of words as the basic units on which morphological derivational rules operate (see §.. below, however); (b) a detailed characterization of the functioning of WFRs, including the conditions and readjustments they are subject to. These elements have all been discussed and modified over time, but still constitute fundamental features of several present-day lexicalist models of morphology. Although some aspects of the characterization of the notion of word given by Aronoff have later been criticized, several of the features identified, or just suggested, in his book are still standard for the representation of these linguistic units.
These include the tripartite (phonological, syntactic, and semantic) structure of words, the fact that not all properties of lexemes are, or remain, entirely compositional, and the fact that the set of possible words, as defined by WFRs, does not equate to the set of actual or attested words. Aronoff’s model is closer to Jackendoff’s () in taking words (present-day lexemes) as the basic elements of morphology. However, since Aronoff focuses more on the generative power of derivational rules than on their ability to account for the existing lexicon, the rules he proposes are closer to Halle’s WFRs5 both in their spirit and in their formal representation. In WFGG, rules are mechanisms which possess the double capacity of defining a set of inputs (their ‘base’, Aronoff : ) and a set of operations to be performed on these inputs in order to obtain an output. An important feature of Aronoff’s WFRs is that they may operate on the formal, semantic, and syntactic properties of the words they apply to. In this respect, a WFR can be viewed as a means of establishing a correspondence between the phonological, semantic, and syntactic (primarily categorial) properties of a simple(r) and of a complex word, a conception which is still common, with little modification, today. The formalism adopted in WFGG also serves to highlight the nature of WFRs as primarily generative devices. In () I give the WFR corresponding to ‑ee suffixation in English, adapted from Aronoff (:  and ):

5 Aronoff himself views his rules simultaneously as generative rules for creating new words and as redundancy rules allowing the analysis of the existing lexicon, although he seems to consider the latter as a secondary function derived from the former (Aronoff : ).


    :  ()



[[X]V +ee]N +transitive +animate object

Example () should be read as follows: ‘the phonological sequence corresponding to ‑ee [/iː/] can be attached to any transitive verb taking an animate object in order to produce a noun’.6 Several factors are responsible for the selection of bases by WFRs; Aronoff acknowledges that there may be phonological, syntactico-semantic, or purely morphological conditions on the possible inputs of WFRs. Below, I give an example for each type: • phonological: the suffix ‑al in English only occurs after stressed vowels, optionally followed by a sonorant and by a dental consonant (cf. denial vs. *constructal, Aronoff : ); • syntactico-semantic: the prefix re- only attaches to bases which are verbs ‘whose meanings entail a change of state, generally in the object of the verb’ (Aronoff : ); • morphological: the suffix ‑ity preferentially attaches to bases ending with the suffixes ‑ic, ‑al, ‑id, and ‑able (Aronoff : ). Another important issue discussed by Aronoff concerns the nature of the exponents of WFRs: in WFGG an affix (as well as any other phonological operation performed on a base, e.g. a reduplication) ‘cannot be separated from the rule, because it is nowhere given any representation of its own’ (Aronoff : ). According to this point of view, a sharp distinction in nature is established between words, basic units of morphology corresponding to linguistic signs, and the phonological material (or the phonological operation) connected to the application of a morphological rule, and consubstantial with it. Such a view has two consequences: first, in accordance with Aronoff ’s criticism of morphemes as signs, it endorses the idea of an autonomous morphological component, in the line of Jackendoff ’s  paper; secondly, however implicit this point remains in WFGG, it allows a unified treatment of non-concatenative and concatenative (affixal) cases of morphology (see §..). 
WFGG can thus be considered a foundational work in the renewal of morphology which began in the early s, and which underlies the vitality this field of research has displayed ever since.

.. Inflection in the lexicalist framework

In the works I have discussed so far, authors either focus exclusively on derivation (Aronoff ) or subsume inflection, tacitly or explicitly, under the broader label of word formation, thus suggesting that no relevant distinction between the two main functions of morphology (creating lexical items and encoding syntactic relations and grammatical information)

Aronoff ’s (: –) formalism is in fact slightly more complicated, as it is also intended to account for cases of mutual suffix deletion, e.g. nominate ! nominee. 6




 

should be made (Halle ; Jackendoff ). The consequences of the LH, however, also include the emergence of a debate about the similarities and distinctions between these two domains, and about what place should be attributed to all those morphological phenomena which undoubtedly interact with syntax. A particularly extreme position, in this respect, is the one Scalise (: –) characterized as the Strong Lexicalist Hypothesis, which was expressed, for instance (as the Generalized Lexicalist Hypothesis), by Lapointe (: ) in the following terms: ‘No syntactic rule can refer to elements of morphological structure.’ According to this view, syntactic rules cannot manipulate or make reference to any aspect of word-internal structure, and in some cases non-transformational (lexical) rules are extended to account for phenomena which are traditionally handled in syntax, such as passive or dative constructions, as in Bresnan (). The inflection/derivation distinction is also suppressed in some studies published a few years later, such as Lieber (), Williams (), and Selkirk (). In Selkirk’s treatment of word-level phenomena, in particular, transformations are abandoned in favour of X-bar schemas, which had also been proposed by Chomsky in Remarks, and constituted a major device in generative syntax at that time. Although the distinction between the lexical component and syntax is maintained in these works (see Lieber : ; Selkirk : –), it is weakened by the fact that rules having the same format operate at both levels. The Strong Lexicalist Hypothesis has gained less of a following within the lexicalist paradigm than what has been labelled, by contrast, the Weak Lexicalist Hypothesis. According to the latter view, there are at least some aspects of word structure which are relevant for, and can be manipulated by, syntax.
In a list proposed by Anderson (: –) these include, for instance: configurational properties, such as case assignment to nouns within NPs, and agreement and inherent properties, such as gender or number for nouns (see also Anderson : –). Indeed, Anderson’s () proposal, further developed in Anderson (), constitutes the most detailed theory of inflection within the lexicalist paradigm. In his article, Anderson devotes several pages to a discussion of the differences between inflection and derivation, to conclude with the apparently trivial observation that ‘[i]nflectional morphology is what is relevant to the syntax’ (Anderson : ), which he takes as the starting point for the elaboration of a full theory of inflection consistent with the LH and the idea that words are the basic elements of morphology. Like Aronoff in WFGG, Anderson makes a sharp distinction between derivation and inflection, although he acknowledges that the same semantic category may be inflectional in some languages and derivational in others.7 In his view, derivational operations are confined to the lexical component, whereas inflectional rules are visible to syntax, a position which has subsequently been labelled the Split Morphology Hypothesis (see Perlmutter ). In spite of their different goals, there are two other important similarities between Aronoff’s model of derivation and Anderson’s treatment of inflection: the status of rules as fundamental devices of both approaches, and the view that morphological exponents are not linguistic signs per se, which motivates the label ‘a-morphous’ attributed by Anderson himself to his theory in his  book. In these approaches, rules are devices correlating a morphosyntactic or a morphosemantic content with a phonological operation.

7 For instance ‘plural’, which Anderson (: –) considers to be derivational in Kwak’wala, an indigenous language of Canada.


    : 



A central notion in Anderson’s model is that of morphosyntactic representations, serving as terminal nodes of syntactic structures and coupled with lexical material (stems) provided by the lexical component (see Anderson : –). The mechanisms which are responsible for this association are inflectional rules of the format illustrated in () for the rule forming past tense verb forms in English, where ‘X’ represents an unspecified lexical string corresponding to the stem (Anderson : ):

()

[+Verb, +Past]: /X/ → /X#d/

A lexical entry may also contain more than one stem, bearing a morphosyntactic specification. For instance the verb THINK possesses a representation containing a stem, thought, marked as [+Past], which overrules () and is inserted as a full form (Anderson : ). Anderson’s model links a lexicalist approach to morphology to previous approaches (such as that of Matthews ) viewing morphology as the study of word form relations (possibly within larger networks of relations, such as paradigms) rather than of the construction of forms by combination of simpler units.
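The interaction between a default realizational rule such as () and a lexically listed stem such as thought can be illustrated with a small sketch (the lexical entries below are invented for the example, and orthographic ‘-ed’ stands in for Anderson’s phonological /X#d/):

```python
# Sketch of Anderson-style realizational inflection: a default rule
# spells out the feature [+Past] on a verb stem, but a lexically
# listed stem already marked [+Past] overrides (blocks) the rule and
# is inserted as a full form. Entries are invented for illustration.

LEXICON = {
    "walk":  {"stems": {(): "walk"}},
    "think": {"stems": {(): "think", ("+Past",): "thought"}},
}

def past_tense(lexeme):
    entry = LEXICON[lexeme]
    # A stem specified [+Past] in the entry blocks the default rule.
    listed = entry["stems"].get(("+Past",))
    if listed is not None:
        return listed
    # Default realizational rule, cf. (): /X/ -> /X#d/ ('-ed' here).
    return entry["stems"][()] + "ed"

print(past_tense("walk"))   # walked: default rule applies
print(past_tense("think"))  # thought: listed stem wins
```

The sketch shows why no rule ever derives *thinked: the more specific, stored form pre-empts the general rule, which is the intended effect of Anderson’s stem specifications.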

.. Further developments of lexicalist models

Research on morphology was boosted by the new theoretical frameworks that emerged following the LH. As a consequence, the following decades saw a dramatic expansion of this field of research, the effects of which are still visible today. Most of the approaches to morphology adopted and developed today are indebted (some explicitly and consciously, others less so) to the seminal works described in the previous sections. In fact, some ideas seem to be uncontroversial and widely accepted in the majority of mainstream approaches to morphology today: for instance, the idea that word-internal structure is dealt with in a particular module, distinct from syntax and phonology, which is not called ‘lexicon’ any more, but simply ‘morphological component’ (see §..), and which has its own set of rules (whatever reality this concept covers). Indeed, many approaches recognize that the module in question may contain two sets of rules (possibly in interaction), accounting for the submodules corresponding to derivational and inflectional morphology, respectively. It should be noted, however, that not all morphological models accept the view that morphology constitutes a separate module from syntax.8 The most prominent model of morphology developed within and in connection with the generative framework in syntax is Distributed Morphology (DM), first elaborated by Halle and Marantz () (see also Siddiqi, Chapter  this volume). DM explicitly rejects the LH (see also Marantz ) and

8 The morphology/syntax divide is also weakened, from a radically different perspective, in models such as Construction Morphology (Booij a) and Jackendoff’s Parallel Architecture (Jackendoff , b); see Masini and Audring (Chapter  this volume) and Jackendoff and Audring (Chapter  this volume).




 

considers syntax as the sole device responsible for the construction of linguistic structures, a position which in recent developments of the theory has been dubbed the ‘Single Engine Hypothesis’ (Marantz ). Consequently, both the structure and the categorization of words are dealt with by using the same units and operations that are used in syntax. Thus, words such as destroy or destruction correspond to the spell-out of the merging of the abstract uncategorized root √ with, respectively, the functional heads v and n, with no independent morphological operation intervening in the set-up of their formal or semantic properties. In the field of derivation, the legacy of WFGG is still visible in many respects. Although its main focus is on English word formation, Aronoff’s book was successful in defining a framework on the basis of which similar treatments of derivational morphology in various languages were proposed in the following years (see for instance Booij  on Dutch, Scalise  on Italian, Corbin  on French). Moreover, after its publication, a great deal of work was produced to provide a more detailed characterization of WFRs, to improve their coverage, and to account for the restrictions they were subject to and their interaction with other fields of linguistics (e.g. phonology). In addition, WFGG anticipated or outlined several other questions which continue to be central in today’s morphological debate. These include the distinction between possible and actual (attested) words (see Di Sciullo and Williams ; Rainer ), affix ordering (see Fabb ; Hay and Plag ; Manova and Aronoff ), compositionality of complex words (see Aronoff ), and productivity of WFRs (see Plag ; Bauer ). Concerning inflection, as observed above, Anderson’s (, ) model focuses more on relations between word forms than on the construction of individual forms, in which respect it follows traditional morphological models.
Much present-day research in inflection follows from the premises set out by Anderson, particularly in models that, in Stump’s () terminology, can be classed as ‘realizational’. These models include a wide range of works which explore, on the one hand, the role of paradigms in morphological organization (see Carstairs ; Aronoff ; Stump ; Maiden ; Brown and Hippisley ), and, on the other, the representation of inflectional rules (see Zwicky ; Stump ; Spencer a). The LH opened the way for various linguists to propose models of morphology intended to constitute a general framework suitable for dealing with both derivational and inflectional phenomena. What these models have in common is that they adhere to the LH; they differ, however, in the status they attribute to inflection and derivation. While some models adopt a strict Split Morphology view, with different modules responsible for inflectional and derivational operations, others do not acknowledge a difference in nature between the two types of phenomena, which are thus dealt with in the same component and by the same kind of rules. Comprehensive models of morphology explicitly founded on the LH that were developed in the decades following the emergence of Lexicalism include Beard’s Lexeme-Morpheme Base Morphology (LMBM, Beard ), Bochner’s Lexical Relatedness Morphology (LRM, Bochner ), and Wunderlich’s Minimalist Morphology (Wunderlich and Fabri ; Wunderlich ). In the LMBM framework, morphology is clearly split, with lexical derivation realized in the lexicon and inflection (‘inflectional derivation’ in LMBM terms), along with typical syntactic operations, realized in syntax (see Beard : –). An interesting feature of this model is that derivational and inflectional rules are regarded as identical in nature, as they manipulate the same universal functions belonging to a limited set. The difference between the two emerges from their relative ordering,


    : 



derivational rules operating before inflectional ones and before syntactic rules in general. LRM adopts and develops Jackendoff’s () mechanism of redundancy rules in order to account for the morphological structure of words in the lexicon. It is based on a lexicon and on a system of rules which are more complex than Jackendoff’s, and are intended to account for phenomena such as sequences of affixes in complex words. Unlike the approaches presented so far, LRM is not split: inflection and derivation are considered as equal in nature and consequently handled by means of rules of the same format. Finally, in Minimalist Morphology9 words are formed in a specific morphological component through the concatenation of a stem and an affix. It should be noted that the forms thus constructed by morphology are organized into paradigms, which constitute the interface level between morphology and syntax. In the Minimalist framework, morphology is not split either: both derivation and inflection are dealt with in the morphological component by rules which display only minor differences. A series of proposals published during the decade following the emergence of Lexicalist Morphology (Lieber ; Williams ; Selkirk ; Di Sciullo and Williams ) adopted an even more extreme stance. What these works have in common is that, in various forms, they model morphological rules on phrase-structure rules, although the two sets of rules are acknowledged to operate in two separate modules. Even more radically, Lieber () claims that morphological and syntactic rules coincide and operate in a single component. A framework worth mentioning in this context is Lexical Phonology (LP, or Lexical Morphology and Phonology, Kiparsky a).
This model (first elaborated in works by Siegel ; Kiparsky a; Halle and Mohanan ; Mohanan ) was primarily an attempt to reconcile Chomsky and Halle’s () approach to phonology with the LH, and constituted one of the leading paradigms in phonology in the s and early s. One of the main concerns of phonology, at that time, was to account for the observation that different affixes may have different phonological effects (e.g. concerning stress) on the bases they attach to, and in particular that some affixes seem to be more tightly bound to their bases than others. Chomsky and Halle’s () original proposal integrated information about the behaviour of affixes into their symbolic representation, in the form of boundaries (+, #, or =) which had different ‘strengths’ and were responsible for the (non-)application of different phonological operations. Since most of these operations take place in correspondence with morphological phenomena, it was quite a natural step, in the theoretical context induced by the LH, to postulate that the lexicon is the module in which not only morphological rules, but also phonological rules, operate. The parallelism between phonology and other modules of grammar was pushed even further, since LP postulated that a second phonological set of rules, the so-called postlexical rules, operated after all morphological rules, concomitantly with the construction of syntactic structures. Thus, according to standard versions of LP, phonological and morphological rules operate together in the same component. However, in order to account for the differing phonological behaviour of different morphological phenomena (namely derivational or inflectional affixation and compounding), rules operate at different levels (also known as strata), which

9 As acknowledged by its proponents (Wunderlich and Fabri : , fn. ), the framework of Minimalist Morphology was elaborated independently of and in parallel with the Minimalist theory in syntax (Chomsky ), although the two theories share some basic assumptions.




 

are ordered. Each level contains different morphological and phonological rules, operating jointly within the same level; the outputs of each level are the inputs of the following level; and the outputs of the final level (i.e. fully specified phonological words) are the inputs of syntactic structures which, in turn, undergo postlexical phonological rules. Different versions of LP may differ in some respects, for example the number of levels identified, whether rules may operate cyclically across levels or not, and whether an affix’s membership of a specific level is an intrinsic or a relative property. Nevertheless, all share the general structure described above. The level-ordering of morphological and phonological rules is intended to account for various phonological phenomena, all of which are tightly linked to morphology. For instance, the fact that some English suffixes determine the stress pattern of a derived word while others do not (cf. átom → atómic vs. átomize) suggests that they should belong to different levels. In LP the suffix ‑ic is attached to its bases at the same level at which word stress is assigned (Level I, e.g., in Siegel  or in Kiparsky a), whereas ‑ize is introduced at a later level, where stress assignment is not possible any more (Level II). Level-ordering of affixes is also intended to account for the linear ordering in which they appear in complex words. Consequently, at least in English, (regular) inflectional affixes belong to the final level, since they generally follow all derivational affixes. In this theory, the difference between inflection and derivation is not due to a difference in nature between these two classes of operations, but merely to a difference in their ordering.
Level-ordering has been strongly criticized on both empirical and theoretical grounds, especially on the basis of morphological arguments (see Aronoff and Sridhar ; Fabb ; Bochner ; Hay and Plag ), and LP itself has been superseded, even in phonology, by other theoretical frameworks.
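The central prediction that level-ordering makes about affix sequences can be illustrated with a minimal sketch (the level assignments follow the ‑ic/‑ize example above; the affix inventory and the function are otherwise invented for the example):

```python
# Minimal sketch of level-ordering in Lexical Phonology: each affix
# belongs to an ordered level, and within a word an affix from a
# later level may follow, but never precede, one from an earlier
# level. Level assignments follow the -ic (Level I) / -ize (Level II)
# example; regular inflection is placed at the final level.

AFFIX_LEVEL = {"-ic": 1, "-ize": 2, "-s": 3}

def well_ordered(affixes):
    """True if each affix's level is >= that of the preceding affix."""
    levels = [AFFIX_LEVEL[a] for a in affixes]
    return all(a <= b for a, b in zip(levels, levels[1:]))

print(well_ordered(["-ic", "-ize", "-s"]))  # Level I < II < III: fine
print(well_ordered(["-ize", "-ic"]))        # Level II before I: ruled out
```

On this view the ungrammaticality of, say, a Level I suffix outside a Level II suffix falls out of the architecture itself, with no ordering statement specific to the individual affixes.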

. Units and concepts of Lexicalist Morphology

In this section I will review some of the main theoretical tools and concepts elaborated and adopted by morphological theories within the lexicalist paradigm. Some of these concepts (e.g. ‘word’ or ‘morpheme’) were inherited from the existing linguistic tradition, others (such as the conditions on the input and on the output of morphological rules) were specifically conceived within Lexicalist Morphology to account for morphological facts. A third set is constituted by concepts (e.g. ‘head’) that originated in other linguistic domains (in this case syntax and partly phonology) and were adapted to morphological analysis. As well as presenting the different forms a unit or a concept has taken in different theories, this section discusses the extent to which they still represent valid tools for today’s research paradigms and, finally, assesses the long-term legacy of Lexicalism to morphology.

.. The lexicon

In the introduction, Lexicalist Morphology was roughly defined as the ‘morphology done in the lexicon’. It is thus natural to start the survey of relevant concepts in morphology with a


    : 



discussion of this central concept. In most traditional treatments, the lexicon, intended as a set of declaratively listed primitives, is considered one of the two essential parts of linguistic knowledge, the other being the grammar, that is, the capacity of creating complex structures from the primitives provided by the lexicon. American structuralist linguistics clearly placed an emphasis on grammatical (computational) knowledge, since the lexicon constitutes a ‘list of basic irregularities’, according to Bloomfield’s (: ) oft-quoted statement. In early generative syntax the lexicon played no role at all, but was reintroduced by Chomsky as a theoretical device in Aspects of the Theory of Syntax (Chomsky ). The conception of the lexicon presented in Aspects is very similar to the Bloomfieldian one, as ‘all properties of a formative that are essentially idiosyncratic will be specified in the lexicon’ (Chomsky : ), although, as described above, Chomsky (: ) notes that internal computation is probably necessary in order to account for some lexical facts. As seen in §.., he confirmed this view some years later (Chomsky ), which eventually led to the formulation of the LH (see also ten Hacken, Chapter  this volume). The treatment of the lexicon proposed by Chomsky in Aspects is probably the source of the misleading confusion, which has been pointed out by some authors (e.g. Aronoff b: –; Aronoff : ; see also Gaeta a), between two different notions both given the label ‘lexicon’. In one of its meanings (L1), the lexicon traditionally corresponds to the list of primitives of linguistic knowledge and of their (possibly, but not necessarily) idiosyncratic properties. In its other meaning (L2), the lexicon is the place where (at least derivational, sometimes inflectional) morphological rules operate. 
L2 corresponds to what is today more commonly called the ‘morphological component’ or the ‘derivational component’ or simply ‘morphology’. It is clear that the goal of most theories within the lexicalist paradigm is the modelling of morphological competence, the locus of which is L2. They differ, however, concerning the relationship between this ‘lexicon’ and L1, and the difference, once again, goes back to Halle’s () and Jackendoff ’s () seminal papers. In Halle’s model, WFRs are generative devices producing complex forms from simple(r) ones. The set of WFRs forms a block which is fed by the morphemes contained in a list (the ‘List of Morphemes’) and which, in its turn, feeds the ‘Dictionary of Words’. Clearly, both the List of Morphemes and the Dictionary contain memorized and (partially) idiosyncratic items. Therefore, we can consider that in Halle’s model (and in subsequent variants of it) L1 is a subpart of L2, and that it is split into two parts, one (the List of Morphemes) preceding WFRs and one (the Dictionary) following WFRs (see the diagram presented in Halle : ). In a subsequent elaboration of Halle’s model, Scalise (: ) considers the split between the List of Morphemes and the Dictionary as redundant, and assumes that only the latter is necessary. Aronoff (: –) seems to follow the same path, although more implicitly, when he assumes that the inputs and outputs of WFRs are contained in the ‘Lexicon’ (which he also calls the ‘Dictionary’). In these models, all redundancies are eliminated from L1 (the List of Morphemes, the Dictionary), either as a tendency (e.g. in Aronoff ) or as a necessary property of the model. The same is true for several subsequent models (e.g. those of Lieber ; Selkirk ; Di Sciullo and Williams , but also for LMBM and Minimalist Morphology, see §..), in which, however, L1 appears to be an independent component which precedes and feeds morphological rules, rather than a subpart of it.




 

Jackendoff (), on the other hand, proposes a radically different view. First of all, he identifies a distinction between impoverished-entry and full-entry theories of the lexicon. The first group includes theories (such as Halle’s, but also structuralist and early generative morphology) according to which the lexicon (obviously L1, according to the distinction drawn here) contains all and only idiosyncratic information; theories of the second type (such as the one he proposes) admit that the lexicon may contain redundant information, and that the goal of morphological theory consists, among others, in measuring the degree of redundancy and of informativeness of lexical relations. Jackendoff ’s model does not assume a sharp distinction between L1 and L2, and uses the same formalism to represent lexical entries (words) and lexical relations (see Jackendoff : ). From quite a different perspective, more recent morphological theories inspired in part by cognitive linguistics, such as Construction Morphology (Booij a; Masini and Audring, Chapter  this volume) or Bybee’s () model, also reject the list/rule distinction and assume a model of the lexicon in which ‘rules’ are viewed as redundancy patterns between stored items. The distinction between full and impoverished models of the lexicon is an important issue for morphological, and more generally, linguistic theories, and remains a central topic of debate. The need for a non-redundant lexicon is still a basic prerequisite for generative approaches, such as DM, and to some extent for phonological theories, such as Optimality Theory. Results in psycholinguistic research as well as in acquisition studies (see Stemberger and MacWhinney ; Baayen, Dijkstra, and Schreuder ; Baayen et al. ; Jackendoff : – for an overview) strongly suggest that the lexicon does not necessarily store only non-redundant information and that other factors (such as frequency) are also fundamental. 
As a consequence, many of the current approaches to morphology following the lexicalist paradigm (and beyond) acknowledge that the information stored in the lexicon may be at least partially redundant (see, among many others, Zwicky ; Jackendoff ; Blevins ; Aronoff ; Booij a), including non-morphological objects (Di Sciullo and Williams ; Jackendoff ). A major challenge for morphology, according to this stance, is to account for lexical relations between fully specified units. It should be noted that in early works in Lexicalist Morphology the lexicon was a purely formal device, as in the generative syntax models on which these works were based. As a consequence, the view of the lexicon assumed, for example, in Halle () or in Jackendoff () is quite unrealistic and resembles the traditional prescriptive view. In particular, these authors implicitly assumed that it was possible to identify and characterize the set of ‘existing’ words of a language (see also Lieber : ), either from the point of view of a single speaker or from the point of view of the language in general (which in the generative spirit of the time coincided). Current approaches to morphology and to the lexicon, however, acknowledge that objects such as the ‘mental lexicon’ of individual speakers, the dictionary of a language, and the lexicon as a formal entity are clearly distinct. Moreover, easy access to corpora and to other linguistic resources has revealed far more words in language use than are dreamt of in linguists’ theories. The notion of ‘existing’ word has now been replaced by other, more theoretically and empirically accurate, concepts, such as ‘possible’, ‘virtual’, and ‘potential’ words, and the properties ‘lexical’ and ‘morphological’ are kept clearly and effectively distinct (on these issues see Bauer ; Gaeta and Ricca ; Rainer ).
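The impoverished-entry/full-entry contrast lends itself to a small computational sketch. The following Python fragment is purely illustrative: the entries, the feature labels, and the function `independent_info` are hypothetical stand-ins of my own, not taken from Jackendoff or any other work cited here. The idea is that a full-entry lexicon stores both base and derivative, while a redundancy pattern merely measures how much of the derivative is idiosyncratic.

```python
# Illustrative contrast between an impoverished-entry and a full-entry lexicon.
# All names and feature labels here are hypothetical.

IMPOVERISHED = {
    # Only the idiosyncratic base is listed; "decision" would be
    # generated on demand by a rule deriving nouns in -ion from verbs.
    "decide": {"cat": "V", "sem": "DECIDE"},
}

FULL_ENTRY = {
    # Both words are stored fully specified; "base" records the relation.
    "decide":   {"cat": "V", "sem": "DECIDE"},
    "decision": {"cat": "N", "sem": "ACT-OF-DECIDING", "base": "decide"},
}

def independent_info(lexicon, word):
    """Properties of `word` that a V -> N-in-ion redundancy pattern does not
    predict from its base (a crude stand-in for a Jackendoff-style
    'independent information' cost)."""
    entry = lexicon[word]
    base = entry.get("base")
    if base is None or base not in lexicon:
        return set(entry) - {"base"}      # no base: everything is idiosyncratic
    predictable = {"cat", "sem"}          # the pattern predicts these layers
    return {k for k in entry if k != "base" and k not in predictable}

print(sorted(independent_info(FULL_ENTRY, "decision")))   # []
print(sorted(independent_info(FULL_ENTRY, "decide")))     # ['cat', 'sem']
```

On this toy measure a fully regular derivative costs nothing extra to store, which is the intuition behind full-entry theories admitting redundant listing.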

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi

    : 



.. Words

As observed in §.., a divide quite quickly emerged in morphological studies between the proponents of morpheme-based and of lexeme-based models. Halle () may be considered the initiator of the first type of approach, while the second was introduced by Jackendoff () and developed by Aronoff () for word formation and Anderson () for inflection. Morpheme-based theories drew directly on generative syntax and structuralist analysis. Words were an important unit in traditional linguistic analysis until the first half of the twentieth century. However, starting from Bloomfield (), the word was no longer seen as a unit with a particular theoretical or empirical value, and the morpheme became the only primitive of morphological analysis (see Blevins  for a thorough historical overview of the notion of word). The same position was adopted, in less explicit terms, in generative syntax. In Aspects, for instance, the term ‘word’ is used only in an informal way, and the only morphological unit included among ‘theoretical terms’ is the morpheme (Chomsky : , fn. ). Advocates of word-based views put forward both empirical and theoretical arguments in order to refute the validity of the morpheme as a linguistic unit. The benefits of both points of view have been widely debated in the literature, and a criticism of the morphemic position is a typical starting point for many works in word-based morphology.10 In short, the main arguments of the anti-morphemic view centre on the fact that languages rarely display a one-to-one correspondence between elements of form and elements of content, and on the pervasiveness of such phenomena as fusion or distribution of exponence, meaningless morphs, and semantic features lacking formal expression. The first word-based models (e.g. Jackendoff ; Aronoff ) preferred to take words as primitives mainly on formal grounds. 
In later years, more attention has been paid to psycholinguistic and typological evidence in order to support a view of words as the basic units of linguistic categorization and of the construction of meaning in complex structures (Aronoff ; Blevins ). Today, it seems quite uncontroversial that a ‘word’ of some kind is a relevant unit for domains as varied as the study of typology, acquisition, diachronic linguistics, or language evolution, although this notion should probably be viewed as relative (for an overview see Hippisley ; for specifically typological issues see Dixon and Aikhenvald b; Haspelmath ). Although a number of approaches in early Lexicalist Morphology were morpheme-based and thus did not consider words as primitives of morphological analysis, this does not mean that words had no role to play in such models. Words are the outputs and may be the inputs of WFRs in Halle (). A similar conception is defended in Di Sciullo and Williams (), where morphemes are considered morphological atoms and words syntactic atoms. As far as lexeme-based approaches are concerned, WFGG was the work that most popularized the notion of lexeme, for which the author still used the term ‘word’. Aronoff conceived these units as signs relating formal (phonological), categorial (syntactic), and semantic information, a view already present in Chomsky () and quite similar to that expressed in Jackendoff (). As an illustration, in () I give the lexical entry of the item decide according to Jackendoff (: ).

10 Among the most famous and classic criticisms of the morphemic view are Matthews (: ch. ), Aronoff (: §.), Anderson (: ch. ), and Stump (: ch. ). See also Blevins () for a recent criticism.


()

/dec1d/ +V +[NP1__on NP2] NP1decide on NP2

This representation corresponds, with slight variations, to the format under which lexical units, and especially lexemes, are assumed to be stored in the lexicon in the majority of current lexicalist approaches, since it brings together phonological, syntactic (categorial and subcategorizational), and semantic information. The representation in () is a simplified version of the original one given by Jackendoff, which also includes an ‘entry index’, whose function is to allow reference to a word independent of its content, for instance to deal with cases of homonymy (a similar device is proposed in Spencer a). Subsequently, different approaches added extra layers to this representation format or expanded the existing ones. However, all approaches agree on the idea that the representation of a lexeme contains, at least, information on the three levels in question. Probably the main contribution of WFGG was the distinction (not clearly made in previous works) between words as atomic units for syntax (i.e. fully specified word forms) and words as abstract units (Aronoff : ). The latter concept corresponds to the notion of lexeme which is adopted today in most studies in Lexicalist Morphology. The term ‘lexeme’ to designate abstract words not carrying inflectional marks or features had been introduced by Lyons (: –) and progressively adopted in the field, for example by Matthews (), Zwicky (), Fradin (), and finally Aronoff (), who had chosen not to adopt it in WFGG. The distinction between word forms (words as syntactic units) and lexemes first sketched in WFGG has proven to be one of the most robust and fruitful innovations of Lexicalist Morphology. Among other things, it allows a consolidation of the distinction between inflectional and derivational morphology in a particularly elegant way: inflection deals with lexeme-internal relations, whereas derivation deals with cross-lexeme relations. 
Words (lexemes) are thus no longer seen as inert pairings of features, but as structured entities. This has led, on one hand, to a renewal of traditional notions, such as that of paradigm (see Blevins  for an overview and works cited in §..), and, on the other, to the redefinition of classical notions, such as ‘stem’, which is no longer defined simply in segmental terms, but now also in distributional terms (see e.g. Aronoff ’s  notion of ‘morphome’). More generally, it has inspired greater interest in the observation of intra-lexemic structures.
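The three-layer entry format just described can be rendered as a simple data structure. The sketch below uses hypothetical Python names of my own; the field values echo Jackendoff's entry for decide, and the entry index illustrates the device, mentioned above, for referring to a word independently of its content (e.g. for homonyms).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LexicalEntry:
    index: int    # entry index: identifies the entry independently of its
                  # content, e.g. to keep homonyms apart
    phon: str     # phonological information
    cat: str      # categorial (syntactic) information
    subcat: str   # subcategorization frame
    sem: str      # semantic information

decide = LexicalEntry(1, "/dɪˈsaɪd/", "V", "[NP1 __ on NP2]", "NP1 DECIDE-ON NP2")

# Homonyms share their form but differ in index (and in content):
bank1 = LexicalEntry(2, "/bæŋk/", "N", "[__]", "financial institution")
bank2 = LexicalEntry(3, "/bæŋk/", "N", "[__]", "edge of a river")
```

Frameworks that add extra layers (stem spaces, inflection-class features, and so on) would extend this record, but, as noted above, all retain at least these three types of information.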

.. Minimal lexical units

In spite of the development of word-based models, units smaller than words (in particular morphemes) have long continued to play a major role in Lexicalist Morphology. As seen in the previous sections, many models developed within Lexicalist Morphology were explicitly morpheme-based, a position which was inherited from structuralist morphology via generative syntax (Lieber ; Selkirk ; Di Sciullo and Williams ; and, more recently, Minimalist Morphology). Under a morphemic analysis, there is no distinction in principle between affixes and other units (e.g. the bases of WFRs): elements of both
categories are listed in the lexicon with the same feature specifications, the only difference being in their respective combinatorial properties. Compare, for instance, the representations for the verb run and for the suffix ‑ize in Lieber (: ):

()
run
  (phonological representation)
  semantic representation: . . .
  category: V[_]V
  insertion frame: NP_(NP)
  diacritics: [–Latinate]

-ize
  (phonological representation)
  semantic representation: causative
  category/subcategorization: ]N_]V
  insertion frame: NP_(NP)
  diacritics: Level II

As can be seen in (), both a full word and an affix include the same kind of information, identified in §.. as relevant for words: phonological, semantic, and syntactic (‘category/ subcategorization’, and ‘insertion frame’, roughly corresponding to argument structure). For , category information simply states that it is a verb; for ‑ize, the notation ]N_]V stands for a subcategorization frame defining it as a suffix that attaches to nouns and forms verbs.11 In many morpheme-based models, words (like  in ()) are distinguished from stems, units that can also be attached to affixes (cf. the examples serendip+i+ty, vac+ant, tot+al in Halle : ), but are not syntactic atoms. In morpheme-based models, the mechanisms of word formation (inflection and derivation) are quite simple, essentially the concatenation of morphemes, according to their combinatorial properties. The benefits of such models are clear, as they allow a single set of rules to be postulated for morphology and syntax. The categorization of complex words, for instance, is represented in terms of rewrite rules of the kind assumed to operate for syntactic structures. For instance, Selkirk (: ) proposes the representation shown in () for the plural form of the English compound apron string:

()
               N[+plur]
              /        \
        N[+plur]      Af[+plur]
        /      \          |
       N     N[+plur]    -s
       |        |
     apron   string

11 The ‘diacritics’ level contains information about the phonological and morphological combinatorial properties of morphemes, roughly along the lines of LP (see §..). Note that the term ‘subcategorization frame’ corresponds to a standard notion in generative syntax, introduced by Chomsky () to account for the different combinatorial properties of lexical items, and is used here by Lieber to represent the relation between the input and the output category of a set of derived words. For a criticism of this non-standard use of subcategorization see Fradin ().
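Under these assumptions, word formation reduces to checking combinatorial frames and concatenating. The sketch below is a hypothetical rendering of this idea of my own devising, not Lieber's formalism: a frame such as ]N_]V is encoded as "attaches to N, yields V".

```python
# Item-and-arrangement concatenation with subcategorization checking.
# The encoding is hypothetical; ]N_]V becomes attaches_to="N", yields="V".

MORPHEMES = {
    "union": {"cat": "N"},
    "run":   {"cat": "V"},
    "-ize":  {"attaches_to": "N", "yields": "V"},   # cf. Lieber's ]N_]V
}

def attach(base, affix):
    """Concatenate an affix to a base if the affix's frame is satisfied."""
    b, a = MORPHEMES[base], MORPHEMES[affix]
    if a["attaches_to"] != b["cat"]:
        raise ValueError(f"{affix} subcategorizes for {a['attaches_to']}, "
                         f"not {b['cat']}")
    return {"form": base + affix.lstrip("-"), "cat": a["yields"]}

print(attach("union", "-ize"))    # {'form': 'unionize', 'cat': 'V'}
# attach("run", "-ize") raises ValueError: -ize attaches only to nouns.
```

The single mechanism (frame checking plus concatenation) is what allows these models to treat morphological and syntactic combination uniformly; the non-concatenative phenomena discussed below are precisely what this mechanism cannot express.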


The principles governing the combination of morphemes in the syntactic approaches to morphology discussed here may vary in some details. All, however, are based on mechanisms of combination and inheritance of the type exemplified. For instance, in Lieber’s () model a ‘percolation’ rule is responsible for the inheritance of the label of a lower node by a higher node. In particular, she proposes that a stem morpheme labels the first non-branching node dominating it, whereas an affix labels the first branching node dominating it. This mechanism, shown in (), conflates two different trees proposed by Lieber (: , example ()) to represent the structure of the English derived noun happiness:

()
           N
          /  \
         A    \
         |     \
       happy   ness
        +A      +N
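Percolation of this kind is easy to state as a procedure over labelled trees. The following is a rough sketch under simplifying assumptions of my own (binary trees as nested tuples; a single English-style convention under which the righthand daughter always supplies the label, since suffixes are heads, prefixes are not, and compounds are right-headed; the stem allomorph happi- so that plain concatenation yields the right spelling). None of the encoding comes from Lieber's own formalism.

```python
# Lieber-style percolation, heavily simplified. A leaf morpheme is a
# (form, category, is_affix) triple; a branching node is a pair of subtrees.
# In this English-style fragment the righthand daughter always supplies
# the category of the branching node.

def label(tree):
    """Return (category, form, is_affix) of a (sub)tree by percolation."""
    if isinstance(tree[0], str):           # a leaf morpheme
        form, cat, is_affix = tree
        return cat, form, is_affix
    _lcat, lform, _laff = label(tree[0])   # left daughter
    rcat, rform, _raff = label(tree[1])    # right daughter labels the node
    return rcat, lform + rform, False

happiness = (("happi", "A", False), ("ness", "N", True))
unhappiness = ((("un", "NEG", True), ("happi", "A", False)),
               ("ness", "N", True))

print(label(happiness)[:2])      # ('N', 'happiness')
print(label(unhappiness)[:2])    # ('N', 'unhappiness')
```

Collapsing everything to "rightmost daughter wins" is of course exactly the sort of simplification that the modifications to percolation discussed below were meant to repair.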

As seen in the previous section, morphemic models, and especially the application of syntactic principles to morphology, have been questioned in many respects. For instance, the assignment of a syntactic category to affixes is not supported by any factor independent of the theory-internal necessity of having a uniform mechanism of category assignment for derived and underived words. Moreover, several authors have observed that in many cases derived words do not simply inherit their category and other features from the affixes they contain, and various modifications to Lieber’s percolation conventions have been proposed (see Scalise : –; see also the discussion on the notion of head in §..). Finally, it has been pointed out that such clear-cut cases as the one exemplified in () and (), where each node corresponds to a morpheme that is clearly distinguished both on formal and semantic grounds, are rare in languages. On the contrary, instances of non-concatenative morphological phenomena, such as stem modification, templatic or subtractive morphology, and syncretism are frequent and militate against an overly rigid vision of ‘morphemes-as-things’. Although the notion ‘morpheme’ was not completely dismissed in word-based models, it was either downgraded to a mere epiphenomenon emerging from linguistic analysis, showing no particular theoretical interest (see Bochner : ; Blevins ), or radically revised. Aronoff (: ), for instance, assumes that affixes do not exist independently of the WFRs introducing them, a position presented in fairly similar terms by Anderson (: ) in the domain of inflection: ‘what represents {past} in sat is not the segment /æ/ but rather the relation between sat and sit, expressed as the process by which one is formed from the other’. 
Ultimately, the distinction between ‘morphemes-as-things’ and ‘morphemes-as-rules’ corresponds to the traditional distinction between Item-and-Arrangement and Item-and-Process theories (see Hockett  and Janda  for a detailed discussion of this issue). Today, it is uncontroversial, in word-based theories, to consider affixes, along with any other formal marker of morphological relations, as mere exponents of inflectional or derivational rules, with no existence per se. Under this view, morphology deals essentially with relations between words (lexemes or forms of lexemes), rather than with the construction of complex forms from simpler elements. Morphemes
still play an important role, by contrast, in generative morphology, and especially in DM (see in particular Marantz  for a recent discussion), although ‘morphemes’ in this theory are much more abstract, as they correspond to functional projections rather than to ‘objects’ displaying a direct form–meaning correspondence.

.. Conditions on morphological rules

Once rules were identified as the primary devices for word construction, especially in the field of word formation, the question of their scope and the restrictions they were subject to became a central issue of morphology. Accordingly, as discussed in §.., WFGG already assumed a rich system of constraints on the application of WFRs and of readjustment rules, several of which have been reviewed since, as have constraints proposed by other scholars in subsequent works. The goal of this section is not to give a precise account of these principles (for recent detailed overviews see Scalise and Guevara ; Gaeta b; Lieber ), but rather to discuss the changes these principles have undergone and to assess to what extent they can still be considered valuable. One of the most discussed principles of WFGG is the one Aronoff labelled the Unitary Base Hypothesis (UBH). Of the restrictions to which WFRs are subject, categorial restrictions seem to be the most generalized and systematic, or at least the only ones for which a specific principle is identified. According to the UBH, ‘the syntacticosemantic specification of the base, though it may be more or less complex, is always unique’ (Aronoff : ). Thus, words formed by means of the suffix ‑able from verbs (acceptable) and from nouns (fashionable) are considered to be the outputs of two distinct WFRs. Accepted as a basic principle of derivational morphology by many of the works that followed WFGG, the UBH has undergone several revisions, in order to account for apparent counterexamples (e.g. by Scalise : –). Today, the UBH is generally considered too rigid, and it is an accepted fact that base selection by WFRs cannot be determined on purely categorial grounds, but depends on multiple factors, including unpredictable lexical ones. It has been claimed, moreover (e.g. 
by Corbin ; Plag ), that the syntactic selection of bases of WFRs is a by-product of their semantic instruction. A symmetric principle, the Unitary Output Hypothesis (UOH), was proposed by Scalise (). In this case too, however, WFRs forming non-uniform outputs, either categorially or semantically (or both), are quite common and documented even in well-described languages (see Mugdan : –). Another principle identified for WFRs in their earliest formulation reflects the idea that no matter how complex a derived word is, its structure is always binary. Aronoff (: ) expresses this principle as ‘one affix, one rule’, while Scalise (: ) formulates it as the Binary Branching Hypothesis (see also Scalise and Guevara : – for examples). Although binary structures are generally adequate to describe the structure of complex words, there exist several examples of structures that apparently challenge a strictly binary view of morphological constructions. These include, for instance, so-called ‘bracketing paradoxes’ (atomic scientist, Williams ; Spencer ) and synthetic compounds (meat eater, see Fabb : –). In most cases, principles have been proposed in order to accommodate these facts within a strict binary view of morphological operations, for instance the non-matching of word structure at different levels of analysis. More recent approaches, such as Construction Morphology, on the other hand, admit the existence of
non-binary structures, and form–meaning mismatches in general, which result from either the conflation of different constructions (see, e.g., Booij c: – on synthetic compounds) or ‘second order schemas’ (see Booij and Masini ). The last two principles I will consider are directly linked to the LH, and particularly to the claim that syntax and morphology are distinct domains that are ‘blind’ to each other, a fact often referred to as the Lexical Integrity Hypothesis (see Lieber and Scalise  for a recent overview). The first principle is the so-called No-Phrase Constraint (first elaborated by Botha ), according to which only words can be the bases of morphological constructions, while syntactic constructions (apart from lexicalized ones, such as in every dayness, see Scalise : ) never can. The second principle, anaphoric islandhood, bans syntactic elements from being co-referential with one of the constituents of a complex word, a principle first formulated by Postal () (see also ten Hacken, Chapter  this volume). In both cases, empirical evidence has been adduced which suggests that these principles should be abandoned, or at least attenuated. The principles in question seem to capture tendencies of linguistic systems rather than rigid laws, and their (non-)application depends on multiple factors. Evidence against the No-Phrase Constraint includes real cases of derivation of syntactic units (see Carstairs-McCarthy ) and inflection inside derivation (see Booij , ), but also incorporation and other cases of systematic syntax/morphology mismatch (see Harris ). Examples of word-internal anaphora have been noted and discussed in the literature (see Lieber ; Ward, Sproat, and McKoon , and, for an overview, Montermini ), and provide further evidence indicating that an accurate account of the phenomenon cannot be provided without taking usage-related factors into consideration.

.. Head

The final notion elaborated in early Lexicalist Morphology and worthy of mention is that of ‘head’. This notion was directly imported from syntax and was thus especially popular in those (morpheme-based) models that sought a syntactic account of morphological facts. The first and most famous definition of morphological head is the one provided by Williams (), who defines heads as the elements, within a complex word, that share their distributional and presumably semantic properties with the whole word. In Williams’ model, heads are defined positionally according to a principle he famously labelled the Righthand Head Rule (RHR). According to this view, suffixes are always heads in complex words, prefixes are never heads in such words, and in compounds the head is the rightmost constituent, as the examples in () show (heads are given in bold): ()

-ionN → instructionN
instructV → reinstructV
barN, tendV → bartendV
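The RHR in its original, deliberately simple form can be stated in a few lines of code. The sketch below is a hypothetical illustration of my own (the function name and the pair encoding are not from Williams): the category of a complex word is simply that of its rightmost constituent.

```python
# Williams' Righthand Head Rule in its simplistic original form:
# the rightmost constituent determines the category of the whole word.

def rhr(*constituents):
    """Each constituent is a (form, category) pair; return the complex
    word and the category percolated from its rightmost member."""
    form = "".join(f for f, _cat in constituents)
    return form, constituents[-1][1]

print(rhr(("instruct", "V"), ("ion", "N")))   # ('instruction', 'N')
print(rhr(("re", None), ("instruct", "V")))   # ('reinstruct', 'V')
print(rhr(("bar", "N"), ("tend", "V")))       # ('bartend', 'V')
```

As the surrounding discussion makes clear, a rule this blunt fails for left-headed compounds and for evaluative and inflectional suffixes, which is why relativized versions of the notion were later proposed.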

This view of the head was clearly too simplistic and English-centred and was easily refuted on the basis of simple empirical observations, such as the fact that in many languages (e.g. the Romance languages) the element that determines the syntactic and semantic properties of compounds is the leftmost one. Moreover, Williams’ definition is too rigid, in that it does
not make any distinction between classes of elements. It has been observed, in fact, that there are at least two categories of suffixes that cannot be qualified as heads according to the definition given above, namely evaluative and inflectional suffixes (on the latter see in particular Scalise ). Taking these problems into account led Williams himself and other scholars to review and modify the notion of head, often in the direction of a relativized version of the notion (see Selkirk : ; Scalise ; Di Sciullo and Williams : –; Scalise and Fábregas ). More radical criticisms of the relevance of this notion for morphology overall were formulated within a few years after Williams proposed it (see Janda : –; Zwicky c; Bauer ). Among the arguments put forward by these authors are the fact that feature-sharing is not a necessary condition for the categorization of a larger unit by one of its constituents, the fact that the notion of head is compatible with a morphemic view of morphology but not with a process-based one, and, finally, the lack of independent arguments to justify the attribution of a lexical category to affixes (such as ‘noun’ to ‑ion in ()). Currently, most models within the lexicalist framework consider the notion of head pertinent only for compounding, where, at least in the case of endocentric compounds, it is uncontroversial that one element generally determines the syntacticosemantic properties of the whole (although even in this case it is possible to find exceptions, see Bauer : –). The notion remains relevant, by contrast, in models such as DM, for which the syntax/morphology analogy is still an issue.

. C: L 

A correct assessment of lexicalist morphological models, in particular in the early period, should both take into account the types of data and the theoretical tools available at the time of their elaboration and also consider the changes morphological research has undergone during the last four decades, which have led to a great diversification of approaches and methods. One of the reasons for this change is, of course, the overall theoretical development of linguistics as a discipline; the other, perhaps more crucial, is morphologists’ interest in ever larger and more diverse sets of data. Many such datasets come from typological research, others (real-life or experimental) from related domains such as historical linguistics, psycho- and neurolinguistics, acquisition studies, and computational linguistics.12 Finally, the development of computer-based linguistic research gave access to datasets of unprecedented dimensions and quality, a fact that necessarily had a tremendous impact on studies of morphology and of the lexicon in general (see Hathout, Montermini, and Tanguy  for an overview). The injection of such a mass of new data had an impact on several traditional notions and mechanisms that had been elaborated within the framework of generative Lexicalist Morphology, such as ‘(im)possible’ and ‘existing’ words, ‘blocking’, the action of restrictions on WFRs, and the very notion of rule. The deterministic nature of traditional WFRs has been questioned, and the co-occurrence of different outputs from the same base lexeme (either with different affixes or the same affix) has been acknowledged as a fact morphological theory should account for. Roché and Plénat (: ), for instance, have recorded the forms obamien, obamaien, obamanien, obamatien, obamasien, obamalien for the relational adjective constructed from the name Obama in French, and treat this variety in terms of constraint ordering, with base-derivative faithfulness constraints prevailing for obamien and obamaien and phonological constraints prevailing, for instance, for obamatien. More generally, morphological and lexical competition has become a legitimate and fruitful domain of investigation (see Aronoff , a). Many current morphological approaches can be directly or indirectly associated with Lexicalism. Apart from the identification of a morphological component independent from other linguistic domains, most of them share some basic assumptions, such as the idea that words (lexemes), rather than morphemes, are the basic units of morphology; that affixes and other formal devices are different in nature from lexical items and cannot be dissociated from the rules introducing them; and that derivation and inflection constitute two separate though interacting domains, dealing, respectively, with cross-lexemic and intra-lexemic relations. Among these approaches are Construction Morphology (Booij a), Network Morphology (Brown and Hippisley ), and Paradigm-Function Morphology (Stump ; Spencer a), each of which has a dedicated chapter in this volume, as well as the set of studies that can be grouped under the label ‘morphology by itself’, as in the title of Aronoff’s () seminal book. These include studies on the structure of lexemes and paradigms, from both synchronic and diachronic (e.g. Maiden ) points of view, and in particular on the properties of so-called ‘morphomes’, purely morphological entities of paradigmatic organization (see the articles contained in Luís and Bermúdez-Otero ).

12 For recent surveys on methodological issues in relation to (derivational) morphology, see in particular Baayen () and Lieber (b).
Other approaches are less directly linked to the lexicalist framework and the generative paradigm it came from, but nevertheless share some fundamental ideas, such as word-basedness or a view of affixes as exponents rather than lexical objects. This is the case for the model of canonical morphology developed by Corbett (a, ) (see also Bond, Chapter  this volume), which has its roots in typological studies, and also of models inspired by cognitive and functional linguistics, such as that of Bybee (). In conclusion, although Lexicalism cannot be considered a morphological theory proper, the set of models and approaches that can be gathered under this label has proven to be one of the most fruitful and effective in stimulating research in this field of study and in shaping the domain of morphology. Many of the issues currently debated in morphology and of the key ideas in use cannot be fully understood without reference to the theoretical ferment that emerged and grew during the s, following Chomsky’s LH, and to the work of such researchers as Halle, Jackendoff, Aronoff, Anderson, Lieber, and the many others who have followed them.

A I am grateful to Mark Aronoff, Louise Esher, Bernard Fradin, Rochelle Lieber, Julie Rouaud, Pius ten Hacken, and Anna M. Thornton for their valuable comments. Louise and Julie should also be thanked for having reviewed the English text. I am also deeply indebted to Jenny Audring and Francesca Masini for inviting me to contribute to the present book, for their useful comments, and for their great patience.


  ......................................................................................................................

  ......................................................................................................................

 

. Introduction

Distributed Morphology (DM, Halle and Marantz ) is a formal framework for grammatical theory that was developed in  as a response to the prevalence of Lexicalism in the Chomskyan tradition of generative grammar (beginning with Chomsky ). Much like the Minimalist Program (Chomsky ), with which DM is associated, DM is not actually a theory of grammar. Rather, it is a framework of assumptions upon which theories of both morphology and syntax are built. The key assumptions are: (a) that syntax manipulates morphemes as its terminal nodes, not fully formed “words”; (b) that the morphophonological expression of words is not present throughout the derivation but is rather inserted into a fully derived syntactic structure to realize that structure; (c) that listemes1 compete with each other to express those features; and (d) that listemes are underspecified for the environments they can appear in. In terms of morphological theory, DM assumes: (a) the item-and-arrangement approach to morphological study; (b) the morpheme-based hypothesis (see Aronoff ); (c) the single component hypothesis; (d) a realizational model of morphology (Beard ; Koenig ; Stump ); (e) an underspecification approach to morphological features; and (f) a competition-based model for realization. In this chapter, I closely examine each of these assumptions as I describe the framework, its background and current state, its strengths and weaknesses, and its potential future.

1 “Listeme” is a term coined by Di Sciullo and Williams () to mean a memorized correspondence between form and meaning.
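Assumptions (c) and (d), competition and underspecification, can be made concrete with a small sketch. Everything below is a hypothetical toy of my own, loosely modelled on English verbal agreement; it is not taken from Halle and Marantz or any other DM work. The idea: a listeme may be inserted only if its features are a subset of the terminal node's features, and among compatible items the most highly specified one wins.

```python
# Competition-based realization with underspecified vocabulary items:
# an item is a candidate if its features are a subset of the node's
# features; among the candidates, the most specific one wins.

ITEMS = [
    ("-s",  {"3sg", "present"}),
    ("-ed", {"past"}),
    ("-0",  set()),              # elsewhere item: compatible with any node
]

def insert(node_features):
    """Return the winning exponent for a terminal node's feature bundle."""
    candidates = [(form, feats) for form, feats in ITEMS
                  if feats <= node_features]
    winner, _ = max(candidates, key=lambda item: len(item[1]))
    return winner

print(insert({"3sg", "present"}))   # -s
print(insert({"1sg", "present"}))   # -0  (-s is too specific to compete)
print(insert({"3sg", "past"}))      # -ed
```

This subset-plus-specificity logic is the core of the competition the framework assumes: underspecified items cover many environments, and an elsewhere item survives wherever nothing more specific fits.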


. Background

While certainly a model of morphology today, Distributed Morphology is actually much more of a response to developments in syntactic theory than it is to those in morphological theory. In the s and s, in response to Chomsky (), “Remarks on nominalization”, which argued that unproductive morphology is handled by a component of the grammar that is separate from the syntax,2 the path of syntactic theory (and morphological theory) moved towards a Lexicalist model. A Lexicalist model assumes that there are two components of the grammar responsible for word and phrase formation. Though some non-Lexicalist models persisted during this time (most notably Generative Semantics; Lakoff ), this period saw the rise and dominance of Lexicalist models of syntax, including Government and Binding Theory (G/B, Chomsky ), Lexical Functional Grammar (LFG, Bresnan and Kaplan ), and Head-Driven Phrase Structure Grammar (HPSG, Pollard and Sag ; for an account of morphology in HPSG/LFG see Nordlinger and Sadler, Chapter  this volume). During this time there was no consensus as to the nature of the lexicon as a generative component of the grammar. For example, G/B is a mostly Weak Lexicalist3 model while LFG and HPSG are Strong Lexicalist4 models, and in morphological theory Halle () is Strong Lexicalist and Aronoff () is Weak Lexicalist. However, it is fair to say that mainstream research into generative grammar was generally done from within Lexicalist assumptions (i.e. it assumed a separate component for word formation).5 In the late s and early s, Lexicalism came under increasing criticism. Much of this criticism set the stage for DM, so I will cover the most prominent examples here, beginning with the Mirror Principle from Baker (, ). 
The Mirror Principle says that “morphological derivations must directly reflect syntactic derivations and vice versa” (Baker : ). Put more transparently, if two or more syntactic processes have morphological reflexes, the order in which the morphological processes are applied (from inside to outside) must be the same as the order of the syntactic processes. While there are many possible explanations for this mirroring, the simplest is to assume that the syntactic component is not separate from the morphological component.

Very soon after, Lieber () proposed a model of syntax that also fully functioned as a model of morphology. Her central claim was that the rules of X-bar theory could easily be adapted to headed structures in morphology (such as compounding and affixation). In the meantime, Anderson (, ) developed a model of the syntax that, while Lexicalist, is what is today called late-insertionist or realizational. A realizational model of morphology and syntax is one where the words of a sentence are not present throughout the syntactic derivation. Rather, words come in after the derivation is complete to express (or realize) the syntactic features. In Anderson’s (, ) model, a separate lexicon operates in parallel to the syntactic component, which operates entirely on formal features. The lexicon ultimately produces a paradigm of inflected forms from which the syntax selects the ideal form to express the combinations of features it has derived.

This is very much the perfect setting for the rise of DM. Like Lieber’s () model, DM completely rejects the idea of a lexicon and instead assumes X-bar rules apply all the way down to the internal structure of words. Like Anderson’s (, ) model, DM is a realizational model, where the “words” are inserted into an utterance after the syntactic derivation. In §.., I sketch the DM model of the grammar completely. First, however, since the Lexicalism debate is central to the development of DM, in §.. I quickly discuss Lexicalism.

2 Please see ten Hacken (Chapter  this volume) for detailed discussion of Chomsky ().
3 Weak Lexicalism assumes that inflectional morphology is a function of the syntactic component while derivational morphology is a function of the lexicon.
4 Strong Lexicalism assumes that all morphology is a function of the lexicon and only the lexicon.
5 Please see Montermini (Chapter  this volume) for detailed discussion of the development of Lexicalism from  on.

.. Rejecting Lexicalism

Chomsky () is often considered to be the genesis of Lexicalism and is treated as such in much of the Distributed Morphology literature, so I begin there. Chomsky () proposes a generative lexicon in response to two critical problems that morphology poses for a syntactic grammar: variable productivity and idiosyncratic meaning. For Chomsky (), the transformational syntactic component must be completely transparent and regular. The concern for Chomsky is ultimately derivational morphology, which is very seldom completely productive (i.e. a given derivational affix, with few exceptions, cannot appear with any given stem) and often creates idiosyncratic meaning. To illustrate this, Chomsky contrasts gerund constructions in English (such as destroying) with what he calls “derived nominalizations” (such as destruction). Importantly for Chomsky (), gerunds appear to be completely productive and maintain syntactic elements in derived phrases (e.g. John’s growing of tomatoes) while derived nominalizations are often unproductive or idiosyncratic and do not preserve syntactic relationships (e.g. John’s growth of tomatoes—the meaning of growth here is noticeably not the same as the meaning of growing above!).

For Chomsky () these differences had clear repercussions: deriving a gerund phrase from a verb phrase was clearly the output of a syntactic transformation similar to passivization (transformations were the mechanism that powered syntactic operations at the time), while the relationship between the verb grow and the noun growth is not grammatical, but rather lexical. Since a transformation that could derive nominalizations would be far too powerful and unconstrained, the nominalization is derived from the verb through a lexical rule. This has the effect of creating a generative component of the grammar whose job it is to create unproductive and semantically opaque forms.
Productive morphology, such as passivization and gerunds, would remain in the syntactic component in this model.

Di Sciullo and Williams () is also considered a major work in justifying Lexicalism, though it comes to very different conclusions from Chomsky (). Di Sciullo and Williams () argue that the justification for Lexicalism is chiefly two-pronged: (a) syntax and morphology involve fundamentally different processes, such as long-distance dependencies and headedness, and (b) words are indeed the atoms of syntactic processes, many of which cannot see inside the structure of a word. However, the work does make a compelling argument against Chomsky (). Chomsky’s () argument for Lexicalism is chiefly that words behave unsystematically. Di Sciullo and Williams () show that this unruliness is not limited to words. Indeed, it exists at every level of the grammar—from submorphemic elements, such as sound symbolism, to whole lexicalized phrases. Unruliness is not a property of morphological operations, but rather a property of listedness. Certain correspondences (typically form–meaning correspondences) must simply be memorized. Di Sciullo and Williams () show that the set of listed entities is in no way a natural class. All they have in common is that they are unruly. Since these listemes are not a natural class, listedness is not a compelling argument for Lexicalism.

On the other side of the debate, Lieber () stands as one of the essential sources for data that seem to falsify the Lexicalist hypothesis. To Lieber (), there is a class of data that cannot be categorically said to be either syntactic or morphological. Additionally, there is a class of data that is predicted not to exist if the lexicon indeed feeds the syntax and the syntax cannot see into the output of the lexicon. I discuss the most prominent of these here. Lexicalism in its strongest form argues that syntax can neither affect nor see the internal structure of words. Two types of data that Lieber () discusses seem to refute this. The first is sub-lexical co-reference. This is the phenomenon where the syntax refers anaphorically to an indexed element that is embedded within morphological structure and ought to be opaque to the syntax (e.g. I still consider myself a [California]_i-n though I have not been there_i for years). The second is phrasal compounding: this is the phenomenon where the dependent members of a compound are not single lexemes but are instead fully formed syntactic phrases (e.g. off-the-rack look). If indeed the lexicon feeds the syntax, phrasal compounds should not exist. Lieber () showcases clitics and particle verbs as phenomena that apparently belong to both morphology and syntax. The most famous example of the relevant type of clitics is the English possessive marker.
The possessive marker is phonologically bound to whatever is left-adjacent to it. However, its distribution strongly suggests that it heads a phrase that takes noun phrases as its complement and specifier (e.g. the mayor of New York’s office). It is also in complementary distribution with other determiners (cf. *the John’s car). Their being phonologically bound suggests that clitics are affixes. However, they behave like syntactic elements in every way save separability.

Particle verbs, such as give up, while idiosyncratic in meaning, are separable in English (I gave it up). In Dutch and Old English, however, particle verbs show a peculiar behavior that strongly suggests that they belong to both the domain of syntax and that of morphology. In Dutch SOV subordinate clauses, the particle typically surfaces as a prefix (dat Hans zijn moeder opbelde ‘that Hans his mother up-called’6). However, in V2 matrix clauses, the particle is separated from the verb via V2 verb raising (Hans belde zijn moeder op ‘Hans called his mother up’). That Dutch particles function as affixes in subordinate clauses and as separate words in superordinate clauses strongly suggests that there cannot be a categorical distinction between morphological and syntactic phenomena.

Marantz () makes two different classes of arguments in defense of DM and against Lexicalism. The first is fairly well known, having appeared throughout the literature on morphological theory: that the domain of “word” is not definable. The second is that the grammatical architecture that led to Lexicalism (i.e. Transformational Grammar) is no longer practiced, so the concerns are no longer relevant.

6 Example from Booij (c), which offers a different account for Dutch particles.

To the first class of arguments, Marantz () argues that three typical qualities are attributed to the domain of the word to justify its special status and thus warrant a separate component of the grammar. Those are (a) special sound, (b) special meaning, and (c) special structure. Marantz () claims that (a) the domain of word in syntax does not align with the domain of word in phonology (which can include clitics but exclude some affixes), (b) idiomatic meaning is not limited to the domain of words, and (c) complex morphological structure cannot have a simplex interpretation (transmission cannot mean just part).

To the second class of arguments, Marantz () argues that the concerns from Chomsky () are largely about deriving one phrase (a complex NP) from another (a complex VP) via transformation. However, by , generative grammar had moved past deriving one form from another via transformation. Rather, all the different forms are base-generated via different applications of X-bar rules. Also, Lieber () had already convincingly shown that X-bar theory could be employed for both syntax and morphology. Recall that Chomsky’s () concern is that transformations deriving complex nominals from verb phrases would be too powerful. Marantz () observes that since syntactic theory does not employ transformations of any kind, that concern is moot. All transformations were too powerful and have been abandoned. In effect, Marantz () claims that the original arguments for Lexicalism are outdated and so is the idea of Lexicalism.

.. The model of DM

DM is a framework of morphological study within the larger model of Principles and Parameters (P&P). Thus, the majority of the architecture of DM is inherited from P&P. When DM was developed in , the dominant model of syntax within P&P was G/B. G/B assumed the general model of the grammar seen in ():

()
            DS (D-Structure)
                  |
            SS (S-Structure)
             /           \
  PF (Phonological Form)  LF (Logical Form)
In this model, DS is the underlying representation, and SS is the derived representation that is the output of the syntactic operations. SS is sent to be mapped to a phonological representation (PF) and to a semantic representation (LF). What DM contributed to this model is also what lends its name to the framework. DM distributed the load of the morphology, which previously was assumed to feed DS, across the entire derivation. In Distributed Morphology, DS, SS, and LF are only hierarchical nested structures whose terminal nodes carry no phonological material (they are just bundles of formal features) and are completely unordered from left to right. The linear order and the spell-out of those features are partially the result of morphological operations that occur between SS and PF.

This is true of words in the abstract sense, but is also true of individual morphemes. In short, the distinction between prefix and suffix is completely missing at DS, SS, and LF and only comes to exist between SS and PF. In this way, DM is a radical departure from Lieber (), which argued that linear order of morphemes and words in phrases is derived from the principles of X-bar. This added step of morphological processes between SS and PF mandated a fifth level of representation, Morphological Structure (MS), that was operated on by processes that are morphological in nature rather than syntactic or phonological. This new structure is seen in ():

()
        DS
         |
        SS
       /    \
     LF      MS (Morphological Structure)
              |
             PF

MS was the locus of those procedures that were outside of the domain of the syntax but still affected the ultimate PF. Chief among these procedures was the addition of new terminal nodes to the derivation. A typical example of this type of added terminal node is AGR (Agreement). AGR has no interpretable features as a part of the syntactic structure. On the other hand, as a part of MS, AGR captures the function of morphological agreement via a copying operation that fills the new terminal node with material from elsewhere in the derivation. The opposite of this addition of terminal nodes is what is called “impoverishment” (Bonet ). Impoverishment is the deletion of formal features at MS and is primarily employed to account for syncretism (especially syncretism that falls outside of predictable natural classes; see §..).

Another set of morphological operations at MS manipulates the ordering and content of terminal nodes. This set includes head movement, morphological merger, fusion, and fission.7 Head movement had been a staple movement in P&P for a long time, and its being outside of the core syntactic operations was nothing new to the proposal of DM. Its inclusion in MS simply gave it a home. Morphological merger is the merging of two adjacent terminal nodes (or heads) into one complex head. Fusion is a process that takes all the combined features of a complex head and combines them into one simplex head that carries all the features previously contained in the complex head. Fission is the opposite of this: features in one head are split off and spread across many. The most important morphological operation in DM is Vocabulary insertion. Vocabulary insertion is the mechanism through which a Vocabulary item (VI), a correspondence

7 Like impoverishment, none of this set was originally proposed in DM; see Baker (), Koopman (), and Noyer (), among many more.

between a set of formal features and a phonological string (i.e. a listeme), comes to express the formal features generated by the syntax. It is because of this mechanism that DM is a realizational model of morphology. In DM, Vocabulary items compete for insertion into a complete syntactic derivation. When a Vocabulary item is inserted into a derivation, it removes all the interpretable features and replaces them with phonology. Insertion is such an important part of DM that I have dedicated all of §.. to it.

Finally, there are some operations that are post-insertion, linearization being the most obvious.8 These processes are considered PF processes because they occur after insertion, which is itself a PF operation. Affixation under adjacency (the process through which an affix attaches to a stem) is another such case. Perhaps the most significant of this class of morphological operations is “readjustment”. A readjustment rule replaces the phonology of a Vocabulary item. For example, the phonology of the stem mouse is readjusted to mice in the environment of plural. Readjustment, when combined with competition, is the main means through which DM accounts for stem allomorphy, which I detail in §..

Because DM is closely attached to P&P, when that model has changed, so too has DM. With the adoption of the Minimalist Program (Chomsky ) and later Phase Theory (Chomsky ), P&P abandoned the intermediate syntactic stages of DS and SS in favor of the spell-out model. Syntactic derivation ultimately ends at LF with no intermediate stages. At an intermediate stage (or several) in the incomplete derivation the utterance is sent to PF to be pronounced. This point is called “spell-out” ().

()
      Spell-out
       /     \
     PF       LF
DM follows suit and abandons MS as well. Now the morphological operations do not constitute their own stage of a derivation but are simply those that apply after spell-out. Additionally, it adds a final structure called the Encyclopedia. The Encyclopedia is the final stage of the derivation where LF and PF again come together. The Encyclopedia is another list like the Vocabulary. The Encyclopedia carries our extra-linguistic (or real-world) knowledge about the structural and lexical content of the derivation (such as the membership of a particular root in an idiom or a special meaning that a word gains in a given discourse environment). Given this function, the Encyclopedia may be considered extra-grammatical. For the purposes of this overview, however, it is simpler to assume it is grammatical. The basic structure of the DM grammar is seen in Figure ..

8 There are a few variants of DM that assume linearization comes before insertion (Embick ; Arregi and Nevins ).

[Figure content: Morphosyntactic features (e.g. [+N] [+singular] [3rd person]) feed the Syntactic Operations, which run to Spell-out. From Spell-out, Morphological Operations and Vocabulary Insertion (e.g. /kæt/ /-s/) derive the Phonological Form, while Logical Form, together with Morphological Operations and the Encyclopedia (non-linguistic knowledge, e.g. “little furry thing, likes to sleep on my face”), feeds the Conceptual Interface (Meaning).]
Figure .. The basic structure of the DM grammar
Source: Modified with permission from Siddiqi (); originally created by Meg O’Donnell.

. Morphology in DM

Having set up the background and basic model of DM, I now turn to its features as a model of morphology. The main aspects of DM that I focus on in this section are as follows. Section ..: DM assumes the morpheme-based hypothesis as opposed to the word-based hypothesis (see Aronoff ); §..: DM is an item-and-arrangement model of morphology as opposed to an item-and-process model (see Hockett ); §..: DM is an underspecification model (see Archangeli ); and §..: despite being primarily an arrangement model, DM also employs morphological rules. I devote the lion’s share of §. to §.., which discusses realizational morphology, the most distinctive feature of DM.

.. Morpheme-based morphology

In contemporary morphological theory, there are two fundamental distinctions between classes of theories. The first of these distinctions is what the theory takes to be its atoms. Following Aronoff (), there are two choices: morphemes or words. A word-based theory takes words as the inputs to morphological processes, which then output other words. For example, the input to the plural rule would be cat and the output cats. In such a model, the ‑s phonology that differentiates the two forms has no actual standing. That is, affixes are not independent of the word-form, but are rather just the collection of sounds that differ between the two forms.

On the other hand, a model that assumes morpheme-based morphology inherits a fair number of strengths and weaknesses that go along with it. The strengths are largely twofold: (a) a morpheme-based morphology allows a concatenative model of morphology, which I return to later in this section, and (b) there is very strong evidence for the existence of morphemes. For example, there are different phonological and phonotactic constraints on morpheme boundaries than there are morpheme-internally (e.g. the geminate at the morpheme boundary in unnatural is acceptable, but one inside a morpheme such as banner is disallowed). As a consequence, morpheme-based approaches need to be able to identify the component morphemes in a derived complex word. In several cases this is difficult (where is the plural morpheme in feet?), and in others it means we identify morphemes that have no meaning (what does were mean in werewolf or cran in cranberry?).

DM proposes a variation on the morpheme-based approach. DM certainly does not manipulate words, but in fact what is manipulated by the grammar is much smaller than morphemes. In DM, the generative component of the grammar manipulates abstract formal features, bundles of which are together realized by VIs.9 In reality, DM is not a morphemic model, but rather a submorphemic model. Indeed, the term “morpheme” has no real standing in DM. A “minimal sound–meaning correspondence” in DM is a Vocabulary item, which is better described as a listeme than a morpheme, as it is never actually manipulated by the generative component. However, since the relevant distinction is that words are not treated as atomic but rather contain separable parts, for practical purposes DM is a morpheme-based model.
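The division of labor just described can be sketched as a toy data structure: the generative system manipulates only feature bundles, while a Vocabulary item is merely a stored pairing of a feature bundle with a phonological string. The class name, feature labels, and transcriptions below are illustrative assumptions, not DM's formal notation.

```python
from dataclasses import dataclass

# A Vocabulary item is a passive listeme: a stored correspondence
# between a bundle of formal features and a phonological string.
@dataclass(frozen=True)
class VocabularyItem:
    features: frozenset   # what the generative component manipulates
    phonology: str        # what realizes those features at PF

CAT = VocabularyItem(frozenset({"N", "cat"}), "/kæt/")
PL = VocabularyItem(frozenset({"pl"}), "/-z/")

# The syntax only ever sees the feature bundles; the phonology plays
# no role until insertion, which is why the VI is a listeme rather
# than a unit the grammar builds with.
assert CAT.features == frozenset({"N", "cat"})
assert PL.phonology == "/-z/"
```

Note that nothing in this representation lets the syntax inspect or alter the phonology field, mirroring the claim that VIs are never manipulated by the generative component.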

.. Item-and-arrangement

The second major distinction in morphological theory is between item-and-arrangement and item-and-process models as described in Hockett (). An item-and-arrangement model assumes that morphology is limited to concatenation (similar to the Chomskyan view of syntax). Morphology takes two morphemes and sticks them together to create a complex entity. In item-and-arrangement, an affix is concatenated with a stem to make a word (or two words are concatenated to make a compound). In an item-and-process model, morphological processes are replacement algorithms that take the phonology of the input and replace it with different phonology to represent the added meaning. These replacement algorithms are often called rules.

This distinction comes with more than a few differences in predictions. Foremost, the item-and-arrangement model predicts that complex words ought to have a clearly segmentable affix10 that has been attached to the left or right of a stem. On the other hand, item-and-process models predict that what appears to be concatenative morphology has the same likelihood of appearing as any other morphological process (such as infixation, truncation, stem allomorphy, etc.) because there is no restriction on the output of rules. Both predictions are clearly false. It is plainly the case that non-concatenative morphology exists, but it is a strikingly small minority of the world’s morphology.11

The primary reason to assume the item-and-arrangement model in light of these different predictions is grammatical elegance and restrictiveness. The item-and-arrangement model employs only one generative process for morphology (concatenation), and it is a simple, highly restricted process. This makes the model of morphology highly parsimonious. Furthermore, an item-and-arrangement model can assume concatenation as the only process for both morphology and syntax. Thus, there is only one generative grammatical component that does the work of both the syntax and the morphology. This hypothesis makes the overall model of the grammar more parsimonious. In general, as scientists, we want to make as few assumptions as we can to explain natural phenomena.12 Assuming fewer mechanisms and fewer grammatical components, by this measure, is preferable to assuming more than one component and several types of rule. Limiting morphology to one simple process also restricts the model’s power. Restrictiveness is also generally considered necessary of a scientific theory, because a very powerful model which accounts for any possible data loses its ability to explain what does not occur in nature. For example, a rule that can replace the sounds in go with the sounds in went is powerful enough to generate any imaginable correspondence between input and output. Clearly the universals of language do not function in such an unrestricted manner, so in general, a model whose power is limited to the observable facts is preferable to one that is so powerful that it overgenerates.

9 In DM, the formal features are called morphemes. Since this terminology would be confusing in this Handbook, I will not adopt it here.
10 This segmentability must also be true of various forms of compounding—such as compound nouns, noun incorporation, and serial verbs—where there is stem–stem concatenation.
Furthermore, if a model has to restrict the power of its rules through stipulation of some sort, that power loses its explanatory value. Ultimately, the goal of scientific theory is to offer explanation. Restriction is typically assumed to be a means to that end. Like parsimony, restriction is typically to be preferred until sacrificing it increases a theory’s explanatory power (not just its descriptive power). Because of this difference in elegance and restrictiveness, item-and-arrangement models are still preferred by many practitioners despite their weaker empirical coverage.

For these reasons, item-and-arrangement models typically have to have some way of dealing with non-concatenative morphology. In particular, these processes need to be treated as if they are concatenative. For example, conversion (e.g. Let’s table this proposal) is treated in these models as the affixation of a null (or empty) morpheme to a stem. Similarly, stem allomorphy such as in mouse/mice or PROduce/proDUCE is treated as the effect of a null affix that conditions a phonological change. Templatic morphology, such as that found in Semitic languages, needs to be treated as a variety of infixation, which itself needs to be either a prefix or a suffix that becomes an infix for phonological reasons. DM, as a model that assumes the single-component hypothesis, is ultimately an item-and-arrangement model and so inherits all of its strengths and weaknesses.

11 This is not necessarily an impasse, though. There are diachronic reasons for why there is a larger proportion of concatenative morphology. Concatenative morphology is the result of grammaticalization, while non-concatenative morphology is the result of a phonological contrast being preserved after the conditioning environment has been bled (for example, the voicing difference present today in bath/bathe exists because it was previously conditioned by a schwa that is still spelled but no longer present). The former is simply much more likely to happen statistically (see Stump  for further discussion).
12 The philosophy of science guideline described here is called the Principle of Parsimony, which is also known, and widely misunderstood, as Occam’s Razor.
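The contrast between the two model types can be caricatured in a few lines: the arrangement model only ever concatenates stored pieces, while the process model applies an arbitrary rewrite to the input's phonology. The forms and the irregular lookup table are illustrative assumptions, not a claim about any particular analysis.

```python
# Item-and-arrangement: morphology is restricted to concatenation
# of stored pieces (stem + affix).
def arrange(stem, affix):
    return stem + affix

# Item-and-process: a rule maps input phonology to output phonology,
# with no principled restriction on what the rewrite may do.
def plural_process(stem):
    irregular = {"mouse": "mice", "foot": "feet"}  # wholesale rewrites
    return irregular.get(stem, stem + "s")

assert arrange("cat", "s") == "cats"
assert plural_process("cat") == "cats"
assert plural_process("mouse") == "mice"  # power concatenation lacks
```

The asymmetry in the last line is the point: the process model covers mouse/mice for free, but that same freedom is what makes it powerful enough to map any input to any output.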

.. Underspecification

One persistent issue in ongoing morphological research is syncretism. Syncretism is a one-to-many form-to-meaning correspondence (i.e. one form corresponds to several different meanings). For example, the English present tense copula agrees with its subject. Am is grammatical only with first person singular subjects, and is is grammatical only with third person singular. On the other hand, the form are occurs with all three persons in the plural and with second person singular. There are several different ways to treat this pattern. One is that there are four different words are which are coincidentally (synchronically) homophonous. Another is the opposite: there is only one word are and it is compatible with all four environments. A prominent way to accomplish the latter type of analysis is to argue that are does not mean these four different combinations:

• First person plural subject, present tense copula.
• Second person plural subject, present tense copula.
• Third person plural subject, present tense copula.
• Second person singular subject, present tense copula.

Rather, all are means is the shared meaning among those four. In this way are only means:

• Present tense copula.

Then the English present tense copula reduces to three different items, even though there are six different possible environments (a).

() a. English present tense copula

                 Singular                Plural
    1st person   [1st] [sg] [present]    [1st] [pl] [present]
    2nd person   [2nd] [sg] [present]    [2nd] [pl] [present]
    3rd person   [3rd] [sg] [present]    [3rd] [pl] [present]

                 Singular    Plural
    1st person   am          are
    2nd person   are         are
    3rd person   is          are

    am: first person singular, present tense copula
    is: third person singular, present tense copula
    are: present tense copula
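The underspecification analysis just illustrated can be sketched as a toy competition: each form is listed with only the features it is specified for, and among the forms compatible with an environment, the most highly specified one wins. The feature labels and the simple size-based tie-breaking are simplifying assumptions for illustration.

```python
# Toy underspecification: each form lists only the features it is
# specified for; the most specific compatible form wins, so the
# least specified form ("are") surfaces everywhere else.
COPULA = [
    ({"1st", "sg", "present"}, "am"),
    ({"3rd", "sg", "present"}, "is"),
    ({"present"}, "are"),            # elsewhere/default form
]

def realize(node_features, items=COPULA):
    # Keep the items whose features are a subset of the environment's,
    # then pick the one that matches the most features.
    compatible = [(feats, form) for feats, form in items
                  if feats <= node_features]
    feats, form = max(compatible, key=lambda pair: len(pair[0]))
    return form

assert realize({"1st", "sg", "present"}) == "am"
assert realize({"3rd", "sg", "present"}) == "is"
assert realize({"2nd", "sg", "present"}) == "are"  # no better match
assert realize({"1st", "pl", "present"}) == "are"
```

Six environments are served by three listed items, with no appeal to four homophonous entries for are.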

Because English has a word available to express first person singular, it always uses am there. Similarly, it always uses is in third person singular environments. However, in all other environments, English employs are because are is the least specified for the environments it can appear in. For this reason, are is called the “elsewhere” or default form. It can go in so many environments because it is underspecified for the environments it can appear in. In underspecification models of grammar, this elsewhere form is prevented from being used in all environments by a constraint on the grammar that requires that the form that best fits the environment is selected.

Underspecification models of grammar have a particular complication. In general, underspecification models avoid positing two homophonous and nearly synonymous forms for reasons of parsimony. Syncretisms that do not easily pattern in natural classes are difficult to accomplish with underspecification without such a stipulation. For example, returning to the English copula, the past tense were (b) is clearly the elsewhere condition, but was has this distribution:

• First person singular subject, past tense copula.
• Third person singular subject, past tense copula.

This distribution is not a natural class: second person singular is missing, so was cannot simply be the singular form, and first and third person do not share a common underlying feature (such as [participant]). In this case, DM employs an impoverishment rule which deletes the [singular] feature in the environment of second person (c).

b. English past tense copula (Halle and Marantz ) st person

Singular

Plural

[st] [sg] [past]

[st] [pl] [past]

nd person [nd] [sg] [past] [nd] [pl] [past] rd person [rd] [sg] [past] [rd] [pl] [past]

Singular Plural st person

()

was

were

nd person were

were

rd person was

were

c. Impoverishment of number in English second person [#] ! NULL / [nd][v]

After the application of the impoverishment rule, the distribution of the two past tense copulas is easily accounted for with underspecification: was is the singular form and were is the elsewhere form (d).

  ()



d. English past tense copula redux Singular st person

Plural

[st] [sing] [past] [st] [pl] [past]

nd person [nd] [past]

[nd] [past]

rd person [rd] [sing] [past] [rd] [pl] [past] was: singular, past tense copula were: past tense copula
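The two-step analysis can be sketched together as a toy pipeline: an impoverishment rule deletes number in second-person environments before insertion, after which plain underspecified competition selects was or were. The feature labels are illustrative assumptions.

```python
# Toy impoverishment feeding insertion: delete number ([sg]/[pl]) in
# the environment of [2nd] before the Vocabulary items compete.
PAST_COPULA = [
    ({"sg", "past"}, "was"),   # singular form
    ({"past"}, "were"),        # elsewhere form
]

def impoverish(node_features):
    # [#] -> NULL / [2nd] __ : strip number features for 2nd person
    if "2nd" in node_features:
        return node_features - {"sg", "pl"}
    return node_features

def realize_past(node_features):
    feats = impoverish(node_features)
    compatible = [(f, form) for f, form in PAST_COPULA if f <= feats]
    return max(compatible, key=lambda p: len(p[0]))[1]

assert realize_past({"1st", "sg", "past"}) == "was"
assert realize_past({"2nd", "sg", "past"}) == "were"  # number deleted
assert realize_past({"3rd", "pl", "past"}) == "were"
```

Once [sg] is gone from the second-person bundle, was is no longer compatible with it, and the non-natural-class distribution falls out of two listed items plus one deletion rule.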

.. Rules

Despite being primarily an item-and-arrangement model of morphology, DM does employ a fairly large number of idiosyncratic rules (as seen with impoverishment in (a–d)). As mentioned before, these come in a wide variety of types. Most common are rules that change feature structures in particular environments prior to Vocabulary insertion. Impoverishment rules, for example, remove features in certain environments to condition insertion. Rebracketing rules (e.g. see Radkevich ), for another example, change the syntactic hierarchical structure of a complex head into a different hierarchical morphological structure. Fusion and fission are also rules that change the hierarchical structure generated by the syntax.

The rules that apply after insertion are much more familiar as morphological operations. For example, a readjustment rule conditioned by the plural environment changes the stem mouse to mice after both have been inserted. Similarly, sleep is readjusted to slep- in the environment of past tense. An excellent example of the power of these post-insertion rules is the analysis of the classic case of French determiners shown in Embick (). In French de and à preposition phrases, the determiner either fuses with the preposition (au jus ‘with the juice’ rather than *à le jus) or it cliticizes to the noun (de l’arbre ‘of the tree’ rather than *de le arbre), depending on whether the following noun is vowel-initial (cliticization to the noun) or consonant-initial (fusion with the preposition).13 Embick () captures these data by proposing two different readjustment rules. The first readjusts the phonology of le to l’ and converts it to a proclitic. The second fuses the determiner with de or à to yield du and au. These rules are crucially ordered, as the clitic rule bleeds the fusion rule.
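The ordered-rule analysis can be caricatured as two string rewrites applied in a fixed order, where the first (cliticization before a vowel-initial noun) bleeds the second (fusion with the preposition). The vowel test and the tiny fusion table are simplified stand-ins, not Embick's actual formalization.

```python
# Toy ordered rules for French de/à + determiner: cliticization
# before a vowel-initial noun bleeds preposition-determiner fusion.
FUSION = {("à", "le"): "au", ("de", "le"): "du"}

def spell_pp(prep, det, noun):
    # Rule 1 (cliticization): le/la -> l' before a vowel-initial noun.
    if det in ("le", "la") and noun[0] in "aeiouéè":
        return f"{prep} l'{noun}"
    # Rule 2 (fusion): only reached if Rule 1 did not apply (bleeding).
    if (prep, det) in FUSION:
        return f"{FUSION[(prep, det)]} {noun}"
    return f"{prep} {det} {noun}"

assert spell_pp("de", "le", "arbre") == "de l'arbre"  # rule 1 bleeds rule 2
assert spell_pp("à", "le", "jus") == "au jus"         # rule 2 applies
assert spell_pp("de", "la", "table") == "de la table" # neither applies
```

Reversing the order of the two checks would wrongly fuse de + le before a vowel-initial noun, which is exactly why the ordering is crucial.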
These rules, impoverishment and readjustment rules in particular, create a curious conundrum for DM, which, as mentioned above, is an item-and-arrangement model of morphology. Readjustment rules clearly belong to item-and-process morphology and thus inherit the weaknesses of item-and-process models: they are extremely powerful and unrestricted.14 In this way, DM sacrifices two main strengths of item-and-arrangement, its restrictiveness and parsimony, for the weaknesses of item-and-process models. Fusion and fission raise another concern, one of “look-ahead”: in short, fusion only applies when

13 The hiatus-avoiding cliticization is independent of the presence of a preposition. It happens to bleed the fusion.
14 For example, a readjustment rule must be employed in DM to derive thought from think, which involves almost complete replacement of the phonology.

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi



 

the Vocabulary contains a portmanteau item that requires two heads to fuse, but fusion precedes access to the Vocabulary, so this information is not yet available at the relevant stage.

In light of these concerns, a significant amount of discussion revolves around the presence or absence of these processes in DM. Trommer () was the first to propose a model of DM that was completely free of rules of these types, leaving insertion as the only morphological process. Noyer (), Halle (), Bobaljik (, ), Caha (), Siddiqi (), Starke (), Radkevich (), Bermúdez-Otero (, ), Embick (), Harley and Tubino Blanco (), Harley (), Haugen and Siddiqi (, ), Alexiadou (), Haugen (), and Svenonius () are some examples that engage with the inclusion of these processes in morpheme-based realizational models such as DM.

.. DM as a realizational model

DM is a realizational model of morphology, which assumes that morphology expresses contrasts rather than generating them incrementally (see Beard ; Stump ). In particular, DM is a lexical-realizational model of morphology according to Stump’s (, c, and Chapter  in this volume) taxonomy of morphological theory.15

The DM equivalent to the lexicon (i.e. the location of stored form–meaning correspondences) is the Vocabulary. The Vocabulary is a passive list of listemes, each of which is simply a correspondence between a set of features and a phonological string. Vocabulary insertion is the process through which one of these items comes to replace the features in a terminal node and express (or realize) that formal material.

Insertion in DM is competition-based. In abstract terms, every VI (Vocabulary Item) is competing to be inserted into every terminal node. The most important constraint on insertion, the one that determines which VI wins the competition, is the Subset Principle. The Subset Principle mandates that the VI that expresses the greatest subset of the features in the target node is the one that wins the competition.

To return to the English copula example, we can assume that the relevant node in the sentence She was happy contains the following features following head movement and other similar morphological operations:

[v] [3rd person subject] [singular subject] [feminine subject] [past tense]

We can assume the candidates in this competition are seen in ().16

15 The relevant contrast here is that DM is a morpheme-based realizational model while models such as Paradigm Function Morphology (PFM) are word-based, which Stump (, c) refers to as inferential-realizational. Stump (Chapter  this volume) discusses this at great length, so I point the reader to that chapter. Please also see Stump (Chapter  this volume) for a description of PFM.
16 Whether or not the copula expresses [v] is not relevant here, so I assume it just for clarity’s sake.


  ()

was were are am is be been



[v] [past][singular] [v] [past] [v] [present] [v] [first person] [singular] [present] [v] [third person] [singular] [present] [v] [infinitive] [v] [participle]

Of those candidates, was and were are the only two that do not contain features conflicting with those in the target node, so they are the only two that are not excluded from the competition. Of those two, was contains the most features of the target node and is chosen for insertion.

However, I have purposely made this competition much simpler than it actually is. To illustrate this, I want to turn quickly to another set of data before I return to the English copula. The Spanish determiner makes for a very good example of insertion17 because it is maximally contrastive for gender, number, and definiteness ().18

()

              masculine, singular   feminine, singular   masculine, plural   feminine, plural
definite      el                    la                   los                 las
indefinite    un                    una                  unos                unas

This seems to suggest, since there is no syncretism, that the feature specification of the eight Vocabulary items is pretty straightforward. Each item would be specified for precisely the three features it realizes. This would give the specifications seen in ():

()

el      [D] [masculine] [singular] [definite]
la      [D] [feminine] [singular] [definite]
los     [D] [masculine] [plural] [definite]
las     [D] [feminine] [plural] [definite]
un      [D] [masculine] [singular] [indefinite]
una     [D] [feminine] [singular] [indefinite]
unos    [D] [masculine] [plural] [indefinite]
unas    [D] [feminine] [plural] [indefinite]

However, most theories of morphological markedness and underspecification assume that in a contrast such as masculine/feminine or definite/indefinite, only the marked member of the contrast is actually expressed with a feature, for reasons of simplicity, elegance, and economy. For the purposes of illustration, we can assume here that [indefinite], [feminine], and [plural] are the marked features.

17 Siddiqi () also uses these data.
18 I would like to thank Jaime Parchment, who helped me to compile these data, and an anonymous reviewer who helped me determine the underspecification pattern for this chapter.




 

Another complication to the features described in () is that standard agreement hypotheses assume the agreement features are not in the determiner node, but are rather in the terminal node with the noun.19 This means that the Vocabulary items select for, and express, features which are elsewhere in the derivation. This type of selection is called secondary exponence. The Vocabulary items in () are the primary exponents of the features in the D node ([D] and [(in)definite]), but they are also secondary exponents of some features found in the nominal terminal node (the noun would be the primary exponent of these features). In other words, insertion of the Vocabulary items is primarily conditioned locally but is also conditioned by features in the surrounding derivation (typically assumed to be constrained by a c-command relationship). These secondarily expressed features are typically indicated with parentheses.

Thus, a more accurate description of the Spanish determiners, considering the above constraints, is seen in (). One thing that becomes clear is that one candidate is still the elsewhere candidate even in a maximally contrastive set. Another is that insertion must be cyclic: since agreement features such as gender and number are tied to VIs that are inserted in the nominal position, those VIs must already have been inserted in order for the insertion of the determiner to make reference to those features.

()

el      [D]
la      [D] ([feminine])
los     [D] ([plural])
las     [D] ([feminine]) ([plural])
un      [D] [indefinite]
una     [D] [indefinite] ([feminine])
unos    [D] [indefinite] ([plural])
unas    [D] [indefinite] ([feminine]) ([plural])
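Mechanically, the competition just described is a search for the Vocabulary item whose feature set is a subset of the target node's features, with the largest such subset winning. A minimal sketch in Python (purely illustrative; the names are my own, and secondary/agreement features are flattened into ordinary set members rather than being marked as parenthesized):

```python
# Illustrative sketch of Subset-Principle-driven Vocabulary insertion
# for the Spanish determiner entries given above.

VOCABULARY = {
    "el":   {"D"},
    "la":   {"D", "feminine"},
    "los":  {"D", "plural"},
    "las":  {"D", "feminine", "plural"},
    "un":   {"D", "indefinite"},
    "una":  {"D", "indefinite", "feminine"},
    "unos": {"D", "indefinite", "plural"},
    "unas": {"D", "indefinite", "feminine", "plural"},
}

def insert(target):
    """Return the VI realizing the largest subset of the target's features."""
    candidates = {form: feats for form, feats in VOCABULARY.items()
                  if feats <= target}          # Subset Principle: no conflicts
    return max(candidates, key=lambda f: len(candidates[f]))

print(insert({"D", "feminine", "plural", "indefinite"}))  # unas
print(insert({"D", "feminine", "singular", "definite"}))  # la
```

Note that *el*, specified only for [D], is a licit candidate in every competition and so falls out as the elsewhere item, exactly as described above.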

I return now to the example of the English copula. For the same reasons we would limit the features in the case of the Spanish determiner, we would also do so for the features of the Vocabulary items in (), giving us the slightly different set of features seen in ().

()

was     [v] [past] ([singular])
were    [v] [past]
are     [v]
am      [v] ([first person]) ([singular])
is      [v] ([third person]) ([singular])
be      [v] [infinitive]
been    [v] [participle]

19 Though, you could also assume that they are copied into the D node after spell-out. I assume here that they have not been, for purposes of illustration only.


 



With this refined feature specification, there are now four candidates competing for insertion in She was happy, which we recall contained the following features in the relevant nodes (features present in c-commanding nodes are indicated with parentheses):

[v] [past tense] ([3rd person subject]) ([singular subject]) ([feminine subject])

Those four candidates are was, were, are, and is. The candidates were and are are eliminated from the competition because better-specified candidates exist, but the competition still results in a tie: is and was both realize three of the five relevant features. In an underspecification model, ties of this sort are likely to occur, so the model must have a mechanism for breaking them. The original proposal by Halle and Marantz () simply stipulated that the Vocabulary was crucially ordered: the tiebreaker is that the winning candidate is ranked higher in this crucial ordering. Later, Noyer () proposed that formal features exist on what he named the Universal Hierarchy of Features. In this model, the VI that realizes features higher on the hierarchy is the one selected. In the example of the English copula, tense outranks subject agreement, so in the tie between was and is, was is the selected candidate (it expresses the more important feature combination).

Much of the ongoing discussion about insertion revolves around precisely what parts of the tree are realized by VIs. Recall that the original DM proposal limits insertion to terminal nodes in the syntax. This not only requires secondary exponence as discussed above, but it also requires a fair number of morphological processes dedicated to moving features in and out of terminal nodes (such as fusion and fission). Largely on concerns of parsimony, restrictiveness, and elegance, many different proposals that change the targets of insertion have arisen in the last few years.
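The hierarchy-based tie-breaker can be sketched the same way as the subset competition. The numeric ranks below are purely illustrative stand-ins for a hierarchy in which tense outranks subject agreement; nothing else of Noyer's proposal is encoded, and the entries follow the copula specifications given above:

```python
# Toy sketch of insertion with a feature-hierarchy tie-breaker.
# Lower rank = higher on the (illustrative) hierarchy.
RANK = {"v": 0, "past": 1, "infinitive": 1, "participle": 1,
        "1st": 2, "3rd": 2, "singular": 3}

COPULA = {
    "was":  {"v", "past", "singular"},
    "were": {"v", "past"},
    "are":  {"v"},
    "am":   {"v", "1st", "singular"},
    "is":   {"v", "3rd", "singular"},
    "be":   {"v", "infinitive"},
    "been": {"v", "participle"},
}

def insert(target):
    # Subset Principle: keep candidates with no conflicting features.
    candidates = {f: fs for f, fs in COPULA.items() if fs <= target}
    best = max(len(fs) for fs in candidates.values())
    tied = [f for f, fs in candidates.items() if len(fs) == best]
    # Tie-breaker: prefer the candidate realizing higher-ranked features.
    return min(tied, key=lambda f: sorted(RANK[x] for x in COPULA[f]))

# 'She was happy': [v] [past] ([3rd]) ([singular]) ([feminine])
print(insert({"v", "past", "3rd", "singular", "feminine"}))  # was
```

Here *was* and *is* tie at three features each, and *was* wins because [past] outranks [3rd] on the illustrative hierarchy.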
The most prominent is Nanosyntax, which (among several other key distinctions, such as spell-out-driven movement, the Superset Principle, and the Anchor Condition; see Starke ; Caha ) argues that entire syntactic structures are realized by Vocabulary items. Insertion begins with a terminal node and works up the tree cyclically. When insertion reaches a larger structure that can be expressed by a different item, that new item is inserted through a process called Cyclic Override. For example, bring might realize only the root node, while brought would realize the root node and all of the syntactic structure above it up to and including the past tense node. Thus, brought is inserted to replace that entire syntactic structure, including the previously inserted bring.

A noticeably less ambitious overhaul is the proposal by Radkevich (), which allows insertion to happen at non-terminal nodes as well as at terminal nodes. Somewhere in between is Svenonius’ () proposal of Spanning (building on Ramchand’s  proposal), which argues that insertion can replace spans (Williams ; Melnar ), that is, stretches of the complement line of a tree that do not necessarily contain all the material c-commanded by the highest node targeted. These are just some of the variations on the mechanism of realization/insertion currently being proposed in the literature, showing that the theory of insertion is still very much in flux.




 

. The interfaces of DM

..................................................................................................................................

The interface with other components of the grammar is typically one of the most interesting parts of a model of linguistics. In this case, however, there is not much to say. Morphology and syntax are the same component, and with developments in P&P such as they are, semantics is increasingly being done by that same component as well. The Encyclopedia (the interface of LF and PF) is, to my current knowledge, not explicitly described much, if at all, in the literature.

On the other hand, the interface with the phonology in DM is well developed in the literature. As I mention above, a great many of the morphological processes that happen in DM, in particular Vocabulary insertion and those processes that follow it, are said to happen at PF. Strictly speaking, this makes them phonological processes. This means that the interface of the phonology and the morphology/syntax is well developed, though importantly that interface is not especially articulated with any particular theory of phonology.

In recent years there have been several attempts to articulate Distributed Morphology with Optimality Theory (Prince and Smolensky ). Embick’s () monograph is ultimately about rejecting a harmonic approach to morphology and phonology, while on the other hand Trommer (), Bye and Svenonius (), and Haugen () argue for an explicit interface between DM syntax and Optimality Theory (OT) phonology. Such an interface is very attractive for Distributed Morphology: because DM is an item-and-arrangement model, it is very good at providing accounts of local processes, but it struggles with non-local processes. Interfacing with an Optimality Theoretic phonology allows DM to avail itself of one great strength of OT, namely dealing with non-local phonological effects (such as reduplication patterns).

. Major morphological issues in DM

.................................................................................................................................. In this section, I take just a moment to discuss the state of major morphological issues in DM. Sections .. through .. are brief looks at the issues of the derivation/inflection split, productivity, and blocking. Section .. is a more elaborate account of the difference between content morphemes and functional morphemes as the division is significant in DM. Section .. is a long look at the phenomenon of stem allomorphy in DM, which is the focus of much recent discussion in the morphological literature.

.. Derivation vs. inflection

The main distinction between derivational morphology and inflectional morphology is that derivational morphology derives new lexemes while inflectional morphology generates different word-forms. However, lexemes do not exist in any meaningful way in DM. This is because DM, as a syntax-based (also called Root-based) model of morphology, does not assume a categorical distinction of “word”. “Words” are epiphenomenal in DM. By extension, lexemes, which are groups of word-forms that are all the same “word”, are also epiphenomenal. In DM, all the different “words” that share a common stem (i.e. a word family) are simply the result of different functional syntax projected above the Root. Thus a distinction between a derived form that represents a different lexeme from the stem (catty) and an inflected form that represents a different word-form of the stem (cats) cannot exist in DM. Furthermore, the idea of a “stem”, a common underlying element that is manipulated by morphological processes, does not exist in DM (in the way that it does in stem-based models of morphology).

As an effect of this difference, the typical definitional distinction between derivational and inflectional morphology cannot be relevant. Even Anderson’s () criterion that inflection comprises those processes that are relevant to the syntax cannot apply in DM, because all pre-insertion morphological processes in DM are syntactic and are thus relevant to the syntax. Indeed, the only distinctive feature that could still be appropriate in DM is that derivational morphology is always inside of inflectional morphology. Yet in DM there is no theoretical reason for a distinction between two types of morphology, as all morphological heads are ordered by the syntax, so precedence could not entail a categorical difference.

While this lack of distinction has merit, since the distinction has been shown to be largely unsupportable throughout the literature (see Anderson  and Di Sciullo and Williams  for discussion), Marantz () accounts for the surface generalizations that gave rise to the distinction (locality of derivational affixes, possible idiosyncratic form and meaning of derivational morphology, etc.) by arguing that they are an effect of the interaction of locality and phases.
Marantz () argues that so-called derivational morphology occurs inside of the first phase (the category-determining phase: a, n, v, etc.),20 while inflectional morphology occurs outside that first phase.

.. Productivity

Productivity has no explicit role in Distributed Morphology. Many of the phenomena attributed to productivity ultimately reduce to the licensing of insertion in DM. For example, ‑ity is less productive than ‑ness because ‑ness is the elsewhere item in that particular competition; ‑ity is simply better specified for the environments in which it can appear. In this sense, there are no affixes that are more productive than others; there are simply larger and smaller distributions of the environments that affixes can appear in.

The other morphological rules, such as readjustment rules, vary similarly. For example, returning to the example of the French determiners, in a handful of environments the fusion fails where it ought to appear (e.g. de la mère “of the mother”). This is not a function of the rule in DM, which like all rules in syntax must be completely productive, but rather a function of cyclicity and rule ordering (see Embick ). Readjustment rules can appear to vary in productivity (such as the rule that changes the vowel in drive–drove, which has extended to dive–dove), but this is an effect of the rule simply having a larger set of conditioning environments.

In short, since DM is based on Minimalist syntax and every process in Minimalist syntax is assumed to be completely productive, no processes in DM vary in productivity. Consequently, differences in productivity in DM are entirely an effect of the conditioning environments for the rules or items being inserted, or an effect of ordering or cyclicity. The effects of productivity on the DM grammar have very recently re-entered the discussion; see, for example, Punske () on regularity and Haugen and Siddiqi ().

20 DM inherited the concept of “little v” from Minimalism and came to assume that “little v” is the syntactic equivalent of a verbalizer in morphology. This was extended to adjectivizers (little a) and nominalizers (little n). Ironically, “big V” and its ilk have since come to have no status in DM, eliminating the need for “little v” to be contrastively lower case in the first place. Thus, the fact that categorizing heads are indicated with lower-case letters in DM is now just a quirk of history.

.. Blocking

Blocking is the process through which a productive process is stopped from occurring, either because another less productive process applies to that stem (*foots is blocked by feet) or because the application of the process would result in a synonym of another word (fury blocks furio(u)sity) (see Aronoff  for detailed discussion). As with productivity, blocking has no formal mechanism in Distributed Morphology.

Embick and Marantz () argue that two crucial assumptions need to be made in order to assume that blocking even exists as a phenomenon that needs explaining. The first is that blocking involves some competition between two “words”. The second is that ungrammatical words like foots exist in some way such that they can even be considered. Embick and Marantz () argue that neither is a good assumption in DM. First, in DM, fully formed words do not compete with each other in any articulated way; rather, competition happens at the level of the morpheme. At no point in the grammar is foots constructed such that it can then compete with feet for insertion. Rather, the derivation with foot and the plural feature can only be resolved one way: with feet. This is because the zero morpheme is better specified for the competition than the ‑s that was competing with it for insertion. Furthermore, the readjustment rule that changes the phonology of foot to feet is going to apply in any derivation with both foot and the plural feature; there is no possible derivation in which this rule fails to apply. This means that *foots does not exist in the grammar in any way. It cannot compete with feet because it has not been constructed, and even if it were grammatical, fully formed words are not stored in DM: it simply is not present to compete with feet. Thus blocking has no theoretical standing in DM because forms like *foots are simply ungrammatical derivations, like *Me kick ball. They fail to exist because they are ungrammatical.
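The logic of this account can be sketched procedurally: the competition at the plural node and the readjustment rule both apply deterministically, so *foots is never even a possible output. A toy sketch (the licensing list and rule are illustrative, not an exhaustive analysis):

```python
# Toy sketch: why *foots is never generated in a DM-style derivation.

def spell_plural(root):
    # Competition at the plural node: a zero affix licensed by listed
    # roots beats the elsewhere item -s.
    affix = "" if root in {"foot", "mouse"} else "s"
    # Readjustment applies in every derivation containing these roots
    # plus the plural feature, so the unreadjusted stem never surfaces.
    readjusted = {"foot": "feet", "mouse": "mice"}
    stem = readjusted.get(root, root)
    return stem + affix

print(spell_plural("foot"), spell_plural("cat"))  # feet cats
```

Because the zero affix and the readjustment rule are conditioned by the same environment, there is no derivational path on which *foots could be assembled and then "blocked"; it simply never exists.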

.. Functional vs. lexical

Unlike productivity and blocking, the contrast between functional (or grammatical) and lexical (or content) morphemes (called f-morphemes and l-morphemes in the DM literature; see Harley and Noyer ) is crucial in Distributed Morphology. In DM, the fundamental difference between an l-morpheme and an f-morpheme is that an l-morpheme realizes a root (indicated with a √). A √ is a formal feature that the grammar manipulates like any other formal feature; the crucial difference is that a √ links to an extragrammatical concept. This fundamental difference has a number of ramifications for the model of the grammar.


 



In the early models (see for example Harley and Noyer ), roots in DM come in only one variety: √. All Vocabulary items that realize roots (l-morphemes) realize the same root. That is, both cat and dog express √. As far as the syntactic component of the grammar is concerned, cat and dog mean precisely the same thing. The crucial ramification of this (one that is not readily obvious) is that the words cat and dog cannot be competing with each other: they would always tie for insertion. For this reason, l-morpheme VIs in the original proposal do not participate in competition. There are simply innumerable possible grammatical realizations of any given derivation that contains a √ (which would be most of them).

The fact that l-morphemes do not participate in competition initially gives rise to the obvious concern that not all l-morphemes are created equal. L-morphemes rather famously have subcategorization frames, giving rise to radically different grammaticality from one environment to the next. Of course, this information is nothing new and has motivated differences in the lexical entries of words since as early as Aspects (Chomsky ). The solution as first proposed in Harley and Noyer () is that l-morphemes are licensed for insertion by the surrounding environment, usually by c-commanding material. For example, put is limited to insertion into verbal environments that contain a subject, an object, and a locative argument.

In the years since Harley and Noyer (), several proposals have been put forth to explain the nature of √. One significant reason is stem allomorphy. In the original proposal, mice cannot be a listed item that competes with mouse for insertion because roots do not compete for insertion.
An approach that attempts to have mouse and mice listed as separate VIs with their insertion conditioned by different licensing quickly runs into one of two problems: either () *mouses and mice exist in parallel, or () mice defeats *mouses but also defeats rats (and cats . . . and dogs . . . and all regularly inflected nouns; see Siddiqi  for discussion). This means that the only available mechanism for treating stem allomorphy is readjustment rules. It also means that mice is fundamentally not its own listeme, despite evidence in the literature that irregulars are indeed listed (I point the reader to Baayen  et seq. in particular, but the claim is as old as Aronoff ). In the original proposal of DM, mice has no status other than as the result of a readjustment rule that changed /aʊ/ to /aɪ/.

An early proposal suggesting that not all l-morphemes realize the same generic root feature √ is Pfau (, later published as ). Pfau’s claim is that psycholinguistic data, in particular speech error data, convincingly suggest that we know the conceptual content of an l-morpheme we are using as we construct an utterance. His suggestion is that specific, individuated roots are used by the grammar rather than just a generic √. However, at first blush this proposal has the side effect of suggesting that roots are semantically individuated, a position that has long been rejected (see Arad ; Borer  among others).

Harley (), following Acquaviva (), proposes a variant that does not have this problem. Harley suggests that roots are indeed universal in that they do not carry any semantic meaning, but that there are many of them and they are indexed (i.e. arbitrarily labeled). This indexing links the phonological realization (given the context) to the semantic realization (in the same context). In this way, cat and dog do not express the same root: each realizes its own indexed root.
This proposal allows l-morphemes to participate in competition without the problem of universality. It seems to have caught on: in most of the recent literature, DM analyses use specific roots in their derivations, though often without explicitly mentioning whether this is a conceptual choice or just shorthand for the morphological stem.




 

A third competing proposal, existing in parallel to Harley and Noyer () and Harley (), is that of Embick (). Embick () argues that l-morphemes are fundamentally different from f-morphemes in that l-morphemes do not participate in insertion at all. Rather, l-morphemes are roots in the traditional morphological sense, and their phonology is present throughout the derivation. The primary purpose of this proposal is to allow the phonology of the roots to influence the syntax. This is a fairly radical departure from the DM literature. It is assumed in Embick and Noyer’s () overview of DM, but has not gained tremendous traction in the interim (perhaps because the practical difference between Harley  and Embick  is not significant in most analyses).

.. Allomorphy

As discussed above, because l-morphemes do not participate in competition in the original proposal of DM, allomorphy (i.e. variant forms of the same morpheme) works differently for l-morphemes and f-morphemes. F-morphemes compete with each other for insertion. In this way, the Spanish el and un in () in §.. are f-morpheme allomorphs because they are both competing realizations of the same formal features. In the case of affixes, this is easily seen with the past tenses burnt and turned: the non-assimilating ‑t and the regular ‑ed are competing VIs that both realize past tense, so they are f-morpheme allomorphs.

Roots, on the other hand, rely on readjustment rules to condition their allomorphy. Returning to the example above, mouse is readjusted to mice in the environment of plural. However, mice is not itself an exponent of plural. Rather, the plural is expressed with a null Vocabulary item that defeats ‑s in the environment of mouse. In this way, mice is marked twice for plural, once by the affixed zero morpheme and once by the ablaut in the stem. This effect of being marked twice is a bit easier to see in slept: the past tense morphology is realized by the regular past tense marker (though in this case it is spelled differently), but there is also ablaut conditioned by the past tense. This is called double marking in the original Halle and Marantz () proposal, and it is a key prediction of Distributed Morphology. DM predicts double marking because there are two ways that allomorphy can occur: through competition and through readjustment. This generates four different patterns of allomorphy ().

()

played    regular stem + regular affix
burnt     regular stem + irregular affix
slept     irregular stem + regular affix
dreamt    irregular stem + irregular affix

The majority of the debate over the several types of insertion in lexical-realizational models is about stem allomorphy. Models that propose some kind of insertion beyond terminal nodes (such as Nanosyntax) allow sleep and slept to be separate Vocabulary items. This is especially appealing in cases where there is very little common phonology between the stem and the irregularly inflected form, such as think/thought and seek/sought. Of course, it is even more appealing in cases of stem suppletion (where the stem is replaced entirely), such as go/went and person/people. The models with non-terminal insertion allow bring and brought to be stored separately and to compete with each other for insertion. They also obviate the need for most readjustment rules, which, recall, are quite powerful and out of place in an item-and-arrangement model of morphology. However, these models with insertion at non-terminals lose the double marking prediction, which has always been taken to be a strength of DM (see Embick ).

. Future directions

..................................................................................................................................

In general, Distributed Morphology is thriving right now. DM has shown impressive longevity and popularity over the last twenty years. It is consistently used as a framework in which to couch new analyses of interesting morphological data. New theories of morphology and its interfaces are developed and debated within DM regularly. It benefits from and adapts to developments in Minimalism and Phase Theory quite readily. It now has a chief competitor (among models that assume a syntax-based morphology) in Nanosyntax, which forces practitioners of DM constantly to question some of the fundamental assumptions of the framework.

The future of DM seems to lie in several key areas of research. One is that new accounts of phenomena from across the world’s languages need to continue to be developed in DM, to further test and refine its cross-linguistic applicability and to develop it as a model of UG. Another is that DM needs to continue to make conceptual (or metatheoretical) changes. The primary one, to my mind, is a simplification of the basic model of the grammar and a reduction of its power. As mentioned above, the basic tenets of DM are restrictiveness, elegance, economy, and parsimony, all features of a scientific model that we value and expect. However, these strengths are bought and paid for with processes such as impoverishment and readjustment, which are fundamentally unrestricted and overly powerful. The future of DM, or of its competitors among similar realizational theories (such as Nanosyntax), seems to be the reduction of the morphological processes available to the model, the restriction of its power, and the simplification of the grammar. Another key area of future research is the restriction of secondary processes such as impoverishment (see Arregi and Nevins  for such a restriction).
Finally, one area that remains effectively untouched is the Encyclopedia and its interface with the grammar. Interpretation of a given derivation in DM is crucially reliant on the Encyclopedia, especially with regard to complex non-compositional meaning and context-dependent meanings, but the Encyclopedia remains largely underdeveloped. This underdevelopment stands out as a key area DM research needs to focus on.

A I would like to thank Ash Asudeh, Antonio Fábregas, and an anonymous reviewer for comments and feedback on drafts of this chapter. As usual, all mistakes are my own.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi
Minimalism

Antonio Fábregas

. Morphology and Minimalism

T write an overview of the influence of the research program known as Minimalism (Chomsky ) is complicated by two factors. The first one is that, to the best of our knowledge, nobody has yet stated a complete and coherent theory of morphology that is explicitly Minimalist.1 There are, however, Minimalist traits in the work of several authors: for instance, some central aspects of Distributed Morphology are designed to be compatible with the Minimalist program, and monographs such as Ackema and Neeleman () share with Minimalism the central hypothesis that the interfaces with phonology and semantics play a crucial role in restricting the possible structures generated by Merge. From here it should be clear that this chapter will not present a theory—in the sense that Natural Morphology (Gaeta, Chapter  this volume) or Construction Morphology (Masini and Audring, Chapter  this volume) are theories—but will review the connections of the core ideas of Minimalism with some recent morphological research. In doing so, we will discuss what kinds of questions Minimalism poses to morphology and how they are being explored in the work of different authors. The second complication—a more serious one—is that Minimalism, being a research program (Lakatos ), defines a wide framework for inquiry and suggests some core hypotheses that have received very different instantiations in the work of different authors, but is not a full-fledged theory with a set of tenets and proposals (as, for instance, 1 Wunderlich and Fabri () have a theory labeled ‘Minimalist Morphology’. However, they take Minimalism in a methodological sense: as a principle that reduces the number of units and operations necessary to analyse a set of data, and as a hypothesis about the kind of assumptions that the child makes in the acquisition of a language. 
Minimalism in this sense is a core ingredient of all scientific research, and as such it does not distinguish, per se, any theory, so we will leave this methodological Minimalism aside in the chapter.

Government and Binding was for syntax). For this reason, inside Minimalism there are different currents that focus on distinct aspects, and sometimes have views that are radically opposed to each other. Consider, as an illustration, the current state of the art on one central issue that any morphological inquiry is concerned with: variation. Authors that work within this program2 have opposing views as to where variation should be placed within the architecture of grammar. Some—for example Richards ()—want to restrict variation to the Phonological component (Phonological Form—PF) of grammar, leaving the Computational System (CS) free of parameters; others advocate the need for parameters in the CS (Baker ), meaning that some restrictions and operations might be different across languages; yet others accept only parameters in the lexicon—microparameters—instantiated as minimal differences in the properties of heads that force structures to look different, although the operations are the same across languages, because lexical units have to be combined differently (Kayne ). All these approaches are Minimalist in some sense, but their application to morphology would give very different results. In attempting to explain a run-of-the-mill case of morphological variation—such as the distinction between VN compounds (Italian lava-piatti ‘wash-dishes, dish washer’) and apparently equivalent NV-suffix compounds (English dish wash-er)—the PF approach would lead us to look into stress-assignment, prosodic constituents, or the morphophonology of the pieces involved. The CS approach, in contrast, would advise us to consider independently motivated differences in the way Italian and English restrict head–complement relations, conditions under which verbs assign case to their objects, or selectional restrictions about which categories can function as arguments. 
Finally, the lexical approach would imply looking at how verbs, nouns, and determiners are defined in each one of the languages, the availability of possible phonologically empty heads in Italian but not in English, etc. Depending on the assumptions one makes about what a Minimalist solution to specific problems is, a Minimalist theory of morphology will look very different. Similarly, the place of the lexicon is another point of disagreement. Chomsky’s work since  assumes that the lexicon is prior to syntax, while DM or Nanosyntax splits it into a pre-syntactic component— morphosyntactic features—and a postsyntactic one—morphophonological units. Despite all these complications, it is also true that morphology has played a significant role in the evolution of transformational approaches. Chomsky () devoted some space to the morphology of the auxiliary system of English, and Chomsky () based his critique of Generative Semantics on deverbal nominalizations. The nature of the atoms of syntax and the information they contained has remained a central issue in Minimalism. Let us begin. The Minimalist Program can be characterized through the three properties presented in §.., §.., and §.. (see Chomsky , , a, b; Hauser, Chomsky, and Fitch ).

2 In the narrow sense that Minimalism is considered in this chapter, it is presupposed that authors whose work is Minimalist operate inside the wider framework of Generativism. The work that we will review here is a proper subset of the research done inside transformational approaches, and as such we will leave aside morphological theories such as Construction Grammar or OT-morphology which in fact share some traits with Minimalism.


.. Language as an optimal solution to the externalization of thought

The core hypothesis is that language is a ‘perfect’ system (Freidin and Vergnaud ), in the sense that CS, the component that combines units into structures, is an optimal solution to satisfy the constraints imposed by two external systems: the Conceptual-Intentional (CI), which is responsible for thought and meaning, and the sensorimotor (SM), which is responsible for turning thought into a physical signal that can be perceived. In other words, language is an optimal solution to the problem of how to externalize thought in the form of a physical signal. Given this hypothesis, linguistic research concentrates on identifying the conditions that are imposed by CI and SM (Lasnik and Lohndal : –), and how they derive from more general principles—the so-called third factor, §..

.. Third factor explanations

As a consequence of §.., the Minimalist Program takes as the starting hypothesis that the linguistic principles that CS follows should derive from conditions imposed by the external systems. Case Theory, Theta Theory, and Binding Theory (Chomsky ), for instance, are no longer viewed as components of CS; Minimalism attempts to explain what used to be understood as their effects as the result of constraints imposed by the system of thought or the system of externalization of thought. In this sense, CI and SM behave as filters: structures that, in principle, could be produced by CS become ungrammatical because they do not meet some constraint imposed by CI or SM. In this sense, Minimalist work could be understood as a way to rationalize the principles identified during the Government and Binding period, by trying to derive them from deeper principles of CS or the interfaces of CS with the external systems (Hornstein ; Lasnik and Lohndal ). In this context, the discussion about the three factors that are involved in human language (Chomsky : ) becomes relevant. These are: (i) genetic endowment; (ii) experience; (iii) principles not specific to the faculty of language. Genetic endowment, which is assumed, on biological evidence, to be practically uniform for all members of our species, roughly corresponds to Universal Grammar; experience corresponds to the specific, particular linguistic data that children are exposed to during acquisition, and is the factor responsible for variation, as factor (i) is assumed to be invariable.
Finally, the third factor includes all principles that do not belong to the linguistic capacity proper: the conditions imposed by the externalization component, general principles of cognition, or even physical principles that dictate how particles interact with each other; we do not have a clear idea of what these factors should be, and part of the research concentrates on trying to identify them. The hypothesis that language is an optimal solution to externalize thought leads to explanations based on this third factor. Whenever possible, Minimalist research reduces


principles that had been proposed as part of CS or its interfaces to the result of more general principles that are not specific to language (see Gallistel and King  for the psychological ground of this hypothesis). Let us present an illustration: during the Government and Binding period, a family of principles that had to do with restricting formal relations between constituents to very local environments had been proposed: the Head Movement Constraint (Travis ), Relativized Minimality (Rizzi ), Shortest Move (Chomsky ), the Minimal Link Condition (Chomsky ), etc. Minimalism proposes that there is no need to assume that Universal Grammar (UG) comes with any of these principles; rather, their effects derive from general principles of optimal computation not specific to language. One such principle is, roughly, “keep the space where a relation between two elements has to be computed as small as possible”. From this principle, it automatically follows that local relations will be preferred to non-local ones, as the latter implies that more structure has to be considered. This principle applies not only to language but also to other cognitive domains, such as arithmetic calculations and general problem solving: if you want to toast your bread and there is a functioning toaster in your kitchen, you will not go to your neighbour’s to borrow her toaster, unless you have ulterior motives not satisfied by your own toaster. The starting hypothesis is, thus, that a linguistic phenomenon should be explained by third factor principles, and the opposite view, that it is due to some principle of UG proper, can only be adopted in the face of overwhelming evidence.
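To see how a locality condition can fall out of minimal search rather than a dedicated UG principle, consider a deliberately naive sketch in Python. Everything here (the function name, the data structures, the feature labels) is my own expository invention, not a formalism from the literature: a probe that simply stops at the first match will never relate to a farther element across a nearer one, which is the core relativized-minimality effect.

```python
# Naive minimal search: scan outward from the probe and stop at the
# first element bearing the relevant feature. Locality is not stated
# anywhere in the code; it is a by-product of the shortest possible search.

def closest_goal(feature, path):
    """Return the name of the first (= closest) element carrying the feature."""
    for element in path:
        if feature in element["features"]:
            return element["name"]
    return None

path = [{"name": "X", "features": {"wh"}},   # closer to the probe
        {"name": "Y", "features": {"wh"}}]   # farther away, never reached

assert closest_goal("wh", path) == "X"       # the intervention effect for free
```

Nothing in the routine prohibits relating the probe to Y; Y is simply never considered once X has matched, which is the sense in which the constraint is "not part of CS".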

.. Impoverishing UG

A direct consequence of §.. is that UG, the genetic linguistic endowment that allows our species to have language, is viewed in Minimalism as almost empty: in contrast to the wide variety of syntactic principles identified during Government and Binding, the hypothesis in Minimalism is that UG must be extremely simple, perhaps reduced to two operations: Merge, which takes two sets of elements and builds a set by combining them—with movement being just a subcase of Merge—and Transfer, which sends a built structure from CS to the interfaces (see Gallego  for different views on the role of Transfer and Gallego  for a specific proposal). For some authors this is all there is to UG (Boeckx ), and all other principles and restrictions on linguistic structures derive from third factor principles. In contrast, others (such as Chomsky ; Pesetsky and Torrego ; Adger and Svenonius ) argue that UG must contain at least another operation: Agree, which involves matching one property of a set with a corresponding property of another set. This operation can have as a result morphological agreement, and as such its status will concern us later (§.).
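As a rough illustration of how little machinery this leaves in UG, the two (or three) operations can be mimicked in a few lines of Python. This is an expository toy of my own, not a proposal from the literature, and the feature names are invented.

```python
# Toy rendering of a maximally impoverished UG: Merge builds unordered
# sets, and (optionally) Agree matches a feature between two elements.

def merge(a, b):
    """Merge: form the set {a, b}; linear order is left to externalization."""
    return frozenset([a, b])

def agree(probe_unvalued, goal_valued, feature):
    """Agree: succeed iff the probe seeks a feature the goal can value."""
    return feature in probe_unvalued and feature in goal_valued

vP = merge("√econom-", "v")
assert vP == merge("v", "√econom-")            # Merge output is order-free
assert agree({"phi"}, {"phi", "case"}, "phi")  # a probe-goal match
```

The point of the sketch is only that both operations are trivially simple; everything interesting (why a given output is ungrammatical) would have to come from the interfaces or from third factor conditions.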

.. The lexicon in orthodox Minimalism

Before we get into the theories of morphology that can be argued to be Minimalist, some clarifications have to be made with respect to the nature of the lexicon in the Minimalist syntactic works. In these works, the lexicon is a repository of atoms that come endowed with a set of properties, formalized as features. These are the units that CS combines together into


structures, but CS does not have direct access to the lexicon: the elements of the lexicon that will be combined are first copied into a Numeration (Chomsky : –), and then CS maps these units into a structure that is read by the two interfaces. At any point in the derivation that proceeds towards well-formed outputs, the only units and properties that can be represented are those that are contained in the numeration, and their features; no new objects are added during the computation, so the role of CS is to rearrange the lexical atoms. This is known as the Inclusiveness Condition (Chomsky : ). Notice, however, that this proposal is quite neutral with respect to the role of morphology in the system. There are two aspects that are, in principle, not determined in the previous view: (a) what counts as an atom? and (b) what kind of features are placed in this lexicon? One answer to these questions takes us to a strong Lexicalist model (Lieber, Chapter  this volume; Montermini, Chapter  this volume, cf. Halle ). Interestingly, this is Chomsky’s view, which has been consistent across the years since Chomsky (). Chomsky’s proposal was essentially that word formation was not transformational and that morphologically complex structures are stored in the lexicon. His evidence for that is well known, and includes both the idiosyncratic aspects of word formation (e.g. that depth exists, but *blackth does not, something that Chomsky proposes should be listed lexically) and the cases where the alleged internal components of a complex word cannot be accessed by syntactic rules, described at least since Postal () and which later gave rise to the Lexical Integrity Hypothesis (Lapointe ). Thus, for Chomsky, lexical atoms are whole words, and those whole words are specified at least with respect to their grammatical category. One clear illustration of this lexicalist view comes in Chomsky (: , irrelevant details omitted):

() [IP I [VP John [V′ [V met] Bill]]]
In this structure, the form met, which corresponds to a particular verb in the past form, is directly projected as V; at some point in the derivation it will establish a relation with I, where its past tense feature will be licensed by syntax (in English, via LF-movement). Note: it is not the case that met is the spell-out of a complex syntactic structure that includes as separate atoms a verb and some tense information. The form is introduced, fully inflected, in the syntax, and the syntactic derivation has to license, one by one, each one of the properties with which the form is associated in the lexicon. This is not just Chomsky’s view in Minimalism. Lasnik () also proposes that the atoms in Minimalism are fully inflected. Another elaboration of Chomsky’s proposal is presented in Freidin (): the verb comes fully inflected from the lexicon, and once in the syntax its past tense feature projects as I (T in Freidin’s notation; : –). There has been, however, much less work done on how much idiosyncratic information lexical atoms would carry in this Lexicalist-Minimalist system; it has to be the case, however, that these features only play a role to the extent that they satisfy an interface condition (cf. §.).


There is a second answer to the two questions that we formulated at the beginning of this section, and it takes us to Constructionist theories formulated inside Minimalism, where the lexical atoms are (sets of) abstract features, sometimes corresponding to morphemes and sometimes corresponding to submorphemic units. Word formation would be just one of the possible outputs of CS. The strategy of these systems is to treat idiosyncrasies as pieces of information that only emerge at the interfaces, associated to a postsyntactic lexicon. Taking into account that Minimalism is in principle neutral between a lexicalist and a constructionist account, we move now to the discussion of the main traits of morphological theories stated within a broad Minimalist framework. Section . is dedicated to showing how lexical restrictions and morphological operations are reduced in favour of explanations based on the interfaces. Section . presents the different ways in which the morphophonological spell-out restricts the possible outputs produced by CS. Section . considers the role of inflection in Minimalist systems, and §. discusses the nature of linguistic features.

. Reducing CS: lexical restrictions and morphological principles

Let us now discuss how recent morphological research has attempted to reduce the number of independent principles and restrictions that constrain the combination of units at CS. This has been reflected in two ways: (a) deriving the properties of complex words from Merge and the interaction with the external systems, rather than postulating them as an effect of lexical restrictions, and (b) deriving the properties from independent factors, or simply removing some conditions that had been interpreted as morphological principles.

.. Deriving lexical restrictions from Merge

At a minimum, a morphological theory has to explain how a particular kind of constituent that we call ‘word’ is related to other constituents that share part of their morphophonology. Independently of whether a morpheme-based (e.g. Sapir ) or a word-based (e.g. Anderson ) theory is assumed, some words are derived from others by the addition of features and properties. The distinction between the two theories is, ultimately, whether those extra features are bundled as a unit at some level of analysis or not, but there must be a way to add features to an existing constituent. This presupposes some form of Merge, understood as an operation that builds a more complex constituent through the combination of smaller elements. Thus, Merge is necessary, and a computational system that performs Merge is necessary as well, independently of whether that CS is a distinct component or is essentially the same one used by syntax (see Epstein and Seely  for a syntactic instantiation of these ideas). The question is, therefore, whether there are other principles or conditions internal to CS that also play a role in restricting word formation. The traditional answer has been


affirmative, and, as we saw in §.., it is retained in some minimalist approaches: lexical factors play a role, in the form of underived idiosyncrasies that determine the course of the syntactic derivation. These lexical factors are powerful enough to restrict the possible and impossible word formation processes, in well-known ways: for instance, a lexical item marked as a noun (N) could not form a more complex unit with a lexical item like ‑ness, that needed to combine with an adjective (A)—cf. window > *windowness. These lexical factors did not derive from any external system. They fed CS, and as such applied before the structure could interact with the so-called Interfaces, and obviously had to be independent of Merge, as their role was to restrict Merge. However, in accordance with Minimalist ideas, other theories belonging to ‘minimalism’ in a wider sense have attempted to derive as many lexical restrictions as possible from Merge.

... Deriving grammatical categories

Consider first grammatical category, which plays a crucial role in delimiting the input and the output of word formation rules. One of the tenets of Distributed Morphology (Halle and Marantz ) is that lexemes are not listed in the lexicon with information about their grammatical category (Marantz ; Arad ). This system assumes that Merge combines two types of units: roots (heads without formal features) and functional heads (which are associated to formal features; cf. Putnam and Fábregas ). A root like econom- is not listed in the lexicon as a noun, an adjective, or a verb, and it only becomes one of them when, by Merge, CS attaches it to some head that contains a set of features which, collectively, are interpreted as one grammatical category, giving rise to structural configurations such as those in (), where the labels n, a, v represent sets of features associated to each of the three major lexical categories.

()  a. [nP √econom- [n -y]]
    b. [aP √econom- [a -ic]]
    c. [vP √econom- [v -ize]]
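The configurations in () lend themselves to a small computational sketch. The following Python fragment is purely illustrative (the dictionaries, the function name, and the feature labels are my own inventions, not part of DM's formalism): the root carries no category of its own, and the output's category is whatever the categorizing head contributes.

```python
# Sketch of categorization by Merge: a category-free root becomes
# n/a/v only by combining with a categorizing functional head.

def categorize(root, categorizer):
    """The output's category comes entirely from the functional head."""
    return {"cat": categorizer["cat"],
            "parts": (root["exp"], categorizer["exp"])}

root = {"exp": "econom-"}          # no category listed in the lexicon
n = {"cat": "n", "exp": "-y"}      # yields 'economy'
a = {"cat": "a", "exp": "-ic"}     # yields 'economic'
v = {"cat": "v", "exp": "-ize"}    # yields 'economize'

assert categorize(root, n)["cat"] == "n"
assert categorize(root, a)["cat"] == "a"
assert categorize(root, v)["cat"] == "v"
```

The design choice the sketch makes visible is that no lookup of the root's category is ever performed, because there is nothing to look up: categorial information lives exclusively on the functional heads.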

A very similar view is adopted by Borer (a, b, ), where roots become categorized through Merge with a head—a functor—that contains the appropriate features.3 Similarly, Hale and Keyser’s (, ) theory of lexical syntax also proposes that categorization is a result of Merge, although without positing root nodes: we interpret a head X as belonging to a category C depending on the structural configuration it is involved in (see also Mateu ). Something is interpreted as a noun when it does not take a complement or a specifier (a); as a verb, if it only takes a complement (b); as a preposition—a relational head—if it takes both (c); and as an adjective, if it is related to a specifier introduced by another category which selects it (d).

3 There is a debate with respect to what features of roots are taken into account by CS. I refer the reader to Borer (: –) for a critical overview.

()  a. X (noun: no complement, no specifier)
    b. [X X Y] (verb: head plus complement)
    c. [X Z [X X Y]] (preposition: specifier and complement)
    d. [h Y [h h X]] (adjective: related to a specifier introduced by a selecting head h)
... Treating restrictions as interface conditions

Of course, at this point, one question arises: how do we restrict, as previous approaches did, the possible and impossible combinations of items? If category comes as a result of Merge, Merge cannot itself be restricted by category information. Minimalism, with its focus on interface conditions, can provide an answer: the restrictions could be placed in the CI system, as a result of world knowledge. Merge forces us to interpret window in windowness as some quality of objects, because of the element with which it has been combined, which turns qualities into kinds. We do not accept the word because our world knowledge prevents us from treating the concept expressed by window in that way, but, crucially, the form is in principle possible (as far as the Computational System is concerned), and situations where the word would be accepted are conceivable, provided that we entertain other assumptions about the world that allow us to treat the concept denoted by √ as a quality of objects. In fact, a search in Google informs us that the word windowness is already documented in such contexts, for instance as the abstract noun that refers to the role of hospital windows during the recovery of patients. This view, where the interpretation of lexical items is determined by syntactic structure, is known as the Exo-Skeletal approach. Exo-Skeletal approaches, opposed to Endo-Skeletal approaches, where lexical items determine how a structure is built, are Minimalist in the sense that they attempt to explain the properties of objects through a combination of Merge and restrictions imposed by the external systems, in this case, CI.

... Deriving Aktionsart and argument structure

Aspectual information has also been derived from Merge in recent work. In contrast to Pustejovsky () and similar models, where the lexicon contains a rich specification of aspect, MacDonald () or Ramchand (), among others, have treated the subcomponents of Aktionsart (such as process or state) as independent lexical atoms combined by CS to produce full verbal configurations. With respect to how argument structure is derived from Merge, we refer the reader to Baker () for the hypothesis that theta roles are assigned in specific configurations; Kratzer () for the proposal that external arguments are not introduced by lexical verbs; Harley () and Folli and Harley (, ) for the proposal that different functional heads introduce different argument structure configurations; and Pylkkänen () and Cuervo () for the treatment of indirect objects.


.. Removing principles from CS

A second aspect which illustrates the Minimalist tendency to simplify the Computational System is the reduction of the conditions that had been interpreted as principles involved in building complex words. Over time, there have been several proposals about the rules that Merge should follow when building words, and conditions on their output. Minimalism tries to reduce these principles to the result of the interaction of independently required conditions. In this sense the recent work of Lasnik () and Freidin () about how to derive the effects of affix-hopping (Chomsky : –) from conditions on Merge—selection—and Agree—checking with other heads—falls squarely within this enterprise. Julien’s (, ) work, where the set of constraints on morpheme order and apparently idiosyncratic restrictions on word formation is derived from (syntactic) conditions on Merge and the functional hierarchy, is also relevant in this sense. Let us look more closely at two examples of morphological principles that had been proposed before Minimalism.

(a) The feature percolation mechanism (Lieber ), which states that the features of an affix (generally, the head of the word) are copied to a dominating node, in order to explain why the features of a whole word are determined by the features of its head.

(b) The No Phrase Constraint (Botha ), which prohibits word formation rules from targeting a phrase, rather than a word or a morpheme, in an attempt to explain why forms like *[angry child]-less—vs. childless—are impossible.

... Feature percolation is derived from Merge

Feature percolation was a way to explain why the features of a complex structure coincided with those of the head or, in cases of apparent exocentricity, why the presence of another element would block percolation (Williams ). This mechanism was necessary in a representational system. In that kind of system there are (more or less rigid) structures where items are introduced at different levels of complexity, as in X-bar Theory (Chomsky , ); the structures themselves, independently of the items that are introduced in them, have some properties, such as headedness. In contrast, in a derivational system, where Merge derives the structure step by step depending on the nature of the items combined and constraints apply to the rules that produce the structure, formal relations are defined in each single Merge operation, incrementally. In a derivational system there is no need to propose a percolation mechanism: the fact that the features of a complex structure coincide with those of one of its members is a property that derives from Merge, simply because as a result of combining two constituents, one of them will project its label to the whole (Chomsky ; see Collins  for an elaboration of how Merge alone can define c-command, domination, containment, and other structural relations). Example () illustrates the situation, step by step, using a set format and a tree representation.

()  a. Merge {a}, {b}
    b. {{a}, {b}}
    c. Labeling: {a, {{a}, {b}}}, i.e. the tree [a a b], a projection of a
Thus, in a derivational system we explain the data that percolation attempted to capture without any further assumptions: by definition, a complex set is labeled after its head and counts as a projection of the head.4
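The derivational alternative can be made concrete with a toy Python sketch. Everything here (the dictionary representation, the function name, the feature labels) is my own expository invention, not a formalism from the literature: the complex object simply carries its head's features because labeling hands them over at the point of Merge, so no separate percolation mechanism is ever invoked.

```python
# Toy sketch: Merge plus labeling reproduces what feature percolation
# stipulated. The complex constituent is a projection of its head, so
# its features just ARE the head's features; nothing is copied upward.

def merge_and_label(head, nonhead):
    """Combine two objects; the result bears the head's label and features."""
    return {"label": head["label"],
            "features": head["features"],
            "daughters": (head, nonhead)}

ness = {"label": "n", "features": {"cat": "N"}, "daughters": ()}
quick = {"label": "a", "features": {"cat": "A"}, "daughters": ()}

quickness = merge_and_label(ness, quick)
assert quickness["features"]["cat"] == "N"  # the whole word is nominal
```

Note that no copying step appears anywhere: the "inheritance" of features is just the definition of what it means to be a projection of the head.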

... Deriving (what is left of) the No Phrase Constraint

If we move now to the No Phrase Constraint, the principle has been largely rejected in most contemporary research. In all fairness, this is not only due to the influence of Minimalism, but also to the existence of strong counterexamples that questioned the descriptive adequacy of the constraint—cf. the famous examples in Lieber (), [Charles and Di] syndrome, [pipe and slipper] husband, [floor of a birdcage] taste, where compounds take phrases as one of their members. However, other data suggest that the principle cannot just be rejected without further explanation: speakers accept truck driver, but not *[big truck] driver. How can these contrasts be explained? We can find an explanation in the proposal by Ackema and Neeleman (), which, as we will see throughout this chapter, has several Minimalist-compatible properties. One of them is that morphological structures are derived solely through Merge and a set of constraints imposed by the semantic and phonological interfaces, that is, without additional principles like the No Phrase Constraint that restrict Merge at CS. Ackema and Neeleman argue that Merge can produce two types of structure: morphological and syntactic (: –).5 These two outputs compete with each other. All things being equal, a syntactic output is preferred over a morphological one; morphological outputs win the competition in only two cases: if the syntactic output cannot express a particular meaning, or if it would not satisfy a formal requisite of another head, defined as an affix and therefore requiring combination with a morphological structure. Let us illustrate this.

4 As the reader will have noted, Merge alone is not enough to explain why features are inherited from the head to the complex constituent: an additional step, labeling, is required. In connection with this, Chomsky () has suggested that labeling is necessary for the external systems—in order to know how to correctly interpret constituents—but not for CS. In consequence, labeling takes place at CI/SM, not before. The result is that labeling, and with it the effects that percolation tried to capture, would ultimately be an instantiation of optimal computation, a third factor condition, and thus not part of CS. See Hornstein () for a contrasting view, and Epstein, Kitahara, and Seely () for an alternative to Merge without labeling.

5 Cf. Li () for a proposal along the same lines, where morphological structures are required when syntactic projection would produce an illegitimate structure.


Imagine two elements are merged. In principle, their combination would result in a syntactic structure (a). However, imagine that (a) cannot be interpreted at the interfaces with a meaning X; in such a case, to allow expression of that meaning, the result of Merge would give a morphological structure (b). Imagine, as the second case, that the two elements further combine with a third head which is idiosyncratically defined as an affix (c), and as such must combine with a morphological structure. In that case, the result of Merge will also have to be morphological. ()

a. [aP a b]        (syntactic structure)
b. [a a b]         (morphological structure)
c. [c [b a b] c[AFFIX]]   (structure selected by an affixal head)

Note that the assumption in this model is that CS contains two types of heads: affixes and non-affixes, in a similar way as Distributed Morphology assumes a basic difference between roots and functional heads. This presupposes that CS differentiates types of heads given the features that they contain. See §. for more on this topic. Given these ingredients (Merge, two types of heads, and interface constraints), Ackema and Neeleman () derive a wide array of data from the idea that morphological structures emerge only if syntax fails to meet external prerequisites.

Consider first English NV compounds, which are heavily restricted. Forms such as (a) are possible, but not forms like (b), where the N would correspond to the object of the verb.

() a. chain-smoke, sky-dive, head-hunt, baby-sit
   b. *window-clean, *wall-paint, *truck-drive, *cake-eat

Why? The claim is the following: morphologically merging N and V in (a) produces a meaning different from the one that the pair would receive as a syntactic structure: chain-smoke is not to smoke chain(s). In contrast, merging N and V in (b) gives the same interpretation as the same set in syntax: window-clean would mean the same as to clean window(s) (Ackema and Neeleman : ). The idea that emerges from this analysis is that morphology is a last resort of sorts, a device that treats the result of Merge in a special way whenever a normal syntactic result would not give the required output for the CI interface, given the elements involved in the derivation.





Consider next synthetic compounds, the only configuration in which a set [N V] is morphologically merged with N interpreted as the object. ()

truck driver

Why is this form accepted? Here a morphological structure is required because the set [truck drive] eventually merges with a head, ‑er, which has an idiosyncratic property (Ackema and Neeleman : –): it must combine with a morphological structure, so N and V can only occur in the configuration if they are combined morphologically ().

() [N [V [N truck] [V drive]] -er[Affix]]

Why, then, is [[big truck] driver] weird? Even though Ackema and Neeleman do not address this case explicitly in their monograph, it follows from CI conditions (Peter Ackema, p.c.): the implausibility of the example derives from the difficulty of treating big truck as a conceptually relevant kind of truck that would require a special kind of driver. The prediction is that other combinations, where the compound non-head is a phrase, would be acceptable provided the phrase defines a conceptually special subclass of the entity: in fact, [[Russian vodka] glass] is found by speakers to be more acceptable. In practice, this proposal implies deconstructing the No Phrase Constraint as a principle and explaining its effects and its exceptions with a minimum of machinery: Merge and a set of conditions that choose morphological structures under severely restricted circumstances.
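The competition just described can be modelled as a small decision procedure. The following Python sketch is my own illustration of the logic, not Ackema and Neeleman's formalism; the function name and its boolean arguments are hypothetical simplifications.

```python
# Toy model (not the authors' formalism) of the competition between
# syntactic and morphological outputs of Merge: syntax wins by default,
# morphology only under the two conditions named in the text.

def merge_output(head, nonhead, syntactic_meaning_ok=True, selected_by_affix=False):
    """Return which structure wins the competition for a merged pair.

    Morphology wins only if (a) the syntactic output cannot express
    the intended meaning, or (b) an affix selecting the pair requires
    a morphological object.
    """
    if selected_by_affix:
        return "morphological"      # e.g. [truck drive] selected by -er
    if not syntactic_meaning_ok:
        return "morphological"      # e.g. chain-smoke != 'smoke chains'
    return "syntactic"              # e.g. *window-clean: syntax suffices

# chain-smoke: the compound meaning differs from 'smoke chains'
assert merge_output("smoke", "chain", syntactic_meaning_ok=False) == "morphological"
# *window-clean: same meaning as 'clean windows', so syntax blocks it
assert merge_output("clean", "window", syntactic_meaning_ok=True) == "syntactic"
# truck driver: -er requires a morphological structure
assert merge_output("drive", "truck", selected_by_affix=True) == "morphological"
```

The last-resort character of morphology falls out of the ordering of the checks: the default return value is the syntactic structure.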

. Spell-out: restrictions imposed on the output of CS

Once the conditions that CS takes into account are impoverished, some other mechanism has to be found to explain why some results of Merge are not deemed acceptable in a language. Here is where the interfaces with the external systems of language play a role: as noted above, they act as filters that ban some results that satisfy the requirements of the Computational System. There are several ways of filtering the output of Merge that have been explored in recent morphological work. The common assumption that these models make, as opposed to ‘orthodox’ Minimalism, is that the morphophonological representation is independent of the (morpho)syntactic structure. The latter is the result of Merge at CS: an abstract hierarchical representation that contains abstract formal features. The former is a set of exponents containing instructions to phonology, and perhaps other unpredictable properties, which spell out the abstract features organized in a syntactic representation. In this view, morphophonology and (morpho)syntax




  ´ 

are associated through a mapping mechanism: the syntactic configuration and the features it contains set a particular context that conditions which exponents will be used in the process of externalizing the structure. This mapping is more or less deterministic, with some theories allowing for a wide range of mismatches between (morpho)syntax and morphophonology and others severely restricting those mismatches, for example Nanosyntax (cf. especially Caha ). Crucially, taking the morphophonological component as independent from morphosyntax makes it possible for morphophonology to impose its own set of conditions on the output of CS. This is the road that a considerable number of scholars have followed in recent times to explain why, on the surface, not every morpheme can combine with every other morpheme. Basically, this can be instantiated in two ways: idiosyncratic prerequisites of individual morphological exponents, or prerequisites of the operations that map the structure into a morphophonological representation.6

.. Idiosyncratic restrictions of exponents

One way of blocking certain morpheme combinations without using lexical restrictions is to specify the context of insertion of the lexical exponent at PF. According to Harley and Noyer (), the exponents that lexicalize roots are associated with statements which specify the context where they can be inserted. This is the licensing environment of the exponent, and it dictates the conditions that the result of Merge has to meet for that exponent to be able to spell it out; () contains some adapted examples:

a. arrive  ↔ √ / Under v [–cause] ______
b. destroy ↔ √ / Under v [+cause] ______
c. open    ↔ √ / Under v [+/–cause] ______

This familiar formalism states that an exponent like arrive spells out a root if the root is governed by a verbal projection that is specified as non-causative. Consequently, the reason why we cannot externalize *John arrived Mary is not that CS cannot build a causative syntactic structure, but that, when CS does, the root cannot be spelled out as arrive in that context. Similarly, destroy can only spell out the root if the verb is causative, thus forbidding *The city destroyed. As for open, it can spell out the root under either of the two verbal heads, allowing both John opened the door and The door opened. This same proposal that exponents carry licensing environments has been applied to many other cases—cf., for instance, Alexiadou and Müller’s () and Acquaviva’s () proposals about roots and class markers.

Let us take a step back and consider the core of this proposal. The idea is that CS pays attention only to properties such as whether a constituent is labeled N or V, plural or

6 There is a notion that we will not develop in this chapter and which restricts the solutions discussed here: Phases. Chomsky () introduces the notion of a phase domain, a formally complete chunk of structure that is autonomous in phonology and semantics. In some approaches, the phase has been used as the unit that restricts the domain where lexical idiosyncrasies can apply; however, phases per se are neutral with respect to the kinds of idiosyncratic rules and elements that can play a role once the domain is transferred. See Marantz (), Arad (), and Marvin ().





singular, etc.; it has no access to information such as whether it will be spelled out as an affix or not, or which specific exponent will be able to spell out a root. This information (idiosyncratic by nature) is stated outside of CS. As the reader has already noticed, this information is of the kind that previous morphological models stored as lexical restrictions: that verb X does not allow agents, or that piece Y is an affix. The result is, then, that in the approaches we are reviewing here, lexical restrictions are taken out of CS and stored in the external systems, especially the interface with phonology.
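The licensing environments in the adapted examples above lend themselves to a simple illustration. The sketch below is my own toy encoding, not Harley and Noyer's implementation; the dictionary and the feature labels are assumptions made for illustration.

```python
# Toy sketch of licensing environments: each exponent lists the values
# of v[cause] under which it may spell out the root. Insertion succeeds
# only if the v head built by CS matches the exponent's listed contexts.

LICENSING = {
    "arrive":  {"-cause"},            # only non-causative v
    "destroy": {"+cause"},            # only causative v
    "open":    {"-cause", "+cause"},  # either flavour of v
}

def can_insert(exponent, v_feature):
    """True iff the exponent's licensing environment matches the v head."""
    return v_feature in LICENSING[exponent]

assert can_insert("arrive", "-cause")        # 'John arrived'
assert not can_insert("arrive", "+cause")    # *'John arrived Mary'
assert not can_insert("destroy", "-cause")   # *'The city destroyed'
assert can_insert("open", "+cause") and can_insert("open", "-cause")
```

Note that nothing here blocks CS from building the causative structure; the filter applies only at the point of insertion, exactly as the text describes.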

.. Rules of spell-out (1): linearization

General mapping principles between CS and the morphophonological component might also be at play, imposing equally powerful restrictions on legitimate outputs of Merge. In this context, the postsyntactic rules of Distributed Morphology (Siddiqi, Chapter  this volume) should be understood, from a Minimalist perspective, as rules that reinterpret the structure built in CS so that it can be spelled out. Here we will concentrate on other approaches where spell-out somehow constrains the output of CS. Ackema and Neeleman (: chs  and ) provide many illustrations of how a small set of phonological principles can delimit word formation. Consider one. They note the following pattern: in agentive compounds formed with V and another category, an agentive suffix appears only if V is to the right, never when it is to the left. This is true if we compare Spanish and English (), but also if we compare different English words ().

a. abre-cartas
   open-letters
   ‘letter-opener’
b. letter open-er

()

pick-pocket, scare-crow

Their proposal is that in both cases, with and without an overt affix, CS acts the same way and combines the same categories. The difference only has to do with the restrictions on how exponents are optimally mapped to the structure. Given the structure of these compounds—remember ()—when the verb is not to the right, two conflicting requirements of morphophonology clash: those in () and ().

() Linear correspondence
If X is structurally external to Y, X is phonologically realized as /x/, and Y is phonologically realized as /y/, then /x/ is linearly external to /y/.

() Input correspondence
If an affix takes a head Y or a projection of Y as its input [to Merge], the affix is phonologically realized as /affix/, and Y is phonologically realized as /y/, then /affix/ takes /y/ as its input.




  ´ 

Example () states that if an affix takes a complex constituent as its complement, it will have to be external to both. Example () states that if an affix takes a category Y as its complement, it will be adjacent to the exponent that materializes that category. Now consider how, when the verb is to the left, materializing ‑er violates one of the two principles. Example (a), with the affix attached to the verb, violates (), because it is not external to both V and N, which form its complement; (b), on the other hand, violates (), because the affix is not spelled out next to the exponent materializing the category it combines with (V). Consequently, morphophonology chooses not to spell out the affix. ()

a. *[pick[-er] pocket] b. *[pick pocket]-er

On the other hand, if the verb is to the right, both principles are fulfilled by the order in (): the affix is external to both V and N and, at the same time, immediately adjacent to V. ()

[[letter open]-er]
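The interaction of the two correspondence principles can be mimicked with a small constraint checker. This is an informal toy model of my own; the representation of linear orders as lists and the adjacency test are simplifications, not the authors' formalism.

```python
# Toy check of the two mapping constraints on agentive compounds:
# the affix must be linearly external to the whole compound (Linear
# Correspondence) and adjacent to the verb's exponent (Input
# Correspondence). A candidate order survives only with 0 violations.

def violations(order, verb="V", affix="-er"):
    """Count constraint violations for a linear order of exponents."""
    v = 0
    # Linear Correspondence: affix at an edge of the whole compound
    if order.index(affix) not in (0, len(order) - 1):
        v += 1
    # Input Correspondence: affix adjacent to the verb's exponent
    if abs(order.index(affix) - order.index(verb)) != 1:
        v += 1
    return v

# verb on the right: [[letter open]-er] satisfies both constraints
assert violations(["N", "V", "-er"]) == 0
# verb on the left: every placement of -er violates something,
# so morphophonology leaves the affix unpronounced (pick-pocket)
assert violations(["V", "-er", "N"]) > 0   # *[pick-er pocket]
assert violations(["V", "N", "-er"]) > 0   # *[pick pocket]-er
```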

.. Rules of spell-out (2): Phrasal spell-out and its consequences

Let us now briefly consider Nanosyntax, a theory that has some Minimalist properties but at the same time departs from some core Minimalist tenets. In the examples we have seen up to now, spell-out decided what items could appear in a configuration, but it did not have the power to make a syntactic structure unacceptable. Nanosyntax is a theory where rules of spell-out can make the output of Merge ungrammatical. In this theory (Fábregas ; Lundquist ; Caha ; Starke ; Svenonius et al. ; Pantcheva ; Dékány ) the lexicon does not contain fully inflected words (as in Freidin ) or matrices of features (as in Marantz ), but individual, unbundled features.

E D C B A The role of the computational system is precisely to build hierarchical structures with these individual features contained in the lexicon. The connection with Minimalist accounts





comes from the central role that spell-out (thus, the PF branch) is given in determining whether a structure like () will be possible in a language or not. In Nanosyntax, spell-out is guided fundamentally by two principles: (a) the Exhaustive Lexicalisation Principle, which states that every syntactic feature must be spelled out by an exponent, and (b) Phrasal Spell-out, which allows exponents to spell out non-terminal nodes (that is, whole phrases).

Just like other current theories, Nanosyntax assumes that at the interface with the external systems, speakers access a lexicon that contains exponents carrying phonological and (conceptual) semantic information. In contrast with the theories we have seen up to now, the crucial difference is that the exponents are associated with syntactic trees. This is an immediate effect of the proposal that the atoms of syntax are individual features (); when a syntactic output is produced by CS (), the lexicon is checked to see if an exponent corresponding to that structure, ignoring traces, is listed in that particular language. If so, that exponent is selected and spells out the whole constituent (). The crucial consequence, then, is that syntactic primitives are submorphemic units, because morphemes (always or almost always) will correspond to XPs.7 ()

a. /pah/ ↔ [XP X]
b. /klu:p/ ↔ [ZP YP [Z Z M]]

() Output of CS: [ZP YP [Z Z M]]

() [ZP YP [Z Z M]] spelled out as /klu:p/ (Phrasal Spell-out of the whole ZP)

Notice that this approach makes it unnecessary to stipulate the context of insertion of an exponent (à la Harley and Noyer , §..), because the syntactic configuration is in fact the minimal information that is used to introduce an exponent. Now, imagine that a structure like () is produced by CS in a particular language, or language variety, whose lexicon lacks a lexical entry like (b): the language will not be able to lexicalize that structure. This would violate the Exhaustive Lexicalisation Principle, because some of the syntactic features would not be identified by lexical insertion, so the tree in () would be ungrammatical. The lack of a lexical form has filtered out at PF a syntactic construction that CS considered well formed.

This mechanism has been used in Starke () to suggest that (at least some type of) movement can be forced by lexical insertion. Imagine that the language lacks (b) but does have an entry like (), without the specifier YP.

7 Note that the exponents introduced here are not meant to correspond to any specific lexical item of an attested language, and are used simply as illustrations of the type of information that will be paired to the syntactic structures present.




  ´ 

()

/takatan/ Z

↔ ZP M

An output like () would not be spelled out in the language, with the result that () would be ungrammatical. However, as traces are ignored in spell-out, if in CS the specifier YP of () moves to a higher projection, evacuating the constituent ZP, the resulting configuration will be lexicalizable by /takatan/, because the trace of YP is ignored. The result is that in a language with that lexicon, YP will be forced to move out of ZP, or else the output will not be spelled out. In other words: spell-out might force (or prevent) movement of a constituent. The consequence is that movement would not have to be restricted in CS, a welcome result given that in Minimalism movement is a type of Merge—internal Merge—and Merge is expected to act as freely as possible (Gärtner ; Chomsky b, ). Movement would, therefore, be freely available in every derivation, but spell-out would restrict it simply because in some cases the language will lack an exponent able to lexicalize the resulting configuration. Again we would see an instance where (a) possible outputs of CS are filtered by PF and (b) variation would be explained at the interfaces, rather than as an effect of differences in CS. However, Nanosyntax does not follow all Minimalist tenets. For one, the status of Phases (cf. footnote ) is unclear in this system, as sometimes Phrasal Spell-out seemingly applies over nodes that in a standard Minimalist account would be separated by a phase-defining head (such as v). Secondly, Nanosyntax does not clearly pursue the hypothesis that CS is essentially empty. In some accounts (Caha ; Starke ) CS imposes strict conditions on the order in which individual features can be bundled in the syntax: the sequence of features would be restricted by a richly specified Functional Hierarchy that restricts which head can take which head as a complement (in a way reminiscent of how strict selectional restrictions behave in Cartographic approaches; cf. Cinque ). 
Other nanosyntactic accounts (Ramchand and Svenonius ), however, posit a more flexible Functional Hierarchy, where the order of its elements is only specified to the extent that they correspond to cognitive categories which show clear subordination relations (e.g. propositions that contain situations that contain events imposing a strict ordering of the areas C, T, and V in the clause structure).
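Phrasal Spell-out with trace-ignoring matching can be sketched as tree lookup. The code below is a deliberately simplified model of my own: the tuple encoding of trees is an assumption, and the toy exponent /takatan/ merely echoes the chapter's illustrative forms.

```python
# Toy model of Nanosyntax-style Phrasal Spell-out: an exponent is
# inserted only if the lexicon stores exactly the tree CS built, with
# traces ignored during matching. If no entry matches, the structure
# cannot be exhaustively lexicalized and crashes at PF.

TRACE = "t"

def strip_traces(tree):
    """Remove traces before comparing a CS output with stored trees."""
    if isinstance(tree, tuple):
        kids = [strip_traces(k) for k in tree if k != TRACE]
        return kids[0] if len(kids) == 1 else tuple(kids)
    return tree

def spell_out(tree, lexicon):
    """Return the exponent for the whole phrase, or None (PF crash)."""
    return lexicon.get(strip_traces(tree))

# A lexicon that lacks the entry with the specifier YP but has /takatan/
lexicon = {("Z", "M"): "/takatan/"}

# ZP with YP in situ: no matching entry, so the output is filtered out
assert spell_out(("YP", ("Z", "M")), lexicon) is None
# YP has moved out, leaving a trace: the trace is ignored and /takatan/
# spells out the remnant, so movement is forced by lexical insertion
assert spell_out((TRACE, ("Z", "M")), lexicon) == "/takatan/"
```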

. Inflection: Agree outside CS

We have already mentioned that for some authors the operations that CS can perform are Merge and Agree, while others claim that Merge is the sole operation allowed in CS. For this second group of authors, Agree is an operation that takes place at the interface with phonology. The crucial idea is that some Agree operations are triggered by features whose value is idiosyncratic for specific lexical items, and as such they presuppose some kind of lexical restriction. Consider, for instance, ‘noun–adjective’ agreement in a language that morphologically marks gender, such as Spanish ().


    ()



a. est-a  chic-a  guap-a
   this-F girl-F  pretty-F
b. est-e  chic-o  guap-o
   this-M boy-M   pretty-M

In all likelihood, gender in a noun has to be stipulated lexically; at least, we still lack a predictive theory of gender assignment. If this is so, gender has to be listed outside CS, because CS is free from lexical restrictions. This means, following the reasoning, that we cannot know during Merge which gender value a noun will get. As agreement involves copying the gender value of the noun onto the adjective, this means for some authors that agreement cannot take place in CS: it must take place at the interface with phonology, where lexical restrictions are accessed, or later.

There are two immediate consequences of this approach. The first is that Agree is an interface phenomenon, and CS becomes further impoverished. The second is that no Merge operation will be ungrammatical because of a failure to agree in formal features. Several authors working inside a Minimalist framework have made this kind of proposal when studying inflectional phenomena. Marantz () applied it to case marking, which is also subject to lexical restrictions—for example, which case a preposition assigns to a DP, or whether a verb assigns dative, locative, or accusative to its internal argument; Sigurdsson (, , ), Bobaljik (), and Preminger () have recently provided preliminary evidence for this approach in the case of adjective–noun agreement, verb–argument agreement, and other formal processes with a morphological effect. This work also connects with recent proposals that the morphophonological component is able to assign a default value to any feature that cannot copy its value from another constituent (cf. Marušič, Nevins, and Badecker ); according to these more moderate approaches, failing to agree can be fixed at the interfaces. Before we conclude this section, a caveat is in order.
Here we have only presented approaches where morphological agreement is treated as a phonological phenomenon, but the operation Agree is wider than morphological agreement: in syntactic work, it also includes cases of ‘abstract’ agreement which are not reflected morphophonologically. Therefore, the claim that Agree is not an operation of CS has wider-ranging consequences. But this takes us to §..
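The view of agreement as post-syntactic value copying, with defaults repairing failed Agree, can be illustrated with a toy function. This is my own sketch, not any cited author's system; the choice of masculine as the Spanish default is an assumption for illustration.

```python
# Toy sketch of agreement at the interface: the adjective's gender
# probe copies the noun's lexically listed value; if no value can be
# copied, a default is assigned rather than the derivation crashing.

def value_probe(noun_gender, default="M"):
    """Copy the noun's gender onto the agreeing item, else use a default."""
    return noun_gender if noun_gender is not None else default

assert value_probe("F") == "F"    # est-a chic-a guap-a
assert value_probe("M") == "M"    # est-e chic-o guap-o
assert value_probe(None) == "M"   # failed Agree repaired by a default
```

The key property is that no input makes the function fail: on this view, mismatches are repaired at the interface instead of causing ungrammaticality in CS.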

. Features: are they part of CS?

As we have seen, CS in a Minimalist program of research tends to be as empty as possible, with most principles operating at the interface with external systems. However, next to Merge, CS must in principle contain something else: features, which characterize the units that are combined by Merge. Note that this is an inherent property of many of the theories we have reviewed: Neo-constructionist theories rely on features to explain differences between the units combined, and other proposals use some kind of formal feature to determine what kind of output Merge has in CS (remember the distinction between affixes and non-affixes




  ´ 

in Ackema and Neeleman ). From a Minimalist perspective, though, we must ask ourselves the following question: are features part of CS, and therefore, are they specified by UG? There are two options: (a) CS is free of features and (b) features play a role in CS.

.. A feature-free CS

The first option is currently associated with the recent work of authors who argue in favour of wild-Merge, such as Di Sciullo and Isac () and, especially, Boeckx (). The idea in this view is that all units that undergo Merge are indistinguishable from each other as far as CS is concerned: the only property they have is that they can be combined with each other into more complex structures, something instantiated as an Edge Feature (Chomsky b) which, as Samuels (: ) points out, should actually be considered a property rather than a formal feature. This allows Merge to apply, in principle, to any two objects, with differences between the objects emerging only at the interfaces, where other distinctions would be taken into account (such as grammatical category, agreement, case-marking, etc.).

This approach ultimately has two consequences that are relevant for word formation but that, as far as we know, have not yet been fully pursued in published work: first, all formal relations between heads (not only agreement, as in Bobaljik ) would be dealt with in the external systems. Second, variation, both internal to a language and across languages, would lie only at the interfaces, where equally acceptable outputs of Merge would be treated differently depending on the features that each variety takes into consideration at the interfaces with phonology and the system of thought. The second option, that Merge is sensitive to formal features and thus that features are part of CS, can be interpreted in two ways.

.. Features as part of CS

A first option is that formal features are fully specified by UG. In this view, all features relevant to morphology (category, case, inflectional properties, etc.) that human languages seem to be sensitive to would be specified by UG, and languages would vary because each language would select from that set of features—for example, in some languages nouns would not carry gender, in others they would. To the best of our knowledge, no current morphological theory directly advocates this kind of view, which has been instantiated in phonology in the recent work of Hale and Reiss (). It is, however, compatible with most available feature decompositions.

A variation of this idea is that CS contains features, but all or most of them are specified not by UG but by the external systems. The system of thought could give content to the features that CS uses as a criterion to combine units through Merge, and, perhaps more importantly, specify the relation that one feature holds with respect to the others. Notions like event, state, plural, dual, speaker, past, realis, or irrealis would then be part of language because they correspond to concepts that are manipulated by the system of thought. CS takes them into account for its processes, but they are drawn from a set which is specified not by the language capacity but by an external system. The system of thought would also partially determine how features relate to each other, given how concepts relate to each





other. Although this view is not specifically argued for in these works, feature geometries such as those proposed by Harley and Ritter () for the features of pronouns or by Cowper () for the ingredients of verb inflection8 would be compatible with this view. Example () presents Harley and Ritter’s feature geometry.

()
Referring Expression
   Participant
      Speaker
      Addressee
   Individuation
      Group
      Minimal
         Augmented
      Class
         Animate
            Feminine
            Masculine
         Inanimate

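Harley and Ritter's geometry can be encoded as parent links, with the organizing intuition that a dependent feature is licensed only if its mother node is also active. The encoding below is my own toy model, not the authors' formalism; the root label "RE" is an assumed abbreviation for Referring Expression.

```python
# Toy encoding of a feature geometry as parent links: a feature
# specification is well formed only if every active feature's
# ancestors (up to the root) are active too.

PARENT = {
    "Participant": "RE", "Speaker": "Participant", "Addressee": "Participant",
    "Individuation": "RE", "Group": "Individuation", "Minimal": "Individuation",
    "Augmented": "Minimal", "Class": "Individuation",
    "Animate": "Class", "Inanimate": "Class",
    "Feminine": "Animate", "Masculine": "Animate",
}

def well_formed(active):
    """Check that every active feature is licensed by its mother node."""
    for feature in active:
        node = feature
        while node != "RE":
            node = PARENT[node]
            if node != "RE" and node not in active:
                return False
    return True

# A first-person specification: Speaker is licensed by Participant
assert well_formed({"Participant", "Speaker"})
assert not well_formed({"Speaker"})      # dependent without its mother
assert not well_formed({"Feminine"})     # needs Animate, Class, Individuation
```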
However, there is one limit to how far this approach can go: some concepts that seem to be relevant for the system of thought are never turned into formal features relevant to CS. Conceivable conceptually grounded notions such as [edible], [dangerous], [sick], or [dead] are not grammaticalized as features, as far as we understand the data, in any language. This means that UG must play some role in choosing the features, even if it is driven by the system of thought: in the same way that only some sounds are interpreted as linguistic sounds by the child, only some concepts are interpreted as linguistic. At the very least, UG must provide speakers with a template that defines what a possible feature can be, and what it cannot.

Another crucial complication is the distinction between interpretable and uninterpretable features,9 that is, features that can be directly interpreted by the external systems and those that cannot and must be associated somehow with a compatible interpretable feature through Agree. If Agree, in a wide sense, takes place in CS, uninterpretable features must be part of CS. In the case of the uninterpretable version of a feature that otherwise corresponds to a concept, the problem is not so big: if a feature like [group] or [plural] is grounded in the system of thought, we could think that its uninterpretable version—call it [ugroup]—is an instruction to CS that a head carrying it has to be matched to a head that has [group], and that the two heads must be interpreted by the system of thought as connected. The problem is that some features might exist only in a non-interpretable version: this is the case of the EPP feature (Chomsky ), which forces a head to have a specifier, and, in some accounts, of nominative or accusative case. These features are not interpreted as any concept: what does it mean to say ‘requiring a specifier’ or ‘getting marked as the subject of a sentence’ outside a linguistic structure?
Either they have to be postulated as features specified by UG

8 However, in Cowper (: ) one of the features used has purely syntactic content and is not grounded in the system of thought: finiteness, which determines whether the sentence can license the case of a subject. This would mean that at least a small subset of features would be imposed by UG.
9 Some authors, like Chomsky () and Pesetsky and Torrego (), consider that next to the interpretable/uninterpretable divide, a second distinction is necessary between valued and unvalued features. See those references for further information.




  ´ 

or they have to be reanalysed as something else. Indeed, there are proposals in this sense: Pesetsky and Torrego (), for instance, treat case as the uninterpretable version of tense, which is one way to reduce the problem. In the case of EPP, Svenonius () suggests that the requirement that inflection must have a specifier might be related to establishing a bipartite structure in sentences that is conceptually interpreted as a predicational relationship, in which case EPP would be interpretable. However, most researchers still admit that some purely uninterpretable features exist. Many authors consider that uninterpretable features, triggering Agree, are necessary to express long-distance relationships in structures resulting from Merge (see Adger and Svenonius ). Most proposals, then, admit that UG must specify at least part of the system of features, and include Agree as one operation that CS must be able to perform.

. Summary and conclusions

In these few pages, I have barely scratched the surface of what kinds of properties a Minimalist theory of morphology could have, and I have shown how part of the work in morphological theory being developed now has crucial connections with the basic ideas of Minimalism. I have concentrated on some crucial issues having to do with principles of Merge, restrictions on Merge, and the formal specification of units, and observed some tendencies; overall, however, I have also revealed some deep disagreements in the field. Perhaps the most obvious observation that can be drawn from this chapter is that we still lack a Minimalist theory of morphology, and that, in principle, such a theory would be compatible with both a lexicalist and a constructionist model, provided that the crucial principles inherent to the proposal are derived from ‘bare’ properties of structures and their interaction with the external systems. And, luckily for those of us who work in the field, there is still a lot of work to be done in this direction.

A I am grateful to the editors, Terje Lohndal, Peter Ackema, Daniel Siddiqi, Ángel Gallego, and an anonymous reviewer for comments on earlier versions of this work. All disclaimers apply.

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi

  ......................................................................................................................

     ......................................................................................................................

 . 

. Introduction

A concern of Optimality Theory from its beginnings (McCarthy and Prince a; Prince and Smolensky /) has been to provide a coherent framework for formalizing the traditional problems that prosodic morphology poses for morphological analysis. In this chapter, I first introduce the range of constructions typically analysed under the rubric of prosodic morphology and the issues they raise for the ‘concatenative ideal’ of word formation (Bye and Svenonius ). In the second section, I show how core principles of Optimality Theory (OT) address these issues. I conclude with an evaluation which summarizes both the contributions of the OT approach to the analysis of prosodic morphology and some unresolved issues.

. Morphology and the concatenative ideal

Many current theories of morphology adopt some version of what Hockett (b) terms the Item-and-Arrangement approach to morphology. (See Lieber, Chapter , and Stewart, Chapter  this volume, for detailed discussion.) In this sort of approach, morphemes are defined as ‘items’, identified by applying the technique implied by the traditional definition of the morpheme, namely: it is a linguistic unit with a more or less constant form (pronunciation) correlating with a more or less constant meaning or grammatical function


(Bauer : ).1 The ‘more or less constant form’ is generally understood to refer to a string of segmental phonemes (though subsegmental elements like tone, stress, or segmental features have also been shown to function as morphemes (Akinlabi ; Bauer )). Words are formed by arranging morphemes in a concatenative fashion: that is, affixes are added to a Base in the same way as adding beads to a string. For example, the English word unexpected may be divided into three morphemes—un, expect, ed—consisting of segment strings that occur with similar meaning or function in other English words: unforgiven, unjustifiable, expectation, expects, painted, decided, etc. The affix un is consistently a prefix to its Base and ed is consistently a suffix. As work since at least Hockett (b) observes, several types of morphological construction do not neatly fit this model, because they violate what Bye and Svenonius (: ) term the ‘concatenative ideal’, defined more precisely as follows: () The concatenative ideal (Bye and Svenonius : –) a. Proper precedence Morphemes are linearly ordered (i.e. no overlapping). b. Contiguity Morphemes are contiguous (i.e. no discontinuity). c. Additivity Morphemes are additive (i.e. no subtraction). d. Morpheme preservation Morphemes are preserved when additional morphemes are added to them (i.e. no overwriting). e. Segmental autonomy The segmental content of a morpheme is context-free (i.e. morphemes should not have segmental content determined by the lexical entry of another morpheme). f. Disjointness Morphemes are disjoint from each other (i.e. no haplology). This chapter will show how the design features of Optimality Theory account for nonconcatenative morphology by presenting analyses of four constructions: reduplication, nicknames and other templatic truncations, root-and-pattern morphology, and infixation. 
Reduplication is illustrated by comparing the plain and reduplicated forms of verb stems in SiSwati (a Bantu language spoken in Swaziland), given in ().² The repetitive morpheme is formed iconically, by repeating part of the base verb stem, thus violating the principle of segmental autonomy (e). (The accent in the Swati data indicates High tone; Low tone is unmarked.) As a result, the segmental content of the repetitive morpheme (underlined) is

¹ This is the Structuralist definition, developed in work like Harris (), Hockett (a), and Bloomfield (), and, as we can see, still commonly cited in textbooks. See work like Selkirk (), Di Sciullo and Williams (), Lieber (), Trommer (), Bermúdez-Otero (), and Bye and Svenonius () for modern implementations of the Item-and-Arrangement approach. See work like Anderson (), Aronoff (), Stonham (), Spencer (), Bybee (), and Booij (a) for alternative approaches as well as thoughtful critiques of the continuing influence of the Item-and-Arrangement model on morphological theories.

² See Downing and Inkelas () and Inkelas and Downing () for a recent survey of typological and analytical issues in reduplication.

different with every Base. However, it does have a constant size and relation to the Base: it reduplicates just the first two syllables, no matter how long the base stem is:

() SiSwati verbal reduplication (Downing ; stems are preceded by si-ya- ‘we are’)
 Verb stem            Gloss of stem           Repetitive construction
 si-ya-bóna           ‘see’                   si-ya-boná-bona
 si-ya-khulúma        ‘talk’                  si-ya-khulu-khulúma
 si-ya-bonísana       ‘show each other’       si-ya-boni-bonísana
 si-ya-khulumísana    ‘talk to each other’    si-ya-khulu-khulumísana

Cross-linguistically, it is typical for partial reduplications to be exactly either one syllable or two syllables in size (Moravcsik ; McCarthy and Prince ). We review in §.. a variety of proposals to account for this generalization in the OT framework.

Nicknames and abbreviations are another common type of morphological construction that violates the concatenative ideal. Like reduplications, they violate segmental autonomy, as their segmental content is dependent on another morpheme. They also violate additivity (c), as they typically involve truncation of a related base. And like reduplications, they are commonly subject to size restrictions. For example, Itô () shows that Japanese ‘free truncations’ are minimally disyllabic. Examples are given below; the portion of the full word omitted in the truncation is in parentheses:

() Japanese disyllabic loanword truncations (Itô : )
 suto(raiki) ‘strike’         ope(reeshoN) ‘operation’
 ado(resu) ‘address’          poji(chibu) ‘positive’
 ama(chua) ‘amateur’          hazu(baNdo) ‘husband’
 nega(chibu) ‘negative’       roke(eshoN) ‘location’
 ita(rikku) ‘italic’          piri(odo) ‘period’
 maiku(rohoN) ‘microphone’ (*mai)
 saike(derikku) ‘psychedelic’ (*sai)
 saNdo(ichi) ‘sandwich’ (*saN)

The root-and-pattern morphology found in Semitic languages like Arabic and Modern Hebrew further exemplifies this two-syllable syndrome (e.g. McCarthy ; McCarthy and Prince , a, b; Ussishkin ). The Modern Hebrew binyanim for the verb ‘grow’ (3rd masc. sg. forms) illustrate this:

() Modern Hebrew (Ussishkin : , figure ())
 Binyan    Hebrew verb    Gloss
 paʕal     gadal          ‘he grew’ (intransitive)
 piʕel     gidel          ‘he raised’
 puʕal     gudal          ‘he was raised’
 hifʕil    higdil         ‘he enlarged’
 hufʕal    hugdal         ‘he was enlarged’

Semitic root-and-pattern morphology raises the further problem that the morphemes violate proper precedence (a) and contiguity (b). Following McCarthy’s () influential

analysis, in Semitic verbs like those in () the root consonants g d l contribute the lexical meaning ‘grow’; the vowels contribute inflectional information about tense; and the arrangement of the consonants and vowels into disyllabic patterns of varying internal composition (along with affixes, in some cases) modifies the root’s basic meaning.

Infixation, another type of prosodic morphological construction which violates proper precedence (a) and contiguity (b), is not uncommon cross-linguistically, as Yu’s () thorough study shows, and it often combines with reduplication. Infixing reduplication is illustrated by the Samoan plural formation process in ():

() Samoan plural infixing reduplication (Yu : )
 tóa ‘brave’        totóa
 má: ‘ashamed’      mamá:
 alófa ‘love’       a:lolófa
 galúe ‘work’       ga:lulúe
 atamái ‘clever’    atamamái

(The accent in the Samoan data indicates stress; vowel length alternations are related to metrical structure.) As we can see, the reduplicative morpheme (underlined) occurs within the stem, immediately preceding the stressed vowel. It is not adjoined to the stem.

To sum up, prosodic morphological constructions are a challenge for morphological analysis, as they do not conform to the principles of the ‘concatenative ideal’ defined in (). Prosodic morphemes like reduplications or templatic truncations do not have a fixed segmental content independent of their base, but rather a fixed shape. Truncations, furthermore, involve subtraction from a base. In other types of prosodic morphology—root-and-pattern morphology and infixation—the morphemes are intercalated with their base. In this chapter we shall see how OT provides a framework which allows us to formalize both the axioms of the concatenative ideal and the deviations from it which characterize prosodic morphology.³
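The size-invariant, base-dependent character of the SiSwati repetitive in () can be made concrete with a short sketch. The CV syllabifier below and the toneless ASCII spellings of the stems are deliberate simplifications for this illustration only, not an analysis tool from the literature: the point is just that the reduplicant's segments come entirely from the base (violating segmental autonomy) while its size stays fixed at two syllables.

```python
import re

def cv_syllables(stem):
    # Crude CV syllabification for illustration only: each syllable is
    # any run of consonants followed by a single vowel (tone ignored).
    return re.findall(r"[^aeiou]*[aeiou]", stem)

def repetitive(stem):
    # The repetitive morpheme copies exactly the first two syllables of
    # the base stem, whatever those segments happen to be.
    red = "".join(cv_syllables(stem)[:2])
    return red + "-" + stem

# repetitive("bona")     -> "bona-bona"
# repetitive("khuluma")  -> "khulu-khuluma"
# repetitive("bonisana") -> "boni-bonisana"
```

Note that the function has no fixed segmental material of its own: change the base and the reduplicant changes with it, which is exactly the property that resists an Item-and-Arrangement treatment.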

. Optimality Theory and the ‘concatenative ideal’

.. Constraint types in OT

We begin our discussion of the OT approach to prosodic morphology by briefly reviewing the three main constraint types: FAITHFULNESS, MARKEDNESS, and ALIGNMENT (McCarthy and Prince a, b, a, b, a, ).

³ It is assumed that the reader has a grasp of the basics of Optimality Theory (OT), so that the formalism adopted for the analyses can be followed from the discussion provided. Readers wishing more of an introduction to OT can consult one of the several good introductory works available, such as Archangeli and Langendoen (), Kager (), McCarthy (), or, most compactly, Archangeli ().

FAITHFULNESS constraints establish a correspondence relationship—that is, segmental dependence—between two morphologically related strings and evaluate their similarity to each other. As Kager () observes, FAITHFULNESS constraints relating input and output realizations of the same morphological material formalize the generalization that it is optimal to preserve lexical (input) contrasts by penalizing the insertion or deletion of structure in the output. However, if FAITHFULNESS were always respected, then phonological alternation would not be possible. A fundamental claim of OT is that the motivation for phonological alternations (i.e. violations of FAITHFULNESS) comes from constraints optimizing unmarked structure.

MARKEDNESS constraints are intended to formalize well-supported typological generalizations, such as—taking Kager’s () classic example—that it is marked for obstruents to be voiced in coda position: *VOICED-CODA (Kager : ). Ranking a MARKEDNESS constraint above the relevant FAITHFULNESS constraint optimizes phonological alternations. For example, final devoicing is optimal in languages like Dutch, German, Polish, Turkish, etc., if *VOICED-CODA is ranked above IDENT-IO(voice).

()

Tableau illustrating the constraint ranking optimizing Final Devoicing (Kager : )

 Input: /bed/    *VOICED-CODA    IDENT-IO(voice)
 a. bed          *!
 ☞ b. bet                        *

MARKEDNESS >> FAITHFULNESS: optimizes alternation

OT proposes that language variation and change are principled and expressible in factorial typologies defined through re-ranking the same (sets of) constraints. The opposite ranking of *VOICED-CODA and IDENT-IO(voice) accounts for the lack of Final Devoicing in other Germanic languages, like English and Swedish. This is shown in the tableau in ():

() Tableau illustrating the constraint ranking penalizing Final Devoicing (Kager : )

 Input: /bed/    IDENT-IO(voice)    *VOICED-CODA
 ☞ a. bed                           *
 b. bet          *!

FAITHFULNESS >> MARKEDNESS: penalizes alternation

In the logic of OT, other cross-linguistic generalizations, like the size restrictions typical of prosodic morphology, should also be expressed as MARKEDNESS constraints. This is discussed in §...

ALIGNMENT constraints (McCarthy and Prince b) define relationships between constituents of phonological or morphological structure, and require that constituent edges coincide. ALIGNMENT constraints have the form ALIGN(E1, X; E2, Y), where E can be a Left or Right constituent edge, and X and Y can be any prosodic constituent (e.g. syllable, foot, prosodic word) or any morphological constituent (e.g. a particular morpheme, like the plural, or a morpheme type, such as root or stem). For example, the generalization that, in many languages, syllabification does not cross word boundaries can

be formalized by an alignment constraint requiring the left edge of every word to coincide with the left edge of some syllable: ALIGN-L(Word, σ). Since grammatical constituents can be arguments in alignment constraints, one can see the potential of these constraints to define morphological position, by aligning a morpheme edge with the edge of a morphological or prosodic base.⁴ For example, the suffixing position of the plural morpheme in English could be formalized as ALIGN(L, plural; R, Noun): that is, align the left edge of the plural morpheme with the right edge of a noun (adapted from Russell : ).
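The logic of strict domination and re-ranking can be sketched in a few lines of code. Everything here (the two-candidate set, the way violations are counted) is a toy construction for illustration, not a published OT implementation; it simply shows that comparing violation profiles lexicographically under a ranking reproduces the two final-devoicing tableaux, and that re-ranking the same constraints yields the other language type (a factorial typology in miniature).

```python
def voiced_coda(inp, out):
    # *VOICED-CODA (markedness): penalize a word-final voiced obstruent.
    return 1 if out and out[-1] in "bdgvz" else 0

def ident_voice(inp, out):
    # IDENT-IO(voice) (faithfulness): penalize each input->output voicing change.
    flips = {("b", "p"), ("d", "t"), ("g", "k"), ("v", "f"), ("z", "s")}
    return sum((i, o) in flips or (o, i) in flips for i, o in zip(inp, out))

def optimal(inp, candidates, ranking):
    # Strict domination = lexicographic comparison of violation profiles:
    # a single violation of a higher-ranked constraint outweighs any number
    # of violations of lower-ranked ones.
    return min(candidates, key=lambda c: [con(inp, c) for con in ranking])

# Markedness >> Faithfulness: final devoicing (Dutch-type grammar)
assert optimal("bed", ["bed", "bet"], [voiced_coda, ident_voice]) == "bet"
# Faithfulness >> Markedness: no devoicing (English-type grammar)
assert optimal("bed", ["bed", "bet"], [ident_voice, voiced_coda]) == "bed"
```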

.. Segmental dependence

... Segmental dependence as correspondence

Prosodic morphology constructions like reduplication and truncation violate the concatenative ideal because these morpheme types do not have a fixed, autonomous segmental form. Rather, their segmental content is determined by their bases. This dependent relationship can be straightforwardly modelled in OT in terms of FAITHFULNESS constraints, the type of constraint which establishes a correspondence between related morphemes and optimizes formal identity between the corresponding strings. In what has come to be termed BRCT⁵ OT (i.e. work in the tradition of McCarthy and Prince a, a, b, a, b), not only inputs and outputs but also reduplicative morphemes and their bases are in correspondence, and this dependent relationship is evaluated by reduplication-specific Base-Reduplicant (B-R) FAITHFULNESS constraints which parallel in every way Input-Output (I-O) FAITHFULNESS constraints. Work since Benua () has extended this approach to truncations, proposing that Base-Truncation (B-T) FAITHFULNESS constraints optimize identity of a truncation with its base.

The OT analysis of the first reduplication example in (), si-ya-bona-bona ‘we are seeing now and again’, in the tableau in () illustrates the role of correspondence constraints in motivating the process of reduplication. Note that IDENT-BR violations count syllables in the Base which do not occur in RED and vice versa; if IDENT-BR is not outranked by any MARKEDNESS constraints, then total reduplication is optimal:⁶

()
 Input: -RED-bona    IDENT-IO    IDENT-BR
 ☞ a. -bona-bona
 b. -ø-bona                      *!*
 c. -bo-bona                     *!
 d. -bo-bo           *!*
 e. -yiyi-bona                   *!***

⁴ This potential receives particularly thoughtful discussion in early OT papers such as Golston () and Russell ().
⁵ BRCT stands for Base Reduplicant Correspondence Theory and is used as a shorthand for approaches within OT which adopt construction-specific correspondence constraints.
⁶ The analysis here is incomplete, as we are setting aside the question of how the partial reduplication pattern found in (2)—and the analogous partial identity characteristic of truncation—are optimized. The formalization of fixed reduplicative shape is the topic of §... Truncation is the topic of §....




As we can see, total reduplication (candidate (a)) is optimal because it satisfies IDENT-BR: the Base and the reduplicative string are identical. Not reduplicating any material (candidate (b)), not reduplicating the entire string (candidate (c)), and inserting material with no segmental correspondence to the Base (candidate (e)) all violate IDENT-BR. Truncating the Base to match a partial reduplicant (candidate (d)) is non-optimal as it violates IDENT-IO.⁷

It is important to note that the label ‘RED’ in the tableau is a phonologically and morphologically null diacritic, included in the input by OT convention as a shortcut means of both introducing a reduplication grammar and indicating the approximate position of the reduplicative output. As Spencer () has observed, eliminating an input representation for RED and characterizing its segmental realization in terms of constraint interaction means that OT takes an a-morphous approach to some morphological constructions, in the spirit of work like Anderson () and others cited in footnote . In the next section, we shall see that the fixed shapes which are characteristic of prosodic morphemes are also defined in BRCT OT purely in terms of constraint interaction rather than as lexical items.
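Base-Reduplicant correspondence can be sketched by adding a BR-faithfulness constraint to the same kind of toy evaluator used above. The syllable-counting metric below is only a schematic stand-in for IDENT-BR as described in the text (one violation per Base syllable absent from RED, and vice versa); the CV parser is an illustrative simplification, not an analysis from the literature.

```python
import re

def syllables(s):
    # Crude CV parser, for illustration only.
    return re.findall(r"[^aeiou]*[aeiou]", s)

def ident_br(base, red):
    # Schematic IDENT-BR: one violation per corresponding syllable that
    # differs, plus one per syllable present in only one of the strings.
    b, r = syllables(base), syllables(red)
    return sum(x != y for x, y in zip(b, r)) + abs(len(b) - len(r))

# Candidate reduplicants for Base 'bona', evaluated by IDENT-BR alone
# (i.e. with IDENT-BR undominated):
candidates = ["bona", "", "bo", "yiyi"]
winner = min(candidates, key=lambda red: ident_br("bona", red))
assert winner == "bona"   # total reduplication incurs zero violations
```

This is the formal sense in which the reduplicant has no lexical segmental content of its own: its optimal shape falls out of constraint evaluation over candidates.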

... Segmental dependence without construction-specific correspondence

Before taking up the question of how fixed shapes are defined, though, it is important to note that there are alternative approaches within OT to account for the segmental dependence between reduplicants (and truncations) and their bases. In these approaches, the only correspondence constraints are those relating Input and Output. Construction-specific correspondence constraints, like Base-RED correspondence, are rejected as redundant. The best developed of these approaches to reduplication is Morphological Doubling Theory (MDT; Inkelas and Zoll ).⁸ The basic idea behind MDT is that reduplication involves ‘self-compounding’: that is, doubling of the semantic features of morphemes, not their phonological form. If the two halves of the compound structure realize phonologically identical expressions of the same semantic features, then total reduplication necessarily results. To illustrate with the example in (), verb stem reduplication in Swati would be analysed as Stem self-compounding: [Stem [Stemᵢ] [Stemᵢ]]. If both Stems in the self-compound realize the morpheme ‑bona ‘see’, then, all things being equal, the two stems will be phonologically identical, yielding total reduplication.

All is not always equal, and Inkelas and Zoll’s work shows that an important advantage of MDT is to be able to provide an elegant account of forms of reduplication where the two

Note that I-IO is not violated by outputs containing reduplicative content, even though material occurs in the reduplicative output which is not found in the input. Instead, segments in the reduplicative string violate a low-ranked I constraint: No element of the input has multiple correspondents in the output (McCarthy and Prince a; Pulleyblank ). Reduplicated segments are considered to be multiple correspondents of input elements, violating I, rather than inserted segments that violate I-IO. 8 See Inkelas (, , ), Hyman, Inkelas, and Sibanda (), and Inkelas and Zoll () for detailed analyses of reduplication in MDT proper, and see Pulleyblank () and Yip () for interesting variants on the morphological doubling approach to total reduplication. Saba Kirchner () motivates a morpho-syntactic copying approach to total reduplication which is very much in the spirit of morphological doubling. These approaches differ in important details, and in particular they do not all share the same approach to partial reduplication, as we shall see in §....


halves of the reduplicative compound are not identical for morphological reasons. For example, in Sye (an Erromangan language), most verb roots have two morphologically conditioned alternants: Root₁ and Root₂. In some reduplication constructions, both halves of the reduplicant contain the same alternant: omol-omol ‘fall all over’. In others there is obligatorily a different alternant in each of the two halves: cw-amol-omol ‘they will fall all over’. Both of these patterns can be analysed as forms of root self-compounding in MDT: that is, [Root [Root] [Root]]. The phonological difference follows from the different morphological specifications in the reduplicative construction: either identical alternants of the same root define the construction—[Root [Root₁] [Root₁]]—or different alternants of the root do—[Root [Root₁] [Root₂]]. These kinds of reduplications would be very difficult—if not impossible—to account for in BRCT analyses, as the Base and RED are phonologically distinct even though they are morphologically related.⁹
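MDT's doubling of morphemes rather than phonological strings can be sketched as follows. The alternant labels, the construction specifications, and the omission of the cw- prefix are schematic stand-ins based only on the description above, not on a published analysis of Sye.

```python
# Each root lists its morphologically conditioned alternants
# (schematic Sye-like data; which alternant is "1" vs "2" is assumed here).
ALTERNANTS = {"fall": {1: "omol", 2: "amol"}}

def self_compound(meaning, spec):
    # MDT: the construction doubles the morpheme (its meaning), and each
    # daughter independently realizes an alternant, as specified by the
    # construction itself; phonological identity is not enforced anywhere.
    left, right = spec
    return ALTERNANTS[meaning][left] + "-" + ALTERNANTS[meaning][right]

# Construction calling for identical alternants in both halves:
assert self_compound("fall", (1, 1)) == "omol-omol"
# Construction calling for a different alternant in each half
# (stem portion of cw-amol-omol; the prefix is omitted here):
assert self_compound("fall", (2, 1)) == "amol-omol"
```

The design point is that nothing compares the two output strings: their (non-)identity is a by-product of which alternants the construction selects, which is why the mismatching pattern poses no problem.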

.. Fixed shape as the emergence of the unmarked

A cross-linguistically common source of mismatch between the Base and the reduplicative morpheme comes from partial reduplication, where RED (like other prosodic morphology constructions; see §.) must satisfy a particular minimal and maximal size restriction, typically one to two syllables. In fact, the complete pattern of Swati verb stem reduplication in () shows that the reduplicative morpheme is maximally two syllables. Diyari (Australian) provides another example:

() Diyari reduplication (McCarthy and Prince a: , figure ())
 a. wíl̪a        wíl̪a-wíl̪a          ‘woman’
 b. kánku       kánku-kánku        ‘boy’
 c. kúɭkuŋa     kúɭku-kúɭkuŋa      ‘to jump’
 d. tjílparku    tjílpa-tjílparku     ‘bird sp.’
 e. ŋánkan̪t̪i    ŋánka-ŋánkan̪t̪i     ‘catfish’

We briefly review in this section three distinct types of MARKEDNESS constraint which have been proposed to define the fixed shapes of prosodic morphology: morpheme-specific templates (McCarthy and Prince a; Gouskova ), generalized Prosodic Hierarchy-based templates (McCarthy and Prince a, b, a, b), and generalized morpheme-based templates (Urbanczyk ; McCarthy and Prince ; Downing ). What these three approaches have in common is that partial reduplication follows from what McCarthy and Prince (a, ) and Alderete et al. () term the Emergence of the Unmarked (TETU): that is, the asymmetrical occurrence of less marked structure in the RED than in the Base, which work since Steriade () has shown is characteristic of reduplicative constructions. Reduction in size is simply another of the reductions in markedness that characterize reduplicative morphemes.

We can illustrate RED markedness reductions with Nupe (Benue-Congo). Like many other related languages in this part of Africa (e.g. Akan (McCarthy and Prince a); Yoruba (Orie )), it forms

⁹ However, see Downing (, ) for a reanalysis in BRCT terms of the Bantu cases discussed in the work of Inkelas and her co-authors where RED contains an allomorph of the Base.




gerundive nouns from verbs by partially reduplicating the Base verb. As shown in (), below, the reduplicative morpheme is always a single CV syllable, no matter how long the Base is, with a fixed high vowel, no matter what height the corresponding Base vowel is, and with a Mid tone, no matter what the tone of the corresponding Base vowel is: ()

Nupe gerundive reduplication (Smith ; Akinlabi )
 a. bé ‘come’        bi-bé ‘coming’
 b. kpà ‘drizzle’    kpi-kpà ‘drizzling’
 c. jákpe ‘stoop’    ji-jákpe ‘stooping’
 d. kúta ‘overlap’   ku-kúta ‘overlapping’

Akinlabi () and Pulleyblank () argue that Mid tone is the unmarked tone in threetone languages like Nupe, and [+high] is the unmarked vowel quality. By analysing reduplicative fixed shape as an example of TETU, OT provides an elegant way of formalizing Hyman’s (a) and Niepokuj’s () proposal that partial reduplication develops historically from total reduplication. This historical process can be cast in terms of a factorial typology. If I-BR is undominated, then total reduplication is optimal. If some M constraint(s) come(s) to outrank I-BR, reduplicative reduction is optimal. The same M constraint(s) must be ranked below IIO in order to asymmetrically allow the unmarked structure to emerge in the RED but not in other outputs. The TETU constraint ranking is thus: () I-IO >> M >> I-BR Since a TETU account of reduplicative reduction relies on Base–Reduplicant correspondence, the OT approaches which reject this type of correspondence, discussed in §..., must provide another account of partial reduplication. We take up this point in §....

... Morpheme-specific templates

According to McCarthy and Prince’s influential () work, the fixed shapes of prosodic morphology should be equivalent to ‘authentic units of prosody’: syllable, foot, or prosodic word. McCarthy and Prince (a) recast this proposal in OT terms using MARKEDNESS constraints defining the unit of prosody relevant for a particular prosodic morpheme: for example, RED=σ; RED=FOOT. (Only reduplicative fixed shape was considered in this work.) To illustrate, the disyllabic maximality requirement on Diyari reduplication illustrated in () could be formalized by the following RED=FOOT constraint:

() RED=FOOT: The reduplicative morpheme is minimally and maximally a disyllabic Foot. (Downing a, b)

Ranking this markedness constraint between IDENT-IO and IDENT-BR—the TETU ranking—optimizes partial reduplication in longer Diyari reduplications like tjílpa-tjílparku ‘bird sp.’, as shown in the tableau in () (cf. ()):

()
 Input: -RED-tjílparku        IDENT-IO    RED=FOOT    IDENT-BR
 ☞ a. -tjílpa-tjílparku                                ***
 b. -ø-tjílparku                          *!           *****
 c. -tjílparku-tjílparku                   *!           *
In this tableau, we see that partial reduplication (candidate (a)) is optimal as it satisfies RED=FOOT: the reduplicative string is minimally and maximally a disyllabic Foot. Not reduplicating any material (candidate (b)) and reduplicating the entire string (candidate (c)) are both non-optimal, as these candidates violate RED=FOOT: neither contains a reduplicative string that is exactly disyllabic.

While this analysis clearly works, McCarthy and Prince (a, b, ) argue that morpheme-specific templatic constraints like RED=FOOT are unexplanatory. They merely stipulate the fixed shape of a particular morpheme, without linking the shape to more general phonological or morphological principles of the language. Morpheme-specific templates also miss the generalization that the same small set of fixed shapes is relevant for a variety of prosodic morphological constructions in unrelated languages, as we saw in §., above. Even in Diyari, not just reduplicative morphemes but also prosodic words must meet a disyllabic minimality condition (with one exception, the conjunction ya ‘and’). But RED=FOOT obviously has no effect in optimizing a minimality condition outside of reduplication. For reasons like these, a body of subsequent work in OT on Generalized Template Theory (GTT) has argued that morpheme-specific constraints do not provide a satisfactory account of why particular fixed shapes are unmarked. These theories propose instead that the fixed shapes of prosodic morphology follow from independent principles universally relating particular morphological constituent types to particular prosodic constituents. Two varieties of GTT are summarized in §§... and ....
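The effect of sandwiching a templatic markedness constraint between IO- and BR-faithfulness can be sketched by extending the toy evaluator with a RED=FOOT-style constraint. The candidate set, the ASCII rendering of Diyari tjílparku, and the CV syllable counter are simplifications for this illustration only.

```python
import re

def syllables(s):
    # Crude CV parser, for illustration only.
    return re.findall(r"[^aeiou]*[aeiou]", s)

def red_eq_foot(base, red):
    # RED=FOOT (markedness): the reduplicant must be exactly disyllabic.
    return 0 if len(syllables(red)) == 2 else 1

def ident_br(base, red):
    # Schematic BR-faithfulness: syllable mismatches plus size difference.
    b, r = syllables(base), syllables(red)
    return sum(x != y for x, y in zip(b, r)) + abs(len(b) - len(r))

def optimal(base, reds, ranking):
    # Strict domination as lexicographic comparison of violation profiles.
    return min(reds, key=lambda red: [c(base, red) for c in ranking])

base = "tjilparku"                  # tone and retroflexion omitted
reds = ["tjilparku", "tjilpa", ""]  # total, disyllabic, and null copies

# TETU ranking: the markedness constraint dominates BR-faithfulness,
# so partial (disyllabic) reduplication wins for a trisyllabic base:
assert optimal(base, reds, [red_eq_foot, ident_br]) == "tjilpa"
# Re-ranking: with BR-faithfulness on top, total reduplication wins:
assert optimal(base, reds, [ident_br, red_eq_foot]) == "tjilparku"
```

The re-ranking at the end is the factorial-typology point made in the text: the historical shift from total to partial reduplication is just a promotion of the markedness constraint over IDENT-BR.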

... Generalized Prosodic Hierarchy-based templates

The theory of generalized Prosodic Hierarchy-based templates is developed in detail in McCarthy and Prince (a, b, a, b, ), Urbanczyk (), and McCarthy (). The first principle of this theory is that a prosodic morpheme, like other morphemes, must be assigned a morphological category:

() PROSODIC MORPHEME (e.g. RED) = MORPHOLOGICAL CATEGORY (Stem, Root, Affix)

The two target shapes for prosodic morphemes—one syllable or two syllables—are derivable from principles establishing a correlation between prosodic units and morphological units. These correlations are formalized in (a–c):

() Generalized template constraints (McCarthy and Prince a, b; Urbanczyk ; McCarthy )
 a. AFFIX ≤ σ: The phonological exponent of an affix is no larger than a syllable.
 b. STEM → PRWD HOMOLOGY: Stems are mapped to Prosodic Words.

 c. HEADEDNESS: Every Prosodic Word must contain a Foot (and every Foot a syllable and every syllable a mora).

A Stem–Foot correlation follows from the Prosodic Hierarchy (Nespor and Vogel ; Selkirk ), if one adopts the proposal in (b) that Stems are universally mapped to Prosodic Words. As argued in work such as Prince and Smolensky (), because Prosodic Word dominates stress Foot in the Prosodic Hierarchy, all Prosodic Words must contain an optimal stress Foot to satisfy HEADEDNESS (c). (The Affix–monosyllable correlation constraint in (a) is, however, simply a stipulation: it does not follow from an independent theoretical principle in the way the Stem–Foot correlation does.)

According to this approach, then, the same shapes are relevant to several distinct morphological constructions—reduplication, minimal words, root-and-pattern morphology, truncations—because all prosodic morphemes of the same morphological category—Stem or Affix—are subject to the same shape constraints. This is what is meant by claiming templates are ‘generalized’: they are motivated by construction-independent principles, so constructions involving morphemes of the same category should be subject to the same size restrictions.

Diyari is the textbook example illustrating this approach, and we summarize McCarthy and Prince’s (a, ) influential analysis here. As noted above, Diyari words are minimally disyllabic (with one exception, the conjunction ya ‘and’). If all Prosodic Words must be parsed into stress feet to satisfy HEADEDNESS (c) and all stress feet are minimally binary, then all Prosodic Words in a trochaic language like Diyari must be minimally disyllabic. As we can see in (), the reduplicative prefix (in boldface), like the Prosodic Word, is minimally disyllabic. As McCarthy and Prince (a, b, ) argue, labelling the RED a Stem, so that the reduplicative construction is a Stem–Stem compound, correctly predicts its disyllabic minimal size, given the STEM → PRWD HOMOLOGY (b). That is, RED is subject to the same minimal size restriction as words, because it is also a Prosodic Word. Unlike other Prosodic Words, though, RED is also subject to a disyllabic maximality condition. McCarthy and Prince argue that a TETU ranking () of the foot-parsing constraints, PARSE-σ and ALL-FT-LEFT, defines a single stress Foot as the ‘unmarked Prosodic Word’, which emerges as optimal in reduplication:

() MAX-IO >> PARSE-σ >> ALL-FT-LEFT >> MAX-BR

PARSE-σ is satisfied if all syllables in the string are footed. ALL-FT-LEFT is satisfied if only the leftmost two syllables of the string are footed, as any other feet will not be aligned at the left edge of the Prosodic Word. ALL-FT-LEFT is routinely violated in non-reduplicative Prosodic Words of Diyari, as they can have more than two syllables and then have an alternating stress pattern. Thus it must be ranked below MAX-IO. However, ranking this constraint above MAX-BR ensures that it will be optimal to include only a single Foot in the reduplicative string, rather than copying the entire Base.¹⁰ The analysis is exemplified in the tableau in ():

¹⁰ MAX-IO and MAX-BR are FAITHFULNESS constraints, like IDENT-IO and IDENT-BR. MAX constraints are satisfied if all of the segments of the Input/Base are realized in the Output/Reduplicant.

() Diyari footing in reduplicated forms (adapted from McCarthy and Prince : )

 Input: /RED + Stem = ŋandawalka/        MAX-IO    PARSE-σ    ALL-FT-LEFT    MAX-BR
 ☞ a. (ŋánda) = (ŋánda)(wàlka)                                 *              **
 b. (ŋánda)(wàlka) = (ŋánda)(wàlka)                            **!
 c. (ŋánda) = (ŋánda)                    *!*

Candidate (a), where the reduplicative Prosodic Word minimally and maximally parses a single stress Foot, is optimal. Since the reduplicative Prosodic Word contains a single Foot, ALL-FT-LEFT is minimally violated, and the higher-ranked constraints requiring all input segments to be realized in the output (MAX-IO) and all syllables in the output string to be parsed (PARSE-σ) are satisfied. The total reduplication candidate (b) is non-optimal, as it incurs an additional ALL-FT-LEFT violation. Candidate (c), where both the RED and the Base Prosodic Word contain a single Foot, is non-optimal as it violates high-ranked MAX-IO. If we compare this tableau with the one in (), we can see that constraints independently required for parsing the string into metrical feet, PARSE-σ and ALL-FT-LEFT, define the maximally disyllabic template, replacing the morpheme-specific constraint RED=FOOT.

In Diyari, then, the generalized Prosodic Hierarchy-based approach works perfectly. The minimal Prosodic Word is a minimal stress Foot, namely, a syllabic trochee. The reduplicative morpheme is a Prosodic Word = stress Foot. This accounts for why it has the same minimal size as Prosodic Words. Independent evidence for the Prosodic Word status of the reduplicative morpheme in Diyari is that it is an independent stress domain, and it is vowel-final, like other Prosodic Words in Diyari (McCarthy and Prince a, ).

As Downing () and Ussishkin (, ) have argued, however, this approach to reduplicative and minimal word size has the drawback that it does not generalize to languages which do not have word-level stress, or where prosodic minimality constraints do not match the optimal stress Feet of the language. We can illustrate the problem in detail with Semitic root-and-pattern morphology. As Ussishkin (, ) shows, the verb stems of Modern Hebrew are required to be minimally and maximally disyllabic in most conjugations, as illustrated in (), above.
We find a similar disyllabicity constraint for Classical Arabic verb conjugations, as shown by McCarthy () and McCarthy and Prince (, ). The analysis of the disyllabicity requirement on Semitic verb stem conjugations should be quite straightforward in GTT. If the stem is parsed as a Prosodic Word, it dominates a stress Foot. If the stress Foot is minimally disyllabic, the stems inherit this requirement. However, as Hayes () has argued, pressure for disyllabic minimality is most consistent with syllabic trochee stress systems, like that of Diyari, because iambic or moraic trochee systems can parse heavy (bimoraic) monosyllables as well-formed minimal Feet. Unfortunately, Ussishkin () demonstrates that the regular final stress found in Modern Hebrew is most consistent with an ‘emergent iamb’ analysis which allows for a minimal degenerate (light monosyllabic) Foot. McCarthy () and Hayes () show that Classical Arabic, like most modern Arabic dialects, has a moraic trochee stress system. It is also significant that in both Arabic and Modern Hebrew, nouns—which are not required to fit into a conjugational template—have the minimal monosyllabic size predicted by the stress systems. For all of these reasons, the disyllabicity requirement on verb

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi

stems in Arabic and Modern Hebrew cannot follow from the Stem = Prosodic Word Homology in (b) and the Homology in (c). In fact, Gordon's () thorough survey demonstrates that cross-linguistically there is little evidence for a correlation between stress foot type and word minimality restrictions. Further problems for the homology come from the many examples of templatic morphology found in languages without lexical stress. SiSwati has a disyllabicity constraint on reduplication, as shown in (), and Japanese truncations are subject to size restrictions, as shown in (), yet neither language has lexical stress. Furthermore, Itô () argues that since Japanese has many monomoraic words in its native vocabulary, the size restriction on truncations cannot follow from a general minimal word constraint.11

... Generalized morpheme-based templates

To account for the numerous languages where prosodic size restrictions do not follow from stress Foot type, work like Downing () and Ussishkin () develops another line of thinking found in the OT literature about the correlation between morphological structure and prosodic constituents, namely that the minimal morphology–prosody correlation is between a single morpheme and a single syllable (see, e.g., McCarthy and Prince b; Russell ; Urbanczyk ):

() MORPHEME–SYLLABLE CORRESPONDENCE (M-S; adapted Russell : )
Each morpheme contains exactly one syllable.

The tendency for some (prosodic) morphemes to satisfy a disyllabic size requirement follows from Dresher and van der Hulst's () proposal that there is a correlation between morphological complexity and phonological complexity. Lexical heads (Roots or Stems) require branching structure, as defined in ():

() PROSODIC BRANCHING (adapted Ussishkin : )
A constituent branches iff it or its daughter contains more than one daughter.

In this morpheme-based version of GTT, as in the Prosodic Hierarchy-based version, templates are generalized. The same fixed shapes are relevant to different morphological constructions—reduplication, minimal words, stem templates, truncations—because all morphemes of the same category (Stem, Root, Affix) are subject to the same size restrictions. The approach improves on the Prosodic Hierarchy-based approach, however, by divorcing fixed shape from a non-universal prosodic property, namely, the stress foot. Furthermore, as Ussishkin (), following Itô (), argues, it straightforwardly accounts for why derived words—for example, Modern Hebrew verb stems in the root-and-pattern morphology illustrated in () or Japanese free truncations in ()—are often required to be minimally disyllabic, while non-derived words (e.g. Arabic nouns or non-truncated Japanese words) are not. Derived words are, by definition, minimally bimorphemic Stems and are therefore minimally disyllabic, to satisfy M-S ().

11 See Downing () and Ussishkin () for many more examples like this.

These points are illustrated with a quick sketch of how the morpheme-based approach would account for fixed disyllabic reduplicative shape in Diyari (). For Diyari, this GTT approach must explain why a disyllabic minimality requirement holds for monomorphemes. And it needs to explain why the RED meets all the tests for Prosodic Word status, not just the minimal size constraint. Downing (: ) provides the following account. Reduplicative constructions in Diyari are Root–Root compounds. It is common, cross-linguistically, for each half of a compound to be parsed into a distinct Prosodic Word (Nespor and Vogel ). Diyari Roots are minimally disyllabic because Roots (as lexical morphemes) must prosodically branch. Branching monosyllables are not found in Diyari: there are no long vowels in the language, and words must end with a vowel. Therefore, only disyllabic forms can satisfy PROSODIC BRANCHING (). Maximal disyllabicity for the RED is optimized by a binarity constraint (Downing : ; see Ussishkin :  for an alternative formulation):

() BINARITY
Each daughter of a constituent must be adjacent to some edge of the constituent.

The tableau in () exemplifies the Diyari analysis. Note that partial reduplication continues to be a TETU effect (see ()) in this approach, made optimal by ranking B between I-IO and M-BR. Since P B holds for both RED and the Base, it outranks both IO and BR correspondence constraints: ()

Diyari reduplication: disyllabicity without appeal to stress feet /REDRoot=ŋandawalka/

P B M- IO B M-BR

☞a. ŋánda = ŋándawàlka

**

b. ŋándawàlka = ŋándawàlka c. ŋánda = ŋánda

**

**!** *!*

As we see in this tableau, candidate (a) is optimal, as RED satisfies PROSODIC BRANCHING and satisfies BINARITY better than the total reduplication candidate (b). Candidate (c) is non-optimal, as truncating the Base to satisfy BINARITY violates higher-ranked MAX-IO.

To sum up the section so far, we have seen that one innovation of BRCT OT is to formalize the fixed shape (and other reductions characteristic of REDs) through constraint ranking, the so-called TETU ranking in (). All of the approaches summarized in this section share this assumption. The two generalized template GTT approaches also dispense with morpheme-specific templates to define the fixed shapes. Rather, the shapes emerge from general prosodic properties, like stress footing or branchingness and binarity constraints. In other words, the shapes are not defined in lexical entries but rather emerge as the output of constraint interaction. Of additional importance for morphologists is that both GTT approaches share the claim that the fixed shapes of prosodic morphology should correlate with the canonical (unmarked) shapes of particular morpheme categories—Affix, Root, Stem. As we have seen, Stems in many unrelated languages (Diyari, Japanese, Modern Hebrew, SiSwati) do, in fact, provide evidence for a disyllabic Stem syndrome.
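The strict-domination logic behind such tableaux can also be stated in code. The following Python sketch is an illustrative assumption, not the chapter's formalism: candidates are (RED, Base) pairs of syllable lists, violations are counted in syllables, and PROSODIC BRANCHING is omitted because all three candidates satisfy it.

```python
def evaluate(candidates, ranking):
    """Strict-domination OT evaluation: violation profiles are compared
    lexicographically along the ranking, so a single extra violation of
    a higher-ranked constraint is fatal."""
    return min(candidates, key=lambda cand: tuple(c(cand) for c in ranking))

# Candidates for Diyari /ŋandawalka/ as (RED, Base) syllable lists.
a = (["ŋán", "da"], ["ŋán", "da", "wàl", "ka"])               # partial RED
b = (["ŋán", "da", "wàl", "ka"], ["ŋán", "da", "wàl", "ka"])  # total RED
c = (["ŋán", "da"], ["ŋán", "da"])                            # truncated Base

INPUT_SYLLABLES = 4  # syllable count of the input Root /ŋandawalka/

def max_io(cand):    # MAX-IO: input syllables missing from the Base
    return INPUT_SYLLABLES - len(cand[1])

def binarity(cand):  # BINARITY: non-edge-adjacent syllables in RED and Base
    return sum(max(0, len(word) - 2) for word in cand)

def max_br(cand):    # MAX-BR: Base syllables with no correspondent in RED
    return len(cand[1]) - len(cand[0])

# The TETU ranking MAX-IO >> BINARITY >> MAX-BR selects partial reduplication:
assert evaluate([a, b, c], [max_io, binarity, max_br]) == a
```

Re-ranking MAX-BR above BINARITY makes the total reduplication candidate (b) win instead, mirroring the constraint-interaction point made in the text.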



... Violation of additivity: nicknames, abbreviations, and other templatic truncations

OT analyses of templatic truncation processes (such as the Japanese loanword truncations in (), above) have built on the similarities between truncation and reduplication.12 Both constructions are defined through segmental dependence on a base. As work like Alber and Arndt-Lappe (), Benua (), Downing (), and McCarthy and Prince () shows, they are typically distinguished from their bases by being subject to the same kinds of prosodically defined size restrictions as we find in reduplication. (Recall that Japanese loanword abbreviations provide another example of the disyllabicity syndrome.) And, as with reduplication, sometimes the size restrictions are tied to the stress foot and sometimes they are not. In many Romance languages, nickname truncations are anchored to the stressed syllable, as Alber and Arndt-Lappe () show. This is illustrated with an Italian truncation pattern: ()

Italian: disyllabic stress-anchored truncations (Alber and Arndt-Lappe : )
Truncation   Base
Césca        Francésca
Bérto        Robérto
Méni         Doménico
Níba         Anníbale

In contrast, as Itô () has argued, the disyllabicity restriction on Japanese loanword truncations () cannot be tied to stress feet, as there is no lexical stress in that language. There is, then, a lack of agreement about how best to define the size restrictors for truncations, just as there is for reduplication: do they follow from stress footing principles (Alber and Arndt-Lappe ) or from prosodic branching (Itô ; Downing )? Where we find agreement is that a TETU constraint ranking defines the fixed shapes which characterize templatic truncation (Benua ; Downing ; Alber and Arndt-Lappe ):

M-IO >> size restrictor constraints >> M-BT

The analysis of the disyllabicity restriction on Japanese loanword truncations or Italian disyllabic nicknames would, then, be essentially parallel to the analyses of the Diyari disyllabic reduplicative RED discussed in §.., replacing MAX-BR constraints with MAX-BT. (The reader can choose which of the three types of size restrictor constraints s/he prefers.)

12 Only templatic truncations—that is, those that have a fixed output shape—are discussed here. See Bye and Svenonius () and Trommer () for recent OT treatments of subtractive truncations: i.e. those which do not target a fixed output shape. And see Alber and Arndt-Lappe () for a thoughtful review of OT approaches to both templatic and subtractive truncation.


... Partial reduplication in theories without Base-RED correspondence

As mentioned in §..., above, not all work within OT on reduplication (or truncation) assumes construction-specific correspondence constraints. Such work must, then, find alternative means of optimizing size restrictions on prosodic morphology constructions. Two main alternative approaches are sketched here. In Inkelas and Zoll's () MDT approach to reduplication, as in BRCT OT, truncation and reduplication are seen as related processes. As in mainstream OT, the fixed shape of a partial RED is defined by ranking prosodic constraints above FAITHFULNESS. However, in MDT there are no construction-specific correspondence constraints, so another means must be found to motivate differences in RED vs. Base outputs. In MDT, all morphological constructions are associated with a co-phonology: that is, a construction-specific phonological grammar or constraint ranking.13 Recall from §... that reduplication is defined as syntactic-semantic self-compounding in MDT. Each half of the self-compound has its own input and is associated with its own co-phonology. In the case of total reduplication, each half would have the same phonological input, and the same co-phonology would account for the outputs. In the case of partial reduplication (and other phonologically motivated deviations from total reduplication), RED is associated with a different co-phonology from the Base, and size restrictions are accounted for by a truncation co-phonology. Inkelas and Zoll's (: ) analysis of Diyari reduplication, for example, associates a size restriction (formalized as a PW=FT homology) with the co-phonology of the reduplicative morpheme (RED). As usual, the size restriction is optimal when the relevant MARKEDNESS constraint outranks the relevant FAITHFULNESS constraint (IO-FAITH in MDT):

() RED co-phonology: PW=FT >> IO-FAITH, e.g. /tʲilparku/ → [tʲilpa]

The Base co-phonology would have the opposite ranking of PW=FT and IO-FAITH. Non-reduplicative truncation targeting the same disyllabic output shape—for example, the Japanese loanwords and Italian nicknames discussed in §...—is accounted for by assigning the same co-phonology to a morphological Base. As Inkelas and Zoll () argue, it is an advantage of the MDT approach that the exact same co-phonology (size restrictor >> IO-FAITH) accounts for the same size restrictions in different construction types. In BRCT OT, in contrast, while the size restrictor constraints generalize across constructions, the correspondence constraints they outrank are construction-type specific. The other main alternative approach to defining size restrictions without resort to TETU simply carries forward into OT the representation of reduplicative morphemes that was proposed in McCarthy and Prince () and adopted in subsequent pre-OT work on prosodic morphology. In this approach, partial reduplication is defined as prosodic affixation. That is, the fixed shape is a lexical item, represented in the input of the prosodic morpheme as an empty Foot or other prosodic constituent (such as an empty syllable or

13 See Inkelas and Zoll (, ) for succinct introductions to co-phonologies as well as for comparison with other approaches, within OT, to morphologically conditioned phonology.



mora). In order for the empty prosodic constituent to be phonologically realized, it must acquire phonological content. Constraint ranking makes reduplication the optimal way to provide the empty constituent with content. Various instantiations of this basic approach within OT can be found in Golston (), Pulleyblank (), Saba Kirchner (, ), Trommer (), Bermúdez-Otero (), and Bye and Svenonius (). For the sake of comparison with the other approaches discussed in this section, the prosodic affixation approach is applied to Diyari reduplication, adopting Pulleyblank's () variant of prosodic affixation for the sake of concreteness. The input to the reduplicative construction would contain the empty prosodic constituent, a Foot (φ): e.g. /φ=ŋandawalka/. The empty constituent acquires content to satisfy highly ranked HS ('A prosodic category must dominate featural specification'). Duplicating the input base is the optimal way to acquire content if the FAITHFULNESS constraint INTEGRITY ('No element of the input has multiple correspondents in the output') is low-ranked. As usual, the disyllabic minimum/maximum is optimal if the size restrictor constraint outranks the relevant FAITHFULNESS constraint. The analysis is exemplified in the following tableau, where constraint violations count syllables:

() Diyari reduplication as prosodic affixation (adapted Pulleyblank )

/φ = ŋandawalka/                  HS | DEP-IO | FT-BIN | INTEGRITY
☞ a. ŋánda = ŋándawàlka              |        |        | **
   b. ŋándawàlka = ŋándawàlka        |        | **!    |
   c. yiyi = ŋandawalka              | *!*    |        |
   d. ø = ŋandawalka              *! |        |        |

As we see in this tableau, candidate (a) is optimal, as RED satisfies all of the constraints except lowest-ranked INTEGRITY. Each of the other candidates fatally violates one of the higher-ranked constraints. Candidate (b), with total reduplication, violates FT-BIN. Candidate (c), with epenthetic material filling out the empty input Foot, violates DEP-IO (the constraint penalizing epenthesis). In candidate (d), RED has no phonological content, in violation of HS. While this approach can account for some cases of partial reduplication, it is unclear how total reduplication is accounted for in much of the work that assumes a prosodic affixation analysis (e.g. Golston ; Bermúdez-Otero ; Bye and Svenonius ).14 Furthermore, most of the work which adopts prosodic affixation for reduplication does not discuss other types of prosodic morphology (though see Golston  and Bye and Svenonius ). So it is unclear how this formal approach to fixed shapes in reduplication could be extended to constructions such as truncations, where a Foot size restriction is less easily formalized through affixation. Finally, it is unclear how a disyllabic size restriction would be enforced in languages which do not have disyllabic stress Feet, if it relies on affixing a Foot.15

14 Recall from §... that Pulleyblank () and Saba Kirchner () assume a morphosyntactic doubling style analysis for total reduplication.
15 Additional arguments for preferring constraint interaction to prosodic affixation as a general approach to defining the fixed shapes of reduplication and other prosodic morphology are developed at length in McCarthy and Prince (a, a, b).


A brief critical comparison of OT approaches with and without construction-specific correspondence constraints is taken up in §.. on unresolved issues.

.. Violations of proper precedence and contiguity

In this section, we turn to the OT analysis of two types of prosodic morphology, root-and-pattern morphology and infixation, which have in common that they violate the expectations concerning morpheme ordering defined in the concatenative ideal (a, b), repeated below:16

() The concatenative ideal (Bye and Svenonius : –)
a. Proper precedence: Morphemes are linearly ordered (i.e. no overlapping).
b. Contiguity: Morphemes are contiguous (i.e. no discontinuity).

These violations arise because morphemes are intercalated with their bases in both of these morphological constructions, as we saw in () and (), above.

... Semitic root-and-pattern morphology

Recall from the data in (), repeated at () below for convenience, that in a Modern Hebrew verb form like gadal 'he grew', the lexical meaning 'grow' is contributed by the root consonants g d l. The vowels in the different binyanim (verb conjugations) contribute derivational information as well as tense/aspect information. That is, morphemes are intercalated in verb stems, in violation of contiguity:

Modern Hebrew (Ussishkin : , figure ())
Binyan    Hebrew verb    Gloss
paʕal     gadal          'he grew' (intransitive)
piʕel     gidel          'he raised'
puʕal     gudal          'he was raised'
hifʕil    higdil         'he enlarged'
hufʕal    hugdal         'he was enlarged'

OT (McCarthy and Prince a) recognizes contiguity as a 'concatenative ideal' and defines it as a type of IO FAITHFULNESS constraint (CONTIGUITY). As we have seen, patterns of prosodic morphology are to be optimized by ranking some FAITHFULNESS constraints below other constraints, and the same holds true in this case. In Ussishkin's (, ) analysis of the Modern Hebrew binyanim, fully formed words form the input for verb conjugations. As a result, the input vowels of the basic binyan are overwritten by the vowels defining

16 See Albright () for an alternative review of OT treatments of non-concatenative morpheme position.



the derived binyan, [i, e]—another violation of the concatenative ideal (d). Ranking REALIZE MORPHEME above IDENT-IO optimizes overwriting:17

() REALIZE MORPHEME (Akinlabi ): Every input morpheme must have [the appropriate] output realization.

As usual, ranking the size restrictor constraints, PROSODIC BRANCHING () and BINARITY (), above FAITHFULNESS constraints optimizes the disyllabic minimality and maximality restriction on the construction. The tableau in () exemplifies the analysis:

() Disyllabic derived binyan in Modern Hebrew (adapted Downing : )

Input: gadal, /i, e/    PROS-BRANCH | REALIZE-M | CONTIG | IDENT-IO | BINARITY
☞ a. gidel                          |           | *      | *        |
   b. gadal                         | *!        |        |          |
   c. gadal-ie                      |           |        |          | *!

In the optimal candidate (a), CONTIGUITY and IDENT-IO are violated due to partial overwriting of the input verb form by the binyan-specific vocalism. However, candidate (b), which preserves the input vocalism, is non-optimal, as it violates REALIZE MORPHEME, a higher-ranked constraint. Suffixing the binyan vocalism, as in candidate (c), satisfies CONTIGUITY and IDENT-IO but violates BINARITY, the constraint enforcing a maximally disyllabic output. In short, in Semitic root-and-pattern morphology, CONTIGUITY is violated because this is the optimal way to realize morphologically specified vocalism in a way that respects the prosodic requirements on the output verb form.
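The overwriting analysis can be made concrete with a toy procedure: keep the base verb's consonants and replace its vocalism, left to right, with the binyan's vowels. This Python sketch is an illustrative assumption about representations, not Ussishkin's formalism.

```python
VOWELS = set("aeiou")

def overwrite(base, binyan_vowels):
    """Melodic overwriting: replace the base verb's vowels, left to right,
    with the binyan-specific vocalism. This violates CONTIGUITY and
    IDENT-IO but satisfies REALIZE MORPHEME, and it keeps the stem
    disyllabic. Prefixal binyanim like hifʕil would need an extra step
    (prefixation plus deletion of the first stem vowel), omitted here."""
    vowels = iter(binyan_vowels)
    return "".join(next(vowels) if ch in VOWELS else ch for ch in base)

assert overwrite("gadal", "ie") == "gidel"  # pi'el 'he raised'
assert overwrite("gadal", "ua") == "gudal"  # pu'al 'he was raised'
```

Note that simple suffixation of the vocalism (the analogue of candidate (c), gadal-ie) would satisfy CONTIGUITY but break disyllabicity, which is why overwriting is the optimal realization under the ranking above.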

... Infixation

Infixing also violates CONTIGUITY, as we saw in presenting the Samoan infixing reduplication pattern illustrated in (), above. Two general approaches to infixing are found in the OT literature, thoughtfully reviewed in Yu (). In one approach, infixation involves what Yu terms prosodic readjustment: that is, a prosodically motivated change in an affix's basic position. McCarthy and Prince (a, b) analyse the well-known Austronesian -um- infixation pattern in this way. As we can see in the data from Tagalog and Chamorro in () below, -um- is prefixed to a vowel-initial base but infixed inside a consonant-initial one:

17 It is a matter of current controversy in Semitic morphology whether verb roots consist just of consonants, or whether there is a basic verb measure that the others are derived from, as Ussishkin () argues. It is outside the scope of this chapter to discuss this controversy in detail. The interested reader should consult work such as Aronoff () and Ussishkin () for thoughtful discussion. And see recent work like Tucker () for an OT analysis of Iraqi Arabic verb measures which assumes that input roots consist only of consonants. The differences in these two approaches are, however, not important for the discussion of contiguity here.


() (a) Tagalog -um- infixation (McCarthy and Prince b: )
um-aral                   'teach'
s-um-ulat    *um-sulat    'write'
gr-um-adwet  *um-gradwet  'graduate'

(b) Chamorro verbalizer, actor focus (Yu : )
epanglo  'hunt crabs'     um-epanglo         'to look for crabs'
gupu     'to fly'         g-um-upu i paharu  'the bird flew'
tristi   'sad'            trumisti           'becomes sad'
planta   'set the table'  plumanta           'sets (table)' (nom. wh-agreement form)

Formally, McCarthy and Prince (b: ) propose that -um- is defined as a prefix—its position with vowel-initial stems—by the following morpheme-specific ALIGNMENT constraint (recall from §.. that ALIGNMENT constraints require the edges of two different constituents—grammatical and/or prosodic—to coincide):

() ALIGN-um: Align([um]Af, L; Stem, L)

They account for infixing before consonant-initial stems by ranking this ALIGNMENT constraint below NOCODA ('Syllables are open'). This ranking optimizes infixing when the result would minimize NOCODA violations. Infixation thus leads to a prosodically better-formed output. The analysis is exemplified in the tableau in ():

() -um- infixation and prefixation in Chamorro (adapted McCarthy and Prince b: )

Input: um, epanglo    NOCODA | ALIGN-L(um)
☞ a. umepanglo               |
   b. epumanglo              | *!*
Input: um, gupu
☞ c. gumupu                  | *
   d. umgupu            *!   |
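The prefix-vs-infix trade-off can be computed directly: generate every landing site for -um- and pick the one that best satisfies NOCODA, breaking ties by closeness to the left edge (ALIGN-L(um), counted as distance from the edge). In this Python sketch the naive syllabification and the onset-cluster inventory are illustrative assumptions, not part of the published analysis.

```python
# Hypothetical inventory of licit complex onsets, for toy syllabification.
ONSET_CLUSTERS = {"gr", "gl", "br", "bl", "pr", "pl", "tr", "dr", "kr", "kl"}

def coda_count(form):
    """NOCODA violations under a naive parse: a consonant is a coda if it
    is word-final, or precedes a consonant with which it cannot form a
    licit complex onset."""
    cons = lambda ch: ch not in "aeiou"
    n = len(form)
    return sum(
        1 for i, ch in enumerate(form)
        if cons(ch) and (i == n - 1
                         or (cons(form[i + 1])
                             and form[i:i + 2] not in ONSET_CLUSTERS))
    )

def place_um(stem):
    """Evaluate every landing site for -um-; NOCODA dominates ALIGN-L(um),
    whose violations are counted as distance from the left stem edge."""
    candidates = [(stem[:i] + "um" + stem[i:], i) for i in range(len(stem) + 1)]
    return min(candidates, key=lambda c: (coda_count(c[0]), c[1]))[0]

assert place_um("epanglo") == "umepanglo"  # V-initial stem: prefixation
assert place_um("gupu") == "gumupu"        # C-initial stem: infixation
assert place_um("gradwet") == "grumadwet"  # infixed inside the cluster
```

The tuple key mirrors strict domination: candidates are first compared on NOCODA violations, and ALIGN-L(um) decides only among the survivors, exactly as in the tableau above.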

The second main approach to infixation identified by Yu () is the prosodic subcategorization approach. If we turn back to the Samoan data in (), above, we see that the reduplicative morpheme can be infixed: for example, alófa 'love' → a:lolófa. The generalization that predicts the position of the infix is that it occurs immediately before the stressed syllable. This can be formally accounted for by proposing that the reduplicative morpheme subcategorizes for a prosodic constituent, a stress Foot, rather than for a morphological constituent, potentially leading to infixation.18 As work since McCarthy and Prince (b) shows, prosodic subcategorization can also be straightforwardly modelled in OT using ALIGNMENT constraints. In Yu's (: ) approach, infixation is accounted for by constraints

18 Earlier proposals accounting for infixing (and exfixing) in terms of affixation to a prosodic constituent can be found in Broselow and McCarthy (/), McCarthy and Prince (, b), Booij and Lieber (), Downing (a, b, ), and Inkelas and Zoll ().


aligning the left or right edge of an Affix with the left or right edge of some prosodic pivot. The ALIGNMENT constraint in () would account for the Samoan infixing reduplication pattern; the stress Foot is the prosodic pivot:

() ALIGN-TO-FOOT constraint (adapted McCarthy and Prince b: )
Align(R, RED; L, Stress foot)

The analysis is exemplified in the tableau in (); the reduplicative morpheme is in boldface:

() Samoan infixation

Input: RED-Af, alófa   MAX-IO | M-S | ALIGN-TO-FOOT | CONTIG | MAX-BR
☞ a. a:lolófa                 |     |               | *      | *
   b. lolófa             *!   |     |               |        |
   c. a:a:lófa                |     | *!            |        | *
   d. a:lofalófa              | *!  |               | *      |

Candidate (a) is optimal, even though it violates CONTIGUITY, as the prosodic alignment constraint, ALIGN-TO-FOOT, is higher-ranked. Candidates (b) and (c) satisfy CONTIGUITY, but at the expense of violating higher-ranked constraints. Candidate (d) illustrates the role of M-S () in optimizing a monosyllabic RED. Yu () argues that a unified account of infixation can be provided by consistently analysing it as affixation to one of a small set of prosodic pivots. For example, Yu (: ) shows that the -um- infixation pattern found in Tagalog and Chamorro can be reanalysed as affixation to the first vowel of the root, resulting in infixation when the root does not begin with a vowel.19 The ALIGNMENT constraint optimizing the pattern is given in (); the analysis is exemplified in ():

() ALIGN-TO-FIRST-VOWEL: Align(R, um; L, First Root Vowel)

() Tagalog -um- infixation (Yu )

Input: um, epanglo    ALIGN-TO-FIRST-VOWEL | CONTIG
☞ a. umepanglo                             |
   b. epangloum                  *!        |
Input: um, gupu
☞ c. gumupu                                | *
   d. umgupu                     *!        |

19 See Yu () for a thorough motivation of the first root vowel as a common prosodic pivot for infixation and for detailed arguments against McCarthy and Prince’s (a) proposal that infixation always leads to a prosodically well-formed output.


As shown by candidate (a), when the root begins with a vowel, then both C and the prosodic alignment constraint, A  F V, can be satisfied. However, when the root begins with a consonant (or consonant sequence), the infixing candidate, (c), is optimal, as it satisfies A  F V at the expense of lower-ranked C. It is important to notice in the tableaux in §... that the comma separating the affixes (e.g. um in tableau ()) from their Bases in the Input indicates that the morphemes are unordered in the morphology. That is, they do not have a morpho-syntactically determined position as prefixes or suffixes which is then manipulated by the phonology, yielding partial overwriting (in the case of root-and-pattern vocalism) or infixing. Instead, constraint interaction alone accounts for the ordering of these non-concatenative morphemes. (See Yu  for detailed discussion of this distinction.) These examples highlight that OT is non-modular in the sense that constraints (in particular, A constraints) can refer to both phonological and morphological entities, allowing both phonological and morphological principles to play a parallel role in determining the realization of morphemes. To sum up, OT analyses morphological constructions which violate the concatenative ideal of contiguity by formalizing C as a F constraint. Like other constraints, it can be outranked. In the case of root-and-pattern morphology, C is outranked by constraints requiring input morphemic vocalism to be realized in the output, within a maximally disyllabic output stem. In the case of infixation, C is outranked by prosodic alignment constraints potentially defining a morpheme-internal position as the optimal locus of affixation.

. E

In this chapter, we have seen that the central constraint types of OT allow the theory to formalize in a principled way the deviations from the principles of the 'concatenative ideal' listed in () that characterize prosodic morphology. This gave a strong impetus to research on prosodic morphology in the years following McCarthy and Prince (a), as linguists tested new proposals and possibilities made available by the theory. §§.. and .. first highlight just a few of the proposals and then summarize some remaining controversies surrounding them.

.. Contributions of OT to the analysis of prosodic morphology

One important contribution of BRCT OT is that it provided a unified approach to the forms of prosodic morphology which violate segmental autonomy, by defining segmental dependence in terms of construction-specific correspondence constraints. (See §.., above.) More interesting, perhaps, for morphologists, these specific correspondence constraints and the TETU constraint ranking provided a formal account of two important generalizations about reduplications (and truncations—although truncations have traditionally received far less attention). First, work since at least Steriade () observes



that reduplicative morphemes often have less marked structure than their bases. This asymmetric markedness relationship is straightforwardly modelled by the TETU constraint ranking: FAITH-IO >> MARKEDNESS >> FAITH-BR. The TETU ranking also allows one to straightforwardly account for Hyman's (a) and Niepokuj's () proposed pathway of diachronic development of reduplication, from total reduplication to partial reduplication to affixal (strongly reduced) reduplication. These reductions fall out from a factorial typology in which an increasing number of MARKEDNESS constraints come to outrank FAITH-BR.

A second contribution of BRCT OT is that it provides a new explanation for the – syllable syndrome which characterizes prosodic morphemes by claiming that prosodic templates are generalized. That is, templates are not construction-specific; rather, they should fall out from two principles. First, particular morphological categories (Affix, Root, Stem) correlate with particular prosodic constituents. For example, we saw that in many languages we find a disyllabic Stem syndrome. The templates are also generalized in the sense that they should follow from prosodic principles that hold for all morphemes or words of the language, such as foot parsing constraints or other prosodic well-formedness constraints, ranked to allow TETU: FAITH-IO >> size restrictors >> FAITH-BR. In BRCT OT there are, then, no prosodic templates per se. Fixed size restrictions are proposed to emerge purely from constraint interaction and the morphological category assigned to the prosodic morpheme.

Concerted interest in morphological aspects of reduplication stimulated work concluding that not all segmental copying is motivated by morphological reduplication. One must also recognize copying which has a purely phonological motivation (see e.g. Orie ; Rose and Walker ; Inkelas and Zoll ; Inkelas ), as well as reduplication at the syntactic level (Saba Kirchner ).
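The factorial typology invoked for the diachronic pathway can be simulated with a toy grammar: as more size (markedness) constraints come to dominate BR-faithfulness, the optimal reduplicant shrinks from total to foot-sized to syllable-sized. The candidate set and violation counts in this Python sketch are illustrative assumptions, not the chapter's formalism.

```python
# Candidate reduplicants for a four-syllable base, tracked as syllable counts.
BASE = 4
CANDIDATES = {"total": 4, "foot": 2, "syllable": 1}

def max_br(n):    return BASE - n       # FAITH-BR: base syllables left uncopied
def size_ft(n):   return max(0, n - 2)  # markedness: RED at most a foot
def size_syll(n): return max(0, n - 1)  # markedness: RED at most a syllable

def winner(ranking):
    """Strict domination: lexicographic comparison of violation profiles."""
    return min(CANDIDATES, key=lambda c: tuple(k(CANDIDATES[c]) for k in ranking))

# The diachronic pathway as progressive re-ranking of markedness over FAITH-BR:
assert winner([max_br, size_ft, size_syll]) == "total"     # faithful copying
assert winner([size_ft, max_br, size_syll]) == "foot"      # partial (foot) RED
assert winner([size_ft, size_syll, max_br]) == "syllable"  # affixal, reduced RED
```

Each re-ranking step demotes BR-faithfulness below one more markedness constraint, and the winning reduplicant shrinks accordingly, which is the factorial-typology point made in the text.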
The non-modular, non-linear, non-item-based aspects of OT mentioned in this chapter, like emergent templates and prosodic alignment, define OT as a type of a-morphous approach to morphology in general. OT brought new attention to the possibilities of non-syntax-based approaches to word formation at the same time that it brought new attention to the types of prosodic morphology that OT handles best.

.. Unresolved issues

Each of the contributions mentioned in the previous section has been a source of controversy in the OT literature. As already noted in presenting the analyses of reduplication and truncation, the construction-specific correspondence constraints which are crucial to the TETU account of size restrictions have been rejected by a number of OT researchers. Because they duplicate F-IO constraints for specific construction types, they appear redundant (Pulleyblank ). Furthermore, correspondence is a two-way street, so RED should also be able to influence the form of the Base. McCarthy and Prince (a) discuss a number of cases showing that identity effects can flow from the RED to the Base, and subsequent work by Downing () and Gouskova () provides more examples. However, as Inkelas and Zoll () show in some detail, at least some of McCarthy and Prince’s examples can be reanalysed, leaving only a handful of strong cases, whereas the theory predicts many more. (See Inkelas  for recent discussion.) While this is a valid
point, it remains a weakness of theories which reject BRCT that they so far fail to provide an alternative account both of the TETU effects which are characteristic of reduplication and truncation and of the historical path of development from total to partial reduplication.

It remains controversial whether the fixed shapes that define prosodic morphology constructions follow from general prosodic principles, as GTT proposes. For example, Gouskova () shows that the CV reduplicative template in Tonkawa is morpheme-specific: it does not fall out from the general phonological or morphological markedness principles of the language. And, as Inkelas and Zoll () and Downing () show, it is not always a straightforward matter to determine a RED’s morphological category, a necessary first step in predicting its prosodic shape. Further, in languages with a rich repertoire of prosodic morphology, the simple Stem, Root, Affix distinction does not provide enough information to predict the shape of a particular prosodic morpheme. We can see this by examining the Modern Hebrew binyanim in (): all the binyanim are disyllabic, but this is not enough information to define the output realization.20 In short, even though the correlations between morpheme type and prosodic shape seem plausible in a general way, they are too general to have adequate predictive power.

Finally, the non-modular, non-concatenative, non-derivational aspects of the OT model are not easily compatible with syntactic models of morphology. This has not gone unnoticed. As pointed out in §..., above, a number of researchers in OT cling to the pre-OT style representation of partial reduplication as the affixation of empty prosodic constituents, as this representation brings the analysis of reduplication closer to the concatenative ideal. Bye and Svenonius () represents a particularly thoughtful and ambitious attempt to develop a concatenative approach to prosodic phonology in OT.
It remains to be seen, however, whether attempts to make OT more compatible with syntactic approaches to morphology will achieve the same empirical coverage of prosodic morphological phenomena that BRCT OT has.

. Conclusion

..................................................................................................................................

Prosodic morphology has been of enduring interest to morphologists and phonologists alike. For phonologists, prosodic morphology is an important research area because, by definition (McCarthy and Prince a), it is a domain where phonological principles outrank morphological ones in determining morpheme realization. Thus it provides a domain for testing and refining the cross-linguistic validity of prosodic principles. For morphologists, prosodic morphology is an important challenge because, by definition, it consists of constructions which do not conform to the concatenative ideal.

OT has brought a fresh perspective to the analysis of prosodic morphology by providing a formalism which allows prosodic constraints to interact with concatenative axioms in a principled way. The formalism also allows for an a-morphous, non-modular interaction of prosody and morphology in determining morpheme position and shape. OT has given a central role to morphology compared to its predecessor (McCarthy and Prince ), due to its attention to the morphological category of prosodic morphemes and to the correlations between morphological category and prosodic constituent type. For all these reasons, OT should continue to provoke debate about the roles of phonology and morphology in accounting for prosodic morphology for some time to come.

20 See Bermúdez-Otero () and Downing () for detailed discussion of the problems associated with implementing a morpheme–prosody correlation.

Further reading

Some of the works listed in the references are highlighted here for readers interested in pursuing in more depth the issues developed in this chapter.

Basic OT readings on Prosodic Morphology: McCarthy (); McCarthy and Prince (, a, b, a, b, a, b, )
Reduplication: Downing (); Inkelas (); Inkelas and Zoll (); Urbanczyk (, )
Truncation: Alber and Arndt-Lappe (); Downing ()
Morpheme position: Albright ()
Root-and-Pattern morphology: Ussishkin ()
Infixation: Yu ()

......................................................................................................................

Lexical-Functional Grammar and Head-Driven Phrase Structure Grammar

......................................................................................................................

. Background

..................................................................................................................................

Lexical-Functional Grammar (LFG) (Bresnan a, a; Dalrymple ; Bresnan et al. ) and Head-driven Phrase Structure Grammar (HPSG) (Pollard and Sag ; Sag et al. ; Kathol et al. ; Müller a) are both lexicalist, non-transformational, constraint-based grammatical frameworks. While they differ in many respects—some of which are detailed in this chapter—they share a number of fundamental principles relevant to morphological theory and analysis, which guide the overall architecture of the grammar. The two frameworks also share a common commitment to being fully explicit and implementable, with strong links to computational implementations. (For HPSG see Bender et al. ; Flickinger ; Müller b; inter alia. For LFG, see especially Halvorsen and Kaplan ; Dalrymple et al. ; Butt et al. ; Cahill et al. ; Crouch et al. , among many others.)

First, both frameworks incorporate a strong lexicalist perspective assuming the separation of syntax and morphology, such that the internal structure of words is opaque to the mechanisms of syntax (Pollard and Sag ; Müller a; Bresnan a). Syntax and morphology are distinct components of the grammar, with only the output of the morphology/lexicon relevant to the syntactic component. From this it follows that both frameworks eschew the syntacticization of inflectional (or derivational) morphological
processes. Inflectional morphs, therefore, are not syntactically independent (as they are in some other frameworks). Rather, they combine into fully inflected words exclusively in the morphological component, and their role in the syntax is limited to the information they contribute.

These two frameworks also share the property of being constraint-based—meaning that descriptions of linguistic structures essentially constrain the models of linguistic objects—and non-derivational, in the sense that the different dimensions of linguistic structure are co-present and do not stand in a derivational relationship to each other. This takes a different form in the two theories, because the overall architecture of levels looks very different; the details will be made clear in the introductory sections for each framework.

LFG and HPSG are essentially syntactic frameworks and as such have not traditionally assumed any particular theory of morphology. In fact, given the separation of the morphological and syntactic components, both frameworks are essentially compatible with any (strong lexicalist) theoretical view of the morphological component. However, researchers within the respective frameworks have become interested in morphological questions, primarily those that relate to the morphology–syntax interface, and so we will survey some of this work in this chapter. Given the constraints of space, we have focused on a few of the key themes that arise in morphologically related research in these two theories, but have made no attempt to be comprehensive. Different themes emerge in our discussion of each theoretical framework, often reflecting the different foci of the relevant researchers.1

.. Overview of LFG

Lexical-Functional Grammar (LFG) (Bresnan a, a; Kaplan and Bresnan ; Dalrymple et al. ; Dalrymple ; Falk ; Bresnan et al. ) is a non-derivational, lexicalist, constraint-based theory with co-present parallel structures, linked by principles of correspondence. Each of the structures of LFG has a distinct formal character and models a different aspect of the structure of language. The primary syntactic structures are c-structure (constituent structure) and f-structure (functional structure). The former models precedence and phrasal dominance relations in the familiar terms of a phrase structure tree; the latter models syntactic predicate–argument relations in terms of grammatical functions.

As discussed above, LFG is primarily a syntactic framework and can therefore interface with any theory of morphology that assumes the principle of lexical integrity (stated in LFG terms in ()). Thus, the c-structure of LFG takes fully inflected words as its terminal nodes, but does not impose any particular constraints on how these words have been composed in the morphological component. This flexibility has allowed

1 Bonami and Crysmann () also provides an overview of approaches to morphology within these two theoretical frameworks, and nicely complements this chapter by taking a slightly different focus (and therefore dealing with some different phenomena).
different researchers interested in the morphology–syntax interface to respond to trends and developments in the morphological literature and adapt their preferred morphological perspective to the LFG architecture. In this chapter we will present some of the morphological questions that have been addressed by researchers working within the broader LFG framework, but it is important to remember that none of these morphological perspectives is dictated by the framework itself. Furthermore, the bulk of such work relates to the interface between the morphology and the syntax (see, e.g., Kaplan and Butt ; Sadler and Spencer ; Andrews ; Dalrymple ), rather than morphological theory proper. ()

Lexical integrity (Bresnan a: ): Morphologically complete words are leaves of the c-structure tree and each leaf corresponds to one and only one c-structure node.

Information is mapped to the f-structure from the nodes of the c-structure, including the individual words which form the terminal nodes. Formally, f-structures are finite functions from attributes to values, which may themselves be complex (i.e. f-structures), and they are conventionally represented as attribute-value matrices. Equations (known as functional (f-)descriptions) associated with lexical items and with nodes of the c-structure specify properties of f-structures: the mapping function or projection ϕ has nodes of the c-structure as its domain and f-structures as its range (the inverse ϕ⁻¹ maps f-structures to c-structure nodes and is not a function). The notation ↑ refers to the f-structure associated with the mother of the current node (i.e. it denotes the mother’s f-structure), while ↓ refers to the f-structure of the node to which it is annotated. Feature assertions are satisfied by f-structures which contain attribute–value pairs corresponding to these assertions. Of particular importance is the smallest f-structure which satisfies a collection of constraints or feature assertions, known as the minimal model. The f-structure of an utterance is the minimal model or solution satisfying the constraints introduced by the words and phrases in the utterance.

The formal correspondence between c-structure and f-structure is many-to-one: to each c-structure node there is assigned a unique (but not necessarily distinct) (minimal) f-structure. Nevertheless, individual c-structure elements, including words, may specify complex f-structures. For example, sees in (), which will associate with a single node V in c-structure, defines the f-structure shown in ().2

sees, V
(↑ pred) = ‘see⟨subj, obj⟩’
(↑ tense) = pres
(↑ subj pers) = 3
(↑ subj num) = sg

2 In LFG lexical entries and f-structures, pred is the feature for the lexically specified predicate (here ‘see’) and its subcategorized arguments.


     ()

[ pred   ‘see⟨subj, obj⟩’
  tense  pres
  subj   [ pers  3
           num   sg ] ]

An important facet of LFG is its commitment to lexicalism. The Lexical Integrity Principle () (see also Simpson ; Bresnan and Mchombo ; Mohanan  and references therein) distinguishes the morphological (lexical) and syntactic components as being subject to different principles of composition. Words are constructed in the morphology, while c-structure and f-structure form the core of the syntactic component. This means that the input to these syntactic levels—for example the terminal elements of c-structure trees—consists of fully inflected words, and that syntactic processes cannot manipulate the internal morphological structure of these items.

Crucially, however, this does not rule out the possibility that both morphological and syntactic constituents may contribute information to the f-structure (e.g. Simpson , ; Bresnan and Mchombo , ; Bresnan a). In the example above, we see that morphological information associated with the verb—in this case the pers and num features—is unified directly with the f-structure associated with the subj(ect), as is the relevant information provided by the rest of the c-structure (i.e. via the subject NP). In this way, the information contributed by the morphology is integrated with the syntactic component while maintaining lexical integrity, since the nodes of the c-structure contain only morphologically complete words (e.g. sees).

Given the flexibility of the LFG architecture, it is not necessary to postulate otherwise unmotivated c-structure nodes in morphologically rich languages where the morphology directly encodes much f-structure or relational information. Indeed, the Principle of Economy of Expression states that all syntactic nodes are optional unless required for the satisfaction of semantic expressivity or other independent principles (Bresnan a).
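The way the verb’s morphological features and the subject NP’s contribution combine into a single f-structure can be sketched in miniature. The following Python fragment treats f-structures as nested dictionaries; the simplified unification logic and all names are illustrative assumptions, not the LFG formalism itself.

```python
# Toy unification of f-structure contributions (illustrative names only).

def unify(f, g):
    # Merge two f-structures; a clash between atomic values means failure.
    out = dict(f)
    for attr, val in g.items():
        if attr in out and isinstance(out[attr], dict) and isinstance(val, dict):
            out[attr] = unify(out[attr], val)
        elif attr in out and out[attr] != val:
            raise ValueError(f"clash on {attr}: {out[attr]} vs {val}")
        else:
            out[attr] = val
    return out

# What 'sees' contributes from its lexical entry ...
sees = {"pred": "see<subj,obj>", "tense": "pres",
        "subj": {"pers": 3, "num": "sg"}}
# ... and what a subject NP contributes via the annotation (↑ subj) = ↓.
subject_np = {"subj": {"pred": "kim"}}

print(unify(sees, subject_np))
```

The result is the minimal model satisfying both sets of constraints: the subject’s pred supplied by the syntax sits alongside the pers and num features supplied by the verb’s morphology, while lexical integrity is untouched because only whole words introduce constraints.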
These assumptions, combined with the separation of c-structure from f-structure, make it possible to represent the fact that different languages may express the same grammatical properties in very different ways. Thus, we might find two languages in which grammatical relations are encoded syntactically in one (e.g. via the syntactic configuration of overt NP/DPs) and morphologically in the other (e.g. via pronominal agreement morphology on the verb). In LFG this difference between the two languages would be captured at c-structure, while the similarity in function is captured at f-structure (see, e.g., Bresnan a; and Nordlinger and Bresnan  for detailed discussion, and the Chicheŵa example in ()).

Work on the relative contributions of the morphology and the syntax to the f-structure has highlighted the interplay and competition between morphological and syntactic expression. Andrews () (also Andrews ) is an example of early work on this issue. In this work, Andrews proposes a notion of ‘morphological blocking’, whereby the existence of a more highly specified form in the lexicon precludes the use of a less highly specified form. For example, if the verbal paradigm includes an inflected form encoding
1st person singular subject, then this form will ‘block’ the use of an unmarked verb form combined with a 1st person singular subject pronoun, even though the semantic content of the two constructions is apparently the same. Thus, this principle accounts for the Ulster Irish examples provided in () (from Andrews : ):

a.  Chuirfinn isteach ar an phost sin.
    put.cond.1sg in on the job that
b. *Chuirfeadh mé isteach ar an phost sin.
    put.cond I in on the job that
    ‘I would apply for that job.’

The ungrammaticality of the analytic form in (b) follows from the presence of a synthetic form specifying the same information. The Principle of Morphological Blocking (see Andrews :  for precise wording) states that a lexical entry whose associated f-structure subsumes that of another lexical entry is blocked (presuming, of course, that both are compatible with the sentence more broadly). Thus the more general verb form chuirfeadh () is blocked in the (b) example, given that the paradigm also includes the more specific chuirfinn ().3 ()

[ pred   ‘cuir < ... >’
  tense  cond ]

()  [ pred   ‘cuir < ... >’
      tense  cond
      subj   [ pred  ‘pro’
               pers  1
               num   sg ] ]

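The blocking relation between the two f-structures above can be stated computationally as a subsumption check. The sketch below is a toy simplification of Andrews’s proposal (the helper names and dictionary encoding are invented for illustration): a form whose f-description subsumes another’s is blocked by the more specific one.

```python
# Toy subsumption test for morphological blocking (illustrative only).

def subsumes(general, specific):
    # general subsumes specific iff every attribute-value pair of general
    # is matched, recursively, inside specific.
    for attr, val in general.items():
        if attr not in specific:
            return False
        if isinstance(val, dict):
            if not (isinstance(specific[attr], dict)
                    and subsumes(val, specific[attr])):
                return False
        elif specific[attr] != val:
            return False
    return True

chuirfeadh = {"pred": "cuir<...>", "tense": "cond"}           # the general form
chuirfinn = {"pred": "cuir<...>", "tense": "cond",            # the specific form
             "subj": {"pred": "pro", "pers": 1, "num": "sg"}}

# chuirfeadh's f-structure subsumes chuirfinn's, so the analytic
# chuirfeadh + pronoun construction is blocked by synthetic chuirfinn.
print(subsumes(chuirfeadh, chuirfinn), subsumes(chuirfinn, chuirfeadh))
```

The asymmetry of the check mirrors the asymmetry of blocking: the specific form never blocks itself by the general one.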
The general theme that ‘morphology competes with syntax’ runs through much LFG work in the s and early s, especially that of Joan Bresnan (e.g. Bresnan , , a, b). It underlies one of the strengths of LFG, which allows it to deal easily with a wide range of morphosyntactic variation, from non-configurational languages, to head-marking languages, to highly syntacticized languages like English (see Nordlinger and Bresnan  for an overview).

For example, at its simplest level, the differentiation of the c-structure from the f-structure in LFG allows for the fact that the following two sentences have completely different surface expression and completely different c-structures, but essentially identical f-structures. Both examples mean ‘If I see him’; in

3 Following LFG convention, ‘pro’ is used as the value of the pred feature for pronouns in the f-structure.

Chicheŵa () this is expressed with a single verb, whereas in English () it is expressed with a whole phrase. Crucially, however, the relations encoded in the f-structure are the same.

()  ndi-ka-mú-ona
    subj-cond-obj-see
    ‘if I see him’

    c-structure:  S → VP → V′ → V (each node annotated ↑=↓)

    V:  ndi-ka-mú-ona
        (↑ pred) = ‘see〈subj,obj〉’
        (↑ mode) = cond
        (↑ subj pred) = ‘pro’
        (↑ subj pers) = 1
        (↑ subj num) = sg
        (↑ obj pred) = ‘pro’
        (↑ obj pers) = 3
        (↑ obj num) = sg
        (↑ obj nounclass) = 1a

    f-structure:

    [ mode  cond
      pred  ‘see〈subj,obj〉’
      subj  [ pred  ‘pro’
              pers  1
              num   sg ]
      obj   [ pred      ‘pro’
              pers      3
              num       sg
              nounclass 1a ] ]


 ()

     ‘if I see him’ CP ↑=↓ C′ ↑=↓

↑=↓

mode pred

C

IP

subj

if (↑ mode) = cond

(↑ subj) = ↓

↑=↓

NP

I′

obj

cond ‘see 〈subj, obj〉’ pred ‘pro’ pers 1 num sg pred pers num

‘pro’ 3 sg

I ↑=↓ (↑ pers) = 1 VP (↑ num) = sg (↑ pred) = ‘pro’ ↑=↓

(↑ obj) = ↓

V′

NP

↑=↓

him

V

(↑ pers) = 3 (↑ num) = sg (↑ pred) = ‘pro’

see (↑ pred) = ‘see〈subj, obj〉’

The flexibility afforded by the LFG architecture in this respect underlies much of the work we discuss in this chapter, including the treatments of multiple exponence, auxiliary selection, and case stacking in Australian languages.

.. Overview of HPSG

The foundational publications in HPSG are Pollard and Sag (, ). HPSG has clear origins in both Generalized Phrase Structure Grammar (GPSG) (Gazdar et al. ) and Categorial Grammar. There are a number of quite significant architectural differences
between the  (Pollard and Sag ) and the  (Pollard and Sag ) versions of HPSG.4 Since Pollard and Sag () there have been two further major developments.

The first of these is that some work in HPSG separates constituent structure from the representation of the surface order of forms, taking a non-concatenative approach to linear order. This radical separation of constituent and linear structure originates (within HPSG) with Reape (, , ), and similar early proposals are found in Kathol (, ) (for implementations of this approach see also Müller , ). The essence of such domain-based or linearization approaches is that daughters within a constituent can be ‘liberated’ for linearization among the daughters of dominating constituents. These ideas have also found a place in the treatment of morphological phenomena, to which we return briefly below (see also Kathol ; Crysmann ).

The second is an increasing convergence with the ideas of Berkeley Construction Grammar.5 The hallmark of such approaches is essentially the generalization of multiple inheritance type hierarchies from the lexicon to syntax, where constructions, or multidimensional collections of linguistic information, are expressed directly in the type system. There are in fact two major current variants of construction-based HPSG: constructional HPSG (Sag ; Ginzburg and Sag ) and Sign-Based Construction Grammar (SBCG) (Sag ). Both descend very clearly from the earlier HPSG of Pollard and Sag (, ) and share many features, including the fundamental modelling assumptions (feature structures model linguistic objects; attribute value matrices are descriptions of model objects), and both reflect a degree of contact and convergence with Construction Grammar. In HPSG (including SBCG), feature structures model linguistic objects, and attribute value matrices (AVMs) are used to describe them.
Unlike LFG, HPSG analyses are expressed in a typed feature structure formalism: feature structures are grouped into classes instantiating linguistic types. Types are organized into a multiple inheritance hierarchy specified in the signature of the grammar; the most general type of words or phrases is the sign. Such inheritance hierarchies are used in HPSG (including SBCG) to capture syntactic and (some) lexical generalizations. Feature structures must instantiate a maximal type (that is, be fully specified), although of course descriptions will typically make significant use of underspecification. The use of typing in HPSG grounds many well-formedness constraints directly in the ontology in a way that does not occur in LFG.

In (all versions of) HPSG (again, in contrast to LFG), one single data structure is used to model all dimensions of a linguistic object, leading to a highly structured representation. The details of the precise geometry have changed considerably over the years, but the following very schematic representations of word and phrase (based on Müller a) give the flavour, showing the parallel representation of phonological, syntactic, and semantic information. In the HPSG structures in this chapter, the italicized annotations at the top of the left brackets indicate the type of the feature description.

4 Although the later book generally supersedes the earlier, Pollard and Sag () contains an extensive discussion of lexical generalizations and hierarchies, which is of interest from a morphological and morphosyntactic point of view.
5 By Berkeley Construction Grammar we refer to the Construction Grammar approach associated most notably with Paul Kay and Chuck Fillmore and the Berkeley group (Fillmore et al. ; Kay ).


()

word
[ phon     〈 … 〉
  synsem   [ local     [ cat … , content … ]
             nonlocal  … ] ]

()  phrase
    [ phon         〈 the man 〉
      head-dtr     word [ phon    〈 man 〉
                          synsem  [ local     [ cat … , content … ]
                                    nonlocal  … ] ]
      non-hd-dtrs  〈 word [ phon 〈 the 〉, synsem … ] 〉 ]

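The organizing role of the type hierarchy can be loosely mimicked with multiple inheritance in an ordinary programming language. The classes below are only an analogy chosen for illustration (HPSG types are not classes, and all names are invented), but they show how constraints stated on a supertype are inherited by everything below it:

```python
# Loose analogy: an inheritance hierarchy with 'sign' at the top.

class Sign:                 # most general type of words and phrases
    pass

class Word(Sign):
    pass

class Phrase(Sign):
    pass

# Cross-classifying lexical dimensions, joined by multiple inheritance:
class Verb(Word):
    pass

class Auxiliary(Word):
    pass

class AuxVerb(Verb, Auxiliary):
    # Inherits whatever is constrained on Verb, Auxiliary, Word, and Sign.
    pass

has = AuxVerb()
print(isinstance(has, Verb), isinstance(has, Auxiliary), isinstance(has, Sign))
```

In the grammar signature proper, a maximal type such as the hypothetical AuxVerb here must satisfy every constraint inherited along both paths, which is how vertical lexical generalizations are stated once rather than entry by entry.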
The () feature is itself highly structured and represents both syntactic () and semantic () properties. The  feature is used in the description of various types of non-local dependency. The features - (‘head daughter’) and -- (‘non-head-daughters’) (in ()) model the constituent structure of linguistic objects, and are often omitted from illustrative AVMs in favour of the use (for illustrative purposes) of treebased representations.6 Good sources for detailed overviews of HPSG include Müller (a), Sag (), chapters  and  of Ginzburg and Sag (), and Kathol et al. ().7 The lexicon plays a most important role in HPSG as the treatment of many syntactic phenomena is lexicalized. The lexicon as a whole and descriptions of individual words

6 The version of HPSG outlined in Ginzburg and Sag () uses hd-dtr and dtrs, where the latter also includes a sign token-identical to the value of hd-dtr. Nothing crucial follows from these small differences. The feature geometry of SBCG differs principally in that information about daughters is not represented internally to the sign but in constructs, feature structures with mtr (‘mother’) and dtrs (‘daughters’) features, where dtrs is a nonempty list of signs. Since these differences are not directly relevant to our concerns here, we will not discuss them further.
7 Where possible we have explained the intended ‘meaning’ of attribute names in the text. Attribute names are sometimes abbreviated in the AVMs for compactness. Abbreviations include  (),  (),  (), and  ().

(roots and stems) are highly structured. Vertical generalizations holding over the set of elements which are members of a specific word class or subclass are captured in the lexical inheritance hierarchy (a type system and associated constraints). Horizontal generalizations can be captured by lexical rules, which are intended to capture relationships between lexical elements.

The issue of how lexical rules should be conceptualized and formalized was the focus of considerable interest in HPSG from the early s, the central question being whether lexical rules should be viewed as meta-level or description-level statements. In the former case, they are seen as stating relations between lexical entries (that is, between descriptions of objects) (Calcagno ), and hence lie ‘outside’ the lexicon itself. In the latter case, they are seen as stating relations between the objects themselves (Meurers , ). A simple formalization of the latter (description-level) view is to express the relations within the typed feature structures in such a way that the ‘input’ is the lex-dtr (‘lexical daughter’) of the ‘output’ of the lexical rule (Krieger and Nerbonne ; Riehemann ; Meurers , ). In this way, a lexical rule can be thought of as a unary rule. Example () shows in very schematic form (following Müller a) a possible lexical rule format, in which the ‘output’ phonology is some function of the ‘input’ phonology. In () (and similarly in subsequent feature structures) 1 indicates token-identity.

lexical-rule
[ phon     f( 1 )
  synsem   …
  lex-dtr  stem [ phon    1
                  synsem  … ] ]

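Read as a unary rule, the schema above says that the output embeds its input as a daughter and derives its phonology from it. A minimal sketch under invented names follows (the entries and the suffixation function are illustrative, not drawn from the HPSG literature):

```python
# Toy description-level lexical rule: the input entry is embedded as the
# LEX-DTR of the output, and the output PHON is f(input PHON).

def apply_lexical_rule(entry, f, new_synsem):
    return {
        "phon": f(entry["phon"]),   # e.g. third-singular suffixation
        "synsem": new_synsem,
        "lex-dtr": entry,           # the input, shared token-identically
    }

walk = {"phon": "walk", "synsem": {"head": "verb", "vform": "base"}}
walks = apply_lexical_rule(
    walk, lambda p: p + "s",
    {"head": "verb", "tense": "pres", "agr": {"pers": 3, "num": "sg"}})

print(walks["phon"])             # walks
print(walks["lex-dtr"] is walk)  # True: token identity, like the tag 1 above
```

The object identity of the embedded daughter is the programming-language counterpart of the boxed tag 1 in the AVM: both sides of the rule literally share one structure rather than copying it.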
. B 

..................................................................................................................................

.. The representation of morphological processes

... In LFG

In a lot of LFG research, there has been a tendency to provide lexical entries for morphemes, primarily as place-holders to reflect the fact that f-structure information is contributed by the morphology. Bresnan (a: ), for example, provides the lexical entry in () for the English third singular present tense verbal inflection:

() -s:

(↑ tense) = pres
(↑ subj pers) = 3
(↑ subj num) = sg


Sub-lexical rules then constrain the ways in which lexical items of all types combine to form fully inflected words (cf. Selkirk ). Simpson (: ), for example, provides the following morphological rule for the Australian language Warlpiri, which adds a case suffix (Aff) to a nominal stem (N1) to produce a nominal word (N) that can then be inserted into the c-structure:8

()  N  →  N1     Aff
           ↑=↓    ↑=↓

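Because both daughters of the rule above are annotated ↑=↓, stem and affix pour their f-descriptions into one and the same f-structure, while the word remains a single c-structure leaf. A toy rendering (invented data, not Simpson’s actual Warlpiri entries):

```python
# Toy composition of a word under N -> N1 Aff with ↑=↓ on both daughters.

def unify(f, g):
    # Merge two f-descriptions; clashing atomic values are ill-formed.
    out = dict(f)
    for attr, val in g.items():
        if attr in out and isinstance(out[attr], dict) and isinstance(val, dict):
            out[attr] = unify(out[attr], val)
        elif attr in out and out[attr] != val:
            raise ValueError(f"clash on {attr}")
        else:
            out[attr] = val
    return out

stem = {"pred": "child"}     # nominal stem N1
suffix = {"case": "erg"}     # hypothetical ergative case affix Aff
word = unify(stem, suffix)   # the inflected word N: one c-structure leaf
print(word)
```

Lexical integrity is preserved because this composition happens entirely below the word level: the syntax only ever sees the finished N and its combined f-description.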
In more recent years some authors have explored what the morphological component might look like, including an integration of realizational morphology with LFG (e.g. Spencer ; Butt and Sadler ; Ackerman and Stump ; Andrews ; Nordlinger and Sadler ; Dalrymple ). In the computational implementation of LFG (XLE), a finite-state morphological component is used to manage inflectional morphology (e.g. Karttunen, Kaplan, and Zaenen ; Kaplan and Kay ; Beesley and Karttunen ; Karttunen ; Seiss ); this component can accommodate various theories of morphology and is therefore compatible with realizational, constructional, or morpheme-based approaches. As discussed in §.., from the perspective of the syntactic component the morphology is a ‘black box’: the c-structure of LFG takes fully inflected words as its terminal nodes, but does not impose any particular constraints on how these words have been composed in the morphological component. Thus, the framework of LFG is not committed to a particular theory of the morphology, and different researchers have taken different approaches.

A number of authors have put forward proposals aimed at separating out certain properties into m(orphological)-structure, a morphosyntactic structure which models morphological well-formedness conditions. By assuming m-structure as the locus of language-specific constraints on form, it is possible to maximize the extent to which f-structures are cross-linguistically similar. The m-structure proposals arose primarily out of work on compound tenses and the auxiliary system, in languages such as English with complex auxiliary selection facts. Early studies in LFG analysed auxiliaries as raising verbs, assigning them a pred value and introducing the main verb in a complement clause (e.g. Falk ). Later analyses assume a flatter c-structure, with auxiliaries as non-subcategorizing elements which contribute functional information (e.g. tense, aspect) but no pred features. On these analyses, the main verb is the functional head of the clause, with the auxiliary providing grammatical information only (e.g. King ; Butt, Niño, and Segond ; Bresnan a).

Among many arguments in favour of the flat analysis is the fact that it allows for a consistent analysis of constructions with similar functions cross-linguistically: the f-structure will be essentially the same irrespective of whether a language expresses its past perfective with a synthetic verb form or with an analytic structure consisting of an auxiliary followed by an infinitive verb, for example. The positing of a separate m-structure allows for the idiosyncratic morphological facts associated with the past perfective

8 Simpson’s rule is slightly more complicated than shown here, but the details are not relevant to the point at hand.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi

    



construction in each language to be captured while maintaining the cross-linguistic similarity at f-structure. Thus, it is at m-structure that information about combinatory possibilities is encoded (e.g. the fact that in English the have auxiliary requires the following verb to be in the ‑en form—had eaten—while the be auxiliary requires the ‑ing form, as in had been eating). The early proposal by Butt, Niño, and Segond () introduces a mapping μ from nodes of the c-structure to elements of the m-structure, in addition to the function ϕ mapping c-structure nodes to f-structure. The form requirements within the verbal complex are stated not in f-structure, but at this level of morphosyntactic structure. This allows a clear separation of aspects of surface exponence from more cross-linguistically invariant aspects of temporal and aspectual specification.9 ()

Kim has been running.

()  f: [ pred   ‘run⟨subj⟩’
         tense  pres
         subj   [ pred ‘kim’ ]
         aspect [ prog  +
                  perf  + ] ]

    m: [ vform  fin
         dep    [ vform  pastpart
                  dep    [ vform  prespart ] ] ]

The relationship between the verbs in the c-structure and the corresponding m-structure is captured by the following phrase structure rule, in which μ is the mapping from nodes of the c-structure to elements of the m-structure, ^ denotes the immediately dominating c-structure node, and * denotes the current c-structure node. Thus, () states that the m-structure associated with the V′ is equated with the m-structure associated with the V, whereas the m-structure associated with the VP complement is associated with that of the dep in the m-structure. See Dalrymple (: –) for further discussion.

()  V′ →  V                  VP
          ↑=↓                ↑=↓
          μ(^) = μ(*)        (μ(^) dep) = μ(*)
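As a toy illustration of how such nested m-structures encode combinatory possibilities, the two English facts discussed above (have selecting a past-participle dependent, be selecting a present-participle one) can be sketched in Python. This is an illustrative re-encoding, not part of the LFG formalism or the XLE implementation:

```python
# Hypothetical sketch: build a nested m-structure of the form
# [vform ..., dep [...]] right-to-left, checking each auxiliary's
# requirement on the VFORM of its dependent.

REQUIRES = {           # combinatory facts, as in 'had been eating'
    "have": "pastpart",  # 'have' wants an -en form
    "be": "prespart",    # 'be' wants an -ing form
}

def build_m_structure(verbs, forms):
    m = None
    for verb, form in zip(reversed(verbs), reversed(forms)):
        if m is not None:
            required = REQUIRES[verb]
            if m["vform"] != required:
                raise ValueError(f"{verb} requires a {required} dependent")
        m = {"vform": form, "dep": m}
    return m

# 'has been running': has (fin) > been (pastpart) > running (prespart)
m = build_m_structure(["have", "be", "run"], ["fin", "pastpart", "prespart"])
```

Swapping the two participle forms raises an error, mirroring how the m-structure level filters out ill-formed auxiliary chains before they ever matter to f-structure.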

Frank and Zaenen () make different assumptions about where the m-structure fits in the overall architecture. Working on the assumption that case and agreement features should also be represented at m-structure, they point out some issues with the architecture assumed by Butt, Niño, and Segond (). Their main focus is past participle agreement in object relative clauses in French: the participle shows agreement with its object just in case it is preceded by it, a situation which arises in relative clauses. In this case, the participle must in fact have access to the agreement features of the relative pronoun, which agrees with

9 Here dep is an abbreviation for ‘dependents’, and vform for ‘verb form’. pastpart and prespart refer to ‘past participle’ and ‘present participle’, respectively.





the head noun and may be unboundedly distant—an example is (b) (from Frank and Zaenen ): ()

a. Les enfants adorent les histoires qu’on leur a déjà racontées mille fois.
   ‘Children are fond of the stories that one has told them already a thousand times.’
b. Les enfants adorent les histoires qu’on sait bien qu’on leur a déjà racontées mille fois.
   ‘Children are fond of the stories that one knows perfectly well (that) one has told them already a thousand times.’

Frank and Zaenen () argue that in the architecture of Butt, Niño, and Segond () long-distance functional uncertainty statements over the m-structures (which essentially recapitulate the hierarchy of the f-structure) are required to capture such long-distance agreement dependencies. Instead they propose a different architecture in which μ is projected from the f-structure, so that morphological constraints can be stated locally: the m-structure is not connected, but simply encodes morphosyntactic aspects of that piece of f-structure. A simple example of subject–verb agreement on this view is illustrated with the lexical entry for the French verb tournera in (), which shows the distribution of information to the f-structure (encoded with ↑) and the m-structure (encoded with ↑μ).10 () tournera: V (↑ ) = ‘ ’ (↑μ ) =  (↑μ ) = + (↑ ) =  (↑ μ ) =  (↑ μ ) =  Sadler and Spencer () expand on the m-structure analyses to propose a distinction between morphological features (m-features) and syntactic features (s-features), where the latter are the more familiar f-structure attributes while the former are those features which regulate morphological form in the m-structure. While the mapping between the two will be straightforward in many cases (such as when a verb form realizing the m-feature [Tense: Past] also contributes the f-structure information  ), the distinction between the two types of features allows for an account of mismatch cases, where a verb marked with present tense morphologically is involved in a construction which means simple past, for example (Sadler and Spencer ). In a recent paper Dalrymple () provides an explicit proposal for integrating the morphological component in the LFG architecture, incorporating much of this previous work.

10 Another related proposal is that of Falk (), who argues for a level of grammatical marking structure projected from f-structure.





... In HPSG

A cornerstone of lexicalist theories such as HPSG (and LFG) is the clear separation of syntax from morphology: in particular, syntactic machinery is not extended into inflectional morphology, and the principles which govern syntax are not assumed to extend to inflectional or derivational processes.11 The overwhelming majority of work on inflectional morphology in HPSG espouses a realizational approach (such as that of Paradigm Function Morphology (PFM; Stump )), in which affixes are not themselves signs and affixation is carried out by morphological functions. However, alternative approaches are also found (e.g. the morpheme-based work of van Eynde  and the constructional approach of Riehemann , ). Providing an appropriate marriage of realizational morphology (such as PFM) with HPSG poses many non-trivial issues, leading to a rich vein of work on this topic, including Bonami and Boyé (, ), Crysmann and Bonami (, ), and Bonami and Crysmann (). Among the issues relevant to a treatment of morphology which have been addressed to a significant extent in the HPSG literature are the following: (i) the nature and status of lexical rules; (ii) the role and status of the basic ontology, with respect to issues of morphological productivity, and the possible role of online type creation; (iii) the delimitation of syntax from morphology and the treatment of interface phenomena (e.g. the status of ‘clitics’); (iv) the role and nature of defaults; and (v) the development of a sign-based approach to morphophonology. We discuss some of these in the relevant sections below.

. T   

..................................................................................................................................

.. Word formation

... In LFG

Word formation and compounding in LFG are primarily lexical processes and are therefore dealt with in the morphological/lexical component, rather than the syntax. Since Bresnan’s early work on the passive construction (Bresnan b), the standard LFG approach has been to assume lexical redundancy rules that introduce lexical alternations in predicate–function mappings. The ensuing syntactic differences then result from the interaction of these alternative predicate–function mappings with regular syntactic principles (such as completeness, coherence, and function–argument bi-uniqueness) (see Bresnan a: ch.  for detailed discussion). This approach follows from the standard assumption in morphology (e.g. Aronoff , ) that processes such as derivation and compounding are morpholexical and that, therefore, the inputs to these processes must also be morpholexical. From this it follows that relation-changing processes such as passivization, causativization, applicativization, and so forth, that can be shown to be inputs to lexical

11 For example, Sag (: ) observes: “I assume that a largely autonomous set of constraints characterize the relation between the phonological and morphological aspects of signs.”





processes of derivational morphology in some languages (such as nominalization), must also be formed in the lexical component (Bresnan a: ).12 Bresnan and Moshi () show how the formation of passives, applicatives, and reciprocal verbs in Bantu languages such as Kichaga and Chicheŵa can be captured in terms of morpholexical operations on argument structure which suppress, add, or bind roles. These operations can be considered to be associated with the corresponding derivational morphology so that they are unified with the verbal argument structure on affixation (see also Bresnan and Kanerva ). The passive operation, for example, suppresses the highest argument in the verb’s argument structure, which prevents it being linked to the subj grammatical function and results instead in the linking of the patient argument to subj (via regular argument linking principles) (see Dalrymple : ch.  for detailed discussion of this aspect of the theory). A similar approach to verb derivation is found in the analysis of Chicheŵa applicatives by Alsina and Mchombo (), and in the approach to applicatives in complex predicates in Murrinhpatha (Australia) taken by Seiss and Nordlinger (). Baker et al. () also use a lexical alternation rule in their account of the external possession incorporation construction in Wubuy (Australia) and its interaction with noun incorporation.
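A lexical redundancy rule of this kind can be sketched as an operation on argument structure. The sketch below is a deliberately simplified toy (the role names and the two-function linking are illustrative, not LFG's lexical mapping theory):

```python
# Toy model: the passive suppresses the highest argument, so that
# default linking maps the patient to SUBJ rather than OBJ.

def passivize(arg_structure):
    """arg_structure: thematic roles ordered highest-first."""
    highest, *rest = arg_structure
    return {"suppressed": highest, "args": rest}

def link(arg_structure):
    """Toy linking: highest remaining role -> SUBJ, next -> OBJ."""
    gfs = ["subj", "obj"]
    return dict(zip(gfs, arg_structure))

active = ["agent", "patient"]
assert link(active) == {"subj": "agent", "obj": "patient"}

passive = passivize(active)
assert link(passive["args"]) == {"subj": "patient"}
```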

... In HPSG

A major theme in HPSG work concerns the nature and role of the lexicon and the mechanisms by which lexical generalizations (and hence also derivational relations) between lexemes may be most adequately modelled. In early HPSG and related work, lexical rules were essentially external operations or functions applying to feature structures (Pollard and Sag ; Flickinger ). An important strand of work over the last twenty or so years has addressed in various ways the issue of bringing lexical generalizations ‘inside’ the basic architecture (Meurers , ). In early work, Krieger and Nerbonne () (also Krieger and Nerbonne ) outline an approach to both inflection and derivation using complex feature descriptions. The basic proposal is to interpret a lexical rule as an information-bearing object “indistinguishable in its form and meaning from other entries of the lexicon”. The leading idea is that derivation is within the lexicon, not an external process which then populates the lexicon. They propose an approach in which complex morphs (i.e. derived words) have dedicated features encoding their morphological structure, regulated by constraints over that structure (e.g. constraints over the order of stem and affix, feature inheritance principles, subcategorization, and so forth). In related work, Krieger () proposes a morphemic, word-structure approach to the derivation of ‑bar adjectives and vor- prefixation in German, in which the morphotactics are handled by attributes representing internal structure within a morphological attribute, and realization is handled by a realization function. The basic geometry for word formation is shown in (). A slightly

12 In fact, argument structure and LFG’s lexical mapping theory have been the subject of a large amount of research and debate in recent years, and there is not space in this chapter to do it justice. For further discussion see, among others, Asudeh, Giorgolo, and Toivonen (), Müller and Wechsler (), and the enormous array of references provided in Bresnan et al. (: –).





simplified version of the feature description for ‑bar on this approach (which revises Krieger and Nerbonne ) is given in (), showing the inheritance of the valence information from the verbal stem to the derived adjectival form.13

()  complex-word
    [ morph  word-morphology
      syn    word-syntax
      sem    word-semantics
      dtrs   affix-word-structure ]

()  bar-suffix
    [ morph    affix-morphology [ form  -bar ]
      syn|loc  [ head|maj  a
                 subcat    ⟨ subj 1, comps 2 ⟩ ] ]
    (attaching to a stem of type bar-verb with syn|loc|subcat ⟨ obj 1, comps 2 ⟩)

13 Subcategorization requirements are encoded by the feature subcat in () and the feature val in (). comps stands for complements. The feature maj encodes part-of-speech information. In later work, the feature arg-st lists the syntactic arguments of a head.

The account of derivational morphology in Krieger () depends on having feature structures for affixes. An alternative aimed at avoiding the phrase structure (word-syntactic) aspect of Krieger and Nerbonne () and Krieger () is Riehemann (), which develops an approach in which the formative ‑bar is not a suffix with its own lexical entry, or simply phonological material, but is represented essentially as a schema in a type-based approach to derivational morphology. Important considerations underlying this work are to capture both the very productive, fully regular derivational process and the subregularities (for example, ‑bar adjectives from verbs with dative objects such as unausweichbar ‘inescapable’). Her approach uses only monotonic multiple inheritance (i.e. it does not have recourse to defaults). She posits a productive maximal subtype of trans-bar-adjective to model the productive regular case and a number of lexicalized maximal types for other ‑bar adjectives. The idea is that any appropriate stem (of type trans-verb) can be used with the productive reg-bar-adj type, shown in () (Riehemann’s original also contains constraints





over the feature representing the semantic argument structure, but we abstract away from these details here for simplicity and greater readability).14 On this approach, derivational affixes do not have lexical entries. In () and elsewhere ⊕ indicates the relational constraint append, used to concatenate two lists.

()  reg-bar-adj
    [ phonology   1 + bar
      morph-b     ⟨ trans-verb [ phon        1
                                 synsem|loc  [ cat|val|comps  ⟨…⟩ ⊕ 3 ] ] ⟩
      synsem|loc  [ cat  [ head     adj
                           valence  [ subj   ⟨…⟩
                                      comps  3 ] ] ] ]
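The effect of the productive reg-bar-adj schema can be mimicked with a small function: the -bar formative has no entry of its own, and the verbal stem's object is identified with the derived adjective's subject while remaining complements are carried over. This is a hypothetical sketch, not Riehemann's type-based formalization, and the example stem is invented:

```python
# Toy -bar schema: build an adjective directly from a transitive
# verbal stem, with no lexical entry for the affix itself.

def reg_bar_adj(verb):
    assert verb["type"] == "trans-verb"    # only transitive stems qualify
    obj, *rest_comps = verb["comps"]
    return {
        "phon": verb["phon"] + "bar",      # stem phonology + 'bar'
        "head": "adj",
        "subj": [obj],                     # verb's object -> adjective's subject
        "comps": rest_comps,               # remaining complements inherited
    }

essen = {"type": "trans-verb", "phon": "ess", "comps": ["obj-np"]}
adj = reg_bar_adj(essen)   # essbar 'edible'
```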

The issue of how morphological productivity can be captured is discussed in Koenig and Jurafsky (), who observe that Pollard and Sag (, ) use a compiled-out type hierarchy to capture the common properties of words and lexical rules for productivity, and propose instead to underspecify the type system to deal with lexical productivity. They store a type for each root and for each productive morphological template and propose an algorithm for building types for surface forms, called online type construction (OLTC). Koenig () provides extensive further discussion of this issue within an HPSG context. The fundamental idea is that the (lexical) type hierarchy is organized into essentially orthogonal, conjunctive dimensions: maximal (leaf) types are inferable (by OLTC) by inheritance from one maximal (leaf) type in each dimension. This cross-classificatory approach using the intersection of leaf types captures horizontal generalizations across the lexicon. The relatively small literature on derivation and compounding in HPSG includes Desmets and Villoing (), which offers a morphological approach to French VN compounds such as tournevis ‘screwdriver’ and grille-pain ‘toaster’. They use the type hierarchy in () (Bonami and Boyé ), where objects of type lex-sign (words and lexemes) have an attribute m-dtrs with values of type lexeme, and lexeme has the attribute stems with a value of type stem-space.

()  sign
      syn-sign:  phrase, word
      lex-sign:  word, lexeme

14 Riehemann’s morph-b attribute stands for morphological bases.





Desmets and Villoing () extend this type hierarchy to allow for lexemes with ‘complex morphology’ by introducing an additional dimension of classification for lexemes. Lexemes which are morphologically complex (of type morph-complex-lex) are subclassified into compound, derived, and converted subtypes. The essence of the approach to the representation of compounds can be ascertained from the following constraint on the type vn-lexeme, which specifies that a Verb–Noun compound (in French) is syntactically a Noun formed by combining a verbal and a nominal stem in the morphology.

()  vn-lexeme →
    [ stems|slot1  3 ⊕ 4
      synsem       [ cat  noun
                     … ]
      m-dtrs       ⟨ v-lex [ stems|slot1  3 ],
                     n-lex [ stems|slot1  4 ] ⟩ ]
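The gist of the vn-lexeme constraint can be sketched as a function over two lexeme-like records (a hypothetical rendering; the attribute names are simplified):

```python
# Toy VN compounding: syntactically a noun, with a stem that
# concatenates the verbal and nominal stems of the two
# morphological daughters.

def vn_lexeme(v_lex, n_lex):
    assert v_lex["cat"] == "verb" and n_lex["cat"] == "noun"
    return {
        "cat": "noun",                           # the compound is a noun
        "stem": v_lex["stem"] + n_lex["stem"],   # V-stem + N-stem
        "m_dtrs": [v_lex, n_lex],                # morphological daughters
    }

tournevis = vn_lexeme({"cat": "verb", "stem": "tourne"},
                      {"cat": "noun", "stem": "vis"})   # 'screwdriver'
```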

.. Inflection

... In LFG

As discussed in §.., the framework of LFG is not committed to a particular treatment of inflection, and thus there has been little LFG work on a general theory of inflectional morphology. However, the flexibility of the LFG architecture has enabled interesting accounts of certain inflectional phenomena that have been less straightforwardly handled in other frameworks. In this section we focus particularly on multiple exponence and constructive morphology.

....   Cases of multiple exponence, especially those where the same inflectional category is marked across multiple words in the clause, can be problematic for approaches which hold that inflectional morphology is related to abstract functional nodes in the syntax (see Niño  for a more detailed discussion of such approaches and the difficulties). The unification-based nature of LFG, and the fact that all morphosyntactic information unifies into the same clausal f-structure, means that the number of times that compatible information can be contributed (and unified) is limitless, and will always result in the same f-structure. Thus, cases of multiple exponence are not only straightforwardly captured in this framework, but require no additional mechanisms or assumptions; the multiple instances of the same inflectional feature are simply unified in the f-structure, as shown in the Finnish example below (from Niño ).





()

e- n puhu-nut - speak-. ‘I did not speak.’

ip

()

i en

vp pred ‘speak’ tense past pol neg

v

1sg puhunut

pred ‘pro’ pers 1 num sg

subj

1sg

In the Finnish example in (), we see that both the negative polarity item and the verb carry inflection encoding singular number for the subject. In the LFG analysis of this sentence, a simplified form of which is shown in (), this information is unified with the same f-structure associated with the subject in both cases. Thus, the appearance of the singular subject inflectional feature on multiple parts of the clause does not change the f-structure of the sentence, but is an issue of relevance for the morphological component only. If the morphology generates words of different word classes which mark the same inflectional feature, this situation will be accommodated by the syntactic component. In contrast, syntactic models of inflectional morphology treat each inflectional feature as being associated with a functional head, and thus multiple instantiations of a single feature can only be accounted for through other processes such as feature percolation or copying (e.g. Mitchell ). Nordlinger and Bresnan () show that this general approach can also be extended to account for situations in which distinct, yet compatible, information is contributed by different parts of the clause, and unified at the f-structure. They discuss the case of distributed tense/aspect/mood (TAM) marking in the Australian language Wambaya. In Wambaya, TAM is marked on both a second position auxiliary and on the verb. These two elements need not be contiguous in the clause, since Wambaya clauses have grammatically free word order (apart from the second-position auxiliary). Crucially, however, the TAM information contributed by each element is not identical; rather the two interact to mutually determine the TAM for the clause. According to the Nordlinger and Bresnan () analysis, this is accounted for by treating the categories of tense and mood in Wambaya as composites of three primitive binary features, as follows: ()

a. past:        [ past +,  future –,  uncertain – ]
b. present:     [ past –,  future –,  uncertain – ]
c. future:      [ past –,  future +,  uncertain + ]
d. imperative:  [ past –,  future –,  uncertain + ]

The TAM inflections on the auxiliary and verb each encode (partial) combinations of these features which then combine in the clausal f-structure to fully specify the TAM value for the clause as a whole. An example (taken from Nordlinger and Bresnan ) is provided in (); for a more complete discussion of the analysis see Nordlinger and Bresnan (). ()

a. Ngawu   nyu-ng-u                         ngaj-ba
   1sg     2sg.a-1.o-[past –, future +]     see-[uncertain +]
   ‘You will see me.’  (future tense)

b.  IP
    ├─ NP  (↑ foc) = ↓
    │    └─ N: ngawu
    ├─ I   ↑=↓ : nyu-ng-u   [past –, future +]
    └─ S   ↑=↓
         └─ V  ↑=↓ : ngaj-ba   [uncertain +]

    clausal f-structure (partial):  [ tense  [ past –, future +, uncertain + ] ]





Crucial to this analysis is the LFG notion of co-heads (Bresnan a), and in particular the fact that complements of functional categories are f-structure co-heads. It is this principle which determines that both I and V (via S) in () are f-structure heads (indicated with the annotation ↑ = ↓), and therefore their f-structures are identified with the f-structure of the clause as a whole. This allows for them to each provide partial information about the clause-level property of TAM, which information is then unified into the clausal f-structure.15
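The way these partial TAM specifications unify can be sketched with a minimal unification function. This is an illustrative re-encoding, not the authors' implementation:

```python
# Toy unification over flat feature sets: merging succeeds unless two
# contributors assign conflicting values to the same feature.

def unify(f1, f2):
    out = dict(f1)
    for feat, val in f2.items():
        if feat in out and out[feat] != val:
            raise ValueError(f"unification clash on {feat}")
        out[feat] = val
    return out

aux = {"past": "-", "future": "+"}       # contributed by nyu-ng-u
verb = {"uncertain": "+"}                # contributed by ngaj-ba
clause_tam = unify(aux, verb)            # composite: future tense
```

Note that unifying the same feature twice with the same value is harmless, which is exactly why plain multiple exponence needs no extra machinery.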

....   The ‘constructive morphology’ approach grew out of Nordlinger’s () analysis of case stacking in the Indigenous languages of Australia. In this approach, Nordlinger argues that the case morphology in these largely non-configurational languages encodes information directly about the larger syntactic context in which the nominal appears. Inflecting a nominal with ergative case, for example, does not just mark the nominal as having ergative case, but also encodes the fact that the nominal is functioning as the transitive subject in the larger clause. Thus, on this view, case markers do not just reflect grammatical relations, but actually play a central role in constructing them. Nordlinger () shows how this approach can provide a natural and straightforward account for a range of complex morphological phenomena in these languages that are challenging for other theoretical frameworks, such as multiple case marking (case stacking) and the use of case morphology to encode clausal tense/aspect/mood (on this, see also Nordlinger and Sadler ). As a simple illustration of the model, consider the Wambaya ergative nominal galalarrinyi-ni ‘dog-’. A traditional approach to case might assume that the ergative case marker here contributes the case information  = , and that elsewhere in the grammar this information will interact with the argument structure of the verb to ensure that the nominal is only licensed as a transitive subject. On the constructive case approach, however, the ergative case marker constructs the larger f-structure in which this nominal belongs, specifying that the nominal must hold the grammatical relation of  in a transitive clause. The information associated with the ergative case marker is shown in (), and the f-structure constructed by the whole nominal galalarrinyi-ni is shown in ().16 ()

: (( ↑) ) (↓ ) = 

()

pred ‘dog’ subj

case erg

obj [ ]

15 See Sells () for discussion of multiple exponence in Swedish and its analysis within OT-LFG, a combination of Optimality Theory with the framework of LFG (Bresnan b).
16 Note that Nordlinger’s original analysis assumes a morphemic morphological approach, but Nordlinger and Sadler () show how it can also be cast within a realizational morphology.





The idea that pieces of case morphology can contribute information to the larger f-structures within which they are contained (enabled by LFG’s inside-out function application (e.g. Halvorsen and Kaplan ; Dalrymple )) allows Nordlinger () to account for the phenomenon of case stacking (see also Andrews ) and opens up possibilities for such morphology to encode information about other aspects of this broader syntactic context, such as clausal tense/aspect/mood (Nordlinger ; Nordlinger and Sadler ) or interclausal relations (Nordlinger ). This idea has also been extended to other morphological contexts as well, such as pronominal clitics and phrasal affixes (O’Connor ) and word order freezing (Mahowald ).17
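A toy rendering of constructive case: instead of merely recording a case value, the ergative marker builds the containing f-structure, placing its nominal as the subject of a transitive clause. The sketch below stands in for LFG's inside-out function application and is not the formal machinery itself:

```python
# Toy constructive case: the case marker constructs the clausal
# f-structure around its nominal, rather than only annotating it.

def ergative(nominal_fstr):
    nominal = dict(nominal_fstr, case="erg")
    # inside-out effect: build the f-structure that contains the
    # nominal as SUBJ, requiring an OBJ so the clause is transitive
    return {"subj": nominal, "obj": {}}

clause = ergative({"pred": "dog"})   # galalarrinyi-ni 'dog-erg'
```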

... In HPSG

From the outset there has been a fairly general consensus in favour of adopting broadly inferential-realizational approaches to inflectional morphology in HPSG, although of course the framework itself does not preclude the development of other theoretical approaches (see, e.g., the remarks in Krieger and Nerbonne () on the advantages of adopting a ‘lexemic’ view over a ‘morphemic’ view, and similar views in Miller and Sag ).

....  Krieger and Nerbonne () present an early attempt to define the central notion of paradigm in feature structures directly associated with syntactic information. Their approach makes use of defaults in the lexicon. They express paradigms directly in feature structures by defining them as disjunctions and using distributive disjunction to link the alternation on morphosyntactic features to that in formal expression. In () for the present tense weak paradigm of German, p is the name of the distributive disjunction which associates pairwise elements from the  and  (‘agreement’) attributes. ()

stem morph

2 3 {p1‘e’, ‘st’, ‘t’, ‘n’ ,‘t’, ‘n’ }

ending form

2 ∧ 3

syn|local|head|agr

p1

per 1st num sg

per 2nd num sg



per 3rd num pl
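The effect of a distributive disjunction can be mimicked by pairing the two value lists positionally. The sketch below uses the standard German weak present endings and is an illustration, not Krieger and Nerbonne's feature-structure encoding:

```python
# Toy distributive disjunction: one named disjunction (here p1) links
# the choice of ending to the choice of agreement values, pairwise.

ENDINGS = ["e", "st", "t", "en", "t", "en"]   # German weak present
AGR = [("1", "sg"), ("2", "sg"), ("3", "sg"),
       ("1", "pl"), ("2", "pl"), ("3", "pl")]

p1 = dict(zip(AGR, ENDINGS))   # the pairwise association

def present_form(stem, per, num):
    return stem + p1[(per, num)]

assert present_form("lach", "2", "sg") == "lachst"
assert present_form("3", "3", "pl") if False else True  # (guard: no-op)
```

Choosing an agreement cell thereby fixes the ending, and vice versa, which is exactly the coupling the distributive disjunction expresses declaratively.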

....   The issue of expressing PFM-inspired analyses in HPSG is the focus of a number of recent papers. Bonami and Boyé (, ) address the issue of inflectional irregularity and in particular how the representation of multiple (morphomic) stems (for a lexeme) should be

17 In contrast, Malouf () argues that HPSG is able to account for the Australian case stacking data without the use of inside-out function application by assuming a concord constraint that recursively propagates the case feature of a head onto all of its dependents (which must necessarily include adjuncts as well).





accommodated in the HPSG architecture. A brief outline of Bonami and Boyé () gives the flavour of this strand of work. They propose that the feature stems is appropriate for members of the type lexeme (the relevant portion of the type hierarchy is given in () above). Classes of lexemes default to showing a regular stem pattern, while irregular lexemes are lexically specified with the appropriate stem phonologies, as shown in () for the (French) verb valoir. The constraint in () specifies that elements of the type lexeme have a stems attribute (with a value of type stem-space) while () states that the value of stems defaults to the type regular (the default value is preceded by ‘/’), in which all the stem forms are in fact identical. Note that attributes under stems have phonological forms as values (here val and vo), and hence the treatment is entirely morphomic. Example () illustrates the approach, showing the first person plural present indicative inflectional rule, which is realized by appending ɔ̃ to the phonological form which is the value of slot1 in the stems feature structure.

()  lexeme → [ stems  stem-space ]

()  verb-lexeme → [ stems  /regular ]

()  valoir:  [ stems  [ slot2  val
                        slot3  vo ] ]

()  prst-indic-1pl →
    [ word
      phon    1 ⊕ ⟨ɔ̃⟩
      synsem  [ head  [ verb
                        tense  present
                        mood   indicative ]
                subj  … ]
      m-dtrs  ⟨ v-lexeme [ stems|slot1  1 ] ⟩ ]

The use of default values as in () is not simply an abbreviatory device permitting compact statements of generalizations: as Bonami and Boyé () point out, it is intended to constrain the members of an open lexicon, so that new or unknown verbs will be inflected as regular items.
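The default-plus-override behaviour of stem spaces can be sketched as follows (a toy model: the three slots and the particular slot assignments are assumed for illustration):

```python
# Toy stem space: slots default to one regular stem; an irregular
# lexeme like valoir lexically overrides particular slots.

def stem_space(regular_stem, overrides=None):
    slots = {f"slot{i}": regular_stem for i in (1, 2, 3)}  # default: identical
    slots.update(overrides or {})                          # lexical exceptions win
    return slots

parler = stem_space("parl")                     # fully regular verb
valoir = stem_space("val", {"slot3": "vo"})     # irregular in slot 3

assert parler["slot2"] == "parl"
assert valoir["slot3"] == "vo" and valoir["slot2"] == "val"
```

A newly coined or unknown verb simply receives the default, which is the constraining role of defaults over an open lexicon noted above.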

....    Crysmann and Bonami () address the issue of encoding an inferential, realizational treatment of variable morphotactics in HPSG (and see also earlier, related papers on this topic: Crysmann and Bonami ; Bonami and Crysmann ). The approach is templatic, differing crucially from the inferential, realizational approach of PFM in that it characterizes the placement of morphs without reference to the stem, and hence eschews





the organization of realization rules into successively applied rule blocks. Generalizations over classes of rules are expressed by organizing realization rules into inheritance type hierarchies.18 The templatic approach allows a simple treatment of a range of departures from canonical (stem-centric) placement, including misplaced alignment, where elements which are in opposition occur in different linear positions; conditional placement, where the placement of a morph is dependent on the presence of other morphosyntactic features which it does not express; free ordering; partial ordering; and cases where shared forms are positionally disambiguated (such as the Swahili subject and object markers, which are largely identical but appear in different positional slots). The fundamental units of description are morphs, or segmentable formatives, associated with position class and phonological information. Realization rules are expressed as typed feature structures. A rule typically encodes what features it realizes and specifies its form: the attribute mud stands for ‘morphology under discussion’, and mph encodes the phonological form (ph) and the position class (pc) information of the morph. A crucial component of the approach is the distinction between conditioning and expression, so that rules can also impose constraints on features which they do not realize, by means of the ms feature, which captures any such (co-occurrence) constraints on the morph. For example, in the context of negation, past tense in the Swahili verb is realized by a morph ku in position class 3, while it is realized by li in other contexts. Example () shows the negative (context) allomorph, where ∪ denotes set union.

()  [ mud  { past }
      ms   { neg } ∪ set
      mph  ⟨ [ ph  ⟨ku⟩
               pc  3 ] ⟩ ]
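The conditioning/expression split can be given a procedural sketch: each rule lists the features it realizes (its mud) separately from the context features it merely requires. The rule format below is invented; only the ku/li alternation under negation is taken from the text:

```python
# Toy realization rules: 'mud' = features the rule expresses;
# 'condition' = context features the rule requires but does not express.

RULES = [
    {"mud": {"past"}, "condition": {"neg"}, "ph": "ku", "pc": 3},  # negative context
    {"mud": {"past"}, "condition": set(),  "ph": "li", "pc": 3},   # elsewhere
]

def realize(feature, word_ms):
    for rule in RULES:   # more specific (conditioned) rule listed first
        if feature in rule["mud"] and rule["condition"] <= word_ms:
            return rule["ph"]
    raise ValueError("no applicable rule")

assert realize("past", {"past", "neg"}) == "ku"
assert realize("past", {"past"}) == "li"
```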

Portmanteau forms are captured by associating a set of features (as the value of mud) with a single exponent, and null exponence by assuming a non-realization default for a morphosyntactic property. Discontinuous exponence is exemplified by () for regular negation in Chintang, which involves the circumfix ma- … -yokt.

()  [ mud  { neg }
      mph  ⟨ [ ph  ⟨ma⟩
               pc  1∨2∨3 ],
             [ ph  ⟨yokt⟩
               pc  5 ] ⟩ ]

18 In fact, to deal with cases of horizontal redundancy, such as the systematic relationship exhibited by the Swahili subject and object marker paradigms, which are mainly identical but in different position classes, Crysmann and Bonami () additionally use online type construction (Koenig and Jurafsky ), which involves a closure operation on a type-underspecified hierarchical lexicon partitioned with conjunctively interpreted dimensions.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi




In this approach, rules pair morphosyntactic properties and sets of exponents (building on Crysmann ) in a flat structure of segmentable morphs, rather than incrementally adding exponents to a stem. Stem introduction rules introduce the stem, associated with a particular templatic position. For example, () (for the Swahili verb) places the stem shape associated with (the lexical identity of) a lexeme in position class 6. ()

[ mud { }
  mph [ lid 1 , ph ⟨ stem( 1 ) ⟩ , pc 6 ] ]

Two principles of well-formedness constrain the type word. Inflectional morphology is represented by attributes encoding the relation between the MS set, the set of realization rules (rr), and the set of morphs (mph) indexed for position. Morphotactic well-formedness is ensured by a constraint on the type word shown in (). The ms value of the word is the union of the mud values of the morphs, ensuring that every morphosyntactic property is realized; that is, it ensures that the morphosyntactic features expressed by the rules must match up to produce the property set of the word.19

()
word → [ ms  0  e1 ∪ … ∪ en
         rr ⟨ [ mud e1 , ms 0 , mph m1 ] , … , [ mud en , ms 0 , mph mn ] ⟩
         mph  m1 ⊎ … ⊎ mn ]

In terms of exponence, morphs are required to occur in the order given by their position class indices (which are a property of morphs rather than of rule blocks in this approach): this is captured by two further constraints, constituting the Morph Ordering Principle (MOP). Example (a) requires the phonology of the word to be the concatenation of the phonologies of the set of morphs while (b) requires position class order to be respected.

19 ⊎ is non-trivial set union and ensures that each property is contributed only once.


     ()

a.

ph word →

b. word → ¬

mph



1 ⊕ ... ⊕ n ph

1 ,.... ph

n

ph list ⊕ l ⊕ r ⊕ list ph l ph r mph , pc n ,... pc m

^m≥n

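The two well-formedness checks just described can be sketched as follows. This is an illustrative reconstruction in plain Python, not the HPSG type constraints themselves, and the sample morphs (including the stem shape) are invented for the purpose of the example:

```python
# Hedged sketch of word-level well-formedness:
# (i) the word's property set is the non-trivial union of the morphs' mud values;
# (ii) Morph Ordering Principle: position-class indices strictly increase.

def check_word(word_ms, morphs):
    muds = [m["mud"] for m in morphs]
    # (i) every property realized, and realized only once
    if sum(len(s) for s in muds) != len(word_ms):
        return None
    if set().union(*muds) != word_ms:
        return None
    # (ii) no morph may precede one with a lower-or-equal position class
    pcs = [m["pc"] for m in morphs]
    if any(a >= b for a, b in zip(pcs, pcs[1:])):
        return None
    # phonology of the word = concatenation of the morphs' phonologies
    return "".join(m["ph"] for m in morphs)

# Hypothetical morphs (forms and slots are illustrative):
morphs = [{"mud": {"neg"},  "ph": "ha",   "pc": 1},
          {"mud": {"past"}, "ph": "ku",   "pc": 3},
          {"mud": set(),    "ph": "soma", "pc": 6}]
```

A morph sequence whose position classes are not strictly increasing is rejected, mirroring clause (b) of the Morph Ordering Principle.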
....      Some work in morphology in HPSG makes use of the additional flexibility in the relationship between the surface string and the surface constituent structure afforded by making use of a non-concatenative approach to linear order known as domain or linearization theory in HPSG (e.g. Reape ; Kathol , ) (and see in this connection the distinction between phenogrammar and tectogrammar discussed in Dowty ). Analyses adopting this (powerful) flexibility add a list-valued feature to sign representing its word order domain, and allow elements to ‘escape’ from the domain of their local constituent subtree, so that linear order does not (necessarily) reflect the yield of the constituent structure. This provides, for example, an approach to discontinuous constituency and to free word order phenomena.20 To illustrate the application of domain theory to morphology we briefly discuss Crysmann’s () analysis of Polish floating affixes in linearization HPSG.21 Examples ()–() illustrate the basic phenomenon. The past tense agreement marker shows evidence of affixal status in the case of the agreeing form of the participle in (), but word-internal morphophonological effects (effect on lexical stress, raising of o to ó ([u]) in word final syllables before voiced consonants, and yer vocalization (Booij and Rubach )) are absent when the agreement marker is preverbal, as in (). The absence of these word-internal effects is unexpected if the past tense agreement marker is an affix. () (ty) widział-eś tę książkę. you see- this book ‘You saw this book.’ () Daleko-m poszła. far- went ‘I went a long way.’ The fundamental proposal of Crysmann () is that the rules of morphological exponence treat the realization of agreement uniformly but the mapping of phonology to domain object permits the ‘affix’ to linearize separately. 
20 Reape’s () original order domains contain signs. Other proposals have sought to restrict order domains to smaller structures (see Kathol ; Kathol and Pollard ). 21 For other work on Polish affixes, see also Borsley (), Kupść (), and Kupść and Tseng ().

Polish past tense verbs can contribute more than one domain object to the linear domain structure. Effectively, past tense agreement is treated as a morphosyntactic hybrid in that the agreement marker is syntactically visible ‘floating’ phonology.22 The supertype of the stems with this type of mobile morphology, such as the past (participle) ł stem, does not fix the order of the stem and affix in the word order domain list, as shown by the use of shuffle (⃝) in (), where pst-agr (for past tense agreement markers) is the most general type corresponding to the inventory of person/number markers—for example, it is a supertype of the first person singular marker [ ph ⟨m⟩ ]. ()

[ morph
  stem [ head verb ]
  ⟨ … ⟩ ⃝ list(pst-agr) ]

Linearization in HPSG is extended in this analysis to permit words to project more than one single domain object (see Kathol , and Crysmann  for this idea). Lexical integrity is preserved since it is only the phonological contribution (of the morph) which may ‘float’ beyond the word. Where morphologically attached agreement markers are inseparable from the stem, a constraint expresses this restriction by requiring the lexical domain list to be of length 1.

. I  S

.................................................................................................................................. As we have made clear, in both LFG and HPSG morphology and syntax are separate and autonomous subsystems, with the interface between them regulated by lexical integrity. Many of the issues discussed so far in this chapter relate to the morphology–syntax interface. In this section we touch (briefly) on work in both frameworks concerning clitics and edge inflection.

.. In LFG
Sadler () shows how the framework of LFG can be used to capture Spencer’s () insight that non-syllabic reduced auxiliaries in English are more appropriately treated as affixes, while the syllabic reduced auxiliaries are clitics. Examples using the auxiliary will are given in () and (); analogous facts are also found with would, and tensed forms of be and have: ()

Mary’s flu’ll (*/l/, /əl/) be gone by tomorrow.
John and Sue’ll (*/l/, /əl/) be singing all day long.
The boy who’s laughing’ll (*/l/, /əl/) go to the party.

22 The past tense markers -m, -ś, -śmy, -ście are treated as exponents of agreement (and not as tense markers).


     ()



You’ll (/l/) be able to go home at two o’clock. I’ll (/l/) be leaving tomorrow.

The syllabic reduced forms in () behave like clitics: they attach phonologically to the final element of the preceding constituent, without showing any (lexical) selection as to their host. In LFG such clitics are treated as syntactic terminals in the c-structure (just like the corresponding full auxiliary), with their particular phonological properties a matter for the interface between syntax and prosodic structure.23 The non-syllabic forms in (), however, are quite different, and can be shown to behave like inflectional affixes (bound morphs) rather than clitics. The evidence is laid out in detail in Spencer () but, in brief, amounts to the following: (i) the non-syllabic forms can attach only to non-coordinate pronominal subjects (as in ()) and are therefore highly selective as to their ‘host’ in a way that is expected of affixes, but not clitics; (ii) word-internal phonological processes apply within the ‘pronoun + non-syllabic reduced auxiliary’ unit, suggesting that it is a single morphological unit; (iii) the stem to which the non-syllabic auxiliary attaches shows stem allomorphy that is not predictable on phonological grounds (e.g. we’ll /wi:l/ becomes [wɪl]), again suggesting word-internal structure as opposed to a post-lexical clitic; and (iv) the fact that the non-syllabic auxiliary cannot scope over a coordinated subject, whereas the syllabic (clitic) auxiliary can, is behaviour that we would expect of an affix that is combined within the morphological component rather than the syntax.

Sadler () shows how these tense-inflected pronominals can be given a straightforward account within LFG using inside-out function application (as we saw in the discussion of constructive case in §....). The inflected pronoun you’ll (as in You’ll like it) has the lexical entry shown in (), corresponding to the f-structure information shown in ().
As shown in this f-structure, the inflected pronoun contributes both information about the subject of the clause and tense information to the clause itself, thus allowing for the clausal contribution of the non-syllabic auxiliary to be contributed to the f-structure without violating lexical integrity. ()

you’ll   (↑ pred) = ‘pro’
         (↑ pers) = 2
         (( subj ↑ ) tense) = fut

()

[ tense  fut
  subj [ pred  ‘pro’
         pers  2 ] ]

The basic c-structure () and f-structure () for the full sentence You’ll like it are given below.

23 See Butt and King () and Bögel et al. (), among others, for a discussion of prosodic structure in LFG and the treatment of clitics.




()  [IP [DP (↑ subj) = ↓ [D ↑ = ↓ you’ll ]]
        [I′ ↑ = ↓ [VP ↑ = ↓ [V ↑ = ↓ like ] [DP (↑ obj) = ↓ [D ↑ = ↓ it ]]]]]

()

[ pred  ‘like ⟨ subj, obj ⟩’
  tense fut
  subj [ pred ‘pro’
         pers 2 ]
  obj  [ pred ‘pro’
         pers 3
         num  sg ] ]

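The division of labour just shown can be sketched with f-structures encoded as plain nested dicts. This is our own illustrative encoding, not the formal LFG machinery: the inflected pronoun contributes its own subject features and, via the inside-out equation (( subj ↑ ) tense) = fut, a tense value one level up in the enclosing f-structure:

```python
# Hedged sketch: f-structures as nested dicts (illustrative only).

def apply_youll(clause):
    """Add the lexical contributions of "you'll" to the clausal f-structure."""
    subj = clause.setdefault("subj", {})
    subj["pred"] = "pro"       # (up pred) = 'pro'
    subj["pers"] = 2           # (up pers) = 2
    clause["tense"] = "fut"    # ((subj up) tense) = fut: installed one level up

fs = {"pred": "like<subj,obj>",
      "obj": {"pred": "pro", "pers": 3, "num": "sg"}}
apply_youll(fs)
```

After the call, the clausal f-structure carries both the subject's features and the future tense contributed by the non-syllabic auxiliary, without any separate auxiliary node in the syntax.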
An alternative approach to the affixal nature of non-syllabic reduced auxiliaries in English is provided by Wescoat (), who develops a model of ‘lexical sharing’ within the LFG framework which allows a single word to coinstantiate more than one adjacent terminal node in the c-structure. Wescoat applies this approach to a number of different morphosyntactic phenomena. Broadwell () also uses it to account for suspended affixation phenomena in Turkish. There are clear conceptual similarities between lexical sharing and approaches to co-analysis in other models, such as Sadock (), Lapointe (), or, within HPSG, Crysmann (). Clitics and related phenomena have been the subject of a large amount of work in LFG, and space limitations preclude us from discussing it all here. See, for example, Grimshaw (), Sadler (), Sharma (), O’Connor (, ), Luís and Sadler (), Luís (), Luís and Otoguro (, , ), Bögel (), Spencer and Luís (a), Lowe () and the references therein.

.. In HPSG
In this section we very briefly discuss two issues: (i) the affixal treatment of what are pretheoretically described as object clitics, and (ii) the treatment of edge inflection phenomena. A substantial body of work in HPSG argues for affixal (word-internal) treatments of various pronominal ‘clitics’, following the early influential analysis of French pronominal clitics as affixes developed in Miller and Sag (), building especially on the work of


Miller (). Miller and Sag () draw a distinction between plain-words and cliticizedwords, the latter being inflected words which also realize (at least) one argument affixally, with concomitant changes in the word’s valency requirements, shown in (). ()

[ morph [ i-form  0
          form  fpraf( 0 , … ) ]
  synsem|loc|cat [ head  verb
                   val [ subj  2
                         comps 3 list(non-aff) ]
                   arg-st ( 2 ⊕ 3 ) ⊕ nelist(aff) ] ]

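The valence alternation between plain and cliticized words can be illustrated with a small sketch. This is a deliberate simplification of Miller and Sag's analysis, with invented helper names: affixal arguments are realized word-internally, and only the non-affixal members of the argument structure remain as syntactic complements:

```python
# Hedged sketch of the plain-word / cliticized-word alternation
# (illustrative; not the actual HPSG feature geometry).

def cliticize(arg_st):
    """Split ARG-ST into affixally realized arguments and residual COMPS."""
    affixes = [a for a in arg_st if a.get("aff")]
    comps = [a for a in arg_st if not a.get("aff")]
    return {"comps": comps, "affixes": affixes}

# French 'le voit' ("sees him/it"): the object argument is affixal,
# so nothing is left on COMPS.
voit = cliticize([{"role": "obj", "aff": True}])
```

The concomitant change in valency requirements falls out directly: the cliticized word selects no syntactic object.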
The function fpraf spells out the form of the inflected word on the basis of the i-form value (the ‘classic’ inflectional form, without the pronominal affix) together with the features of the affixally realized arguments. The approach is modelled on PFM, although the morphological details are not fully specified. Similar accounts have been developed for other Romance clitics (see Monachesi ; Bildhauer , among others).

Tseng () argues that the sandhi phenomenon of consonant liaison in French is subject to a range of lexical, syntactic, and stylistic considerations rather than being a purely phonological process.24 The target (adjectival) forms are treated allomorphically in a paradigm-based approach (Bonami and Boyé , ), such that the forms are distinguished by a binary liaison feature in the morphosyntactic feature set. Trigger (versus non-trigger) words are distinguished in terms of a liaison-trigger feature: words are marked positively if their left edge can trigger liaison. Because contexts are syntactically conditioned, propagation of these features in the syntax (as edge features) is required: for example, the liaison allomorph of the masculine singular adjective grand ‘large’ is required in a liaison context (e.g. before appartement ‘flat’), even if the adjective is embedded in an AP such as très grand ‘very large’. The essence of the account is the following. Values of the edge feature are propagated in the syntax by virtue of the Edge Feature Principle in (), which makes reference to the surface order (via the word order domain). A constraint which applies to all phrases then ensures the realization of liaison, specifying that an element whose right edge requires liaison must be immediately followed by an element whose left edge triggers it.25 ()

phrase ⇒ [ edge [ left  1
                  right 2 ]
           domain ⟨ [ edge|left 1 ] , … , [ edge|right 2 ] ⟩ ]

24 An alternative, purely phonological, account in HPSG is developed in Asudeh and Klein ().
25 See also Miller’s () treatment of the definite article as a phrasal affix.

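The effect of the Edge Feature Principle can be sketched as simple percolation. This is our own simplification, with invented names: a phrase shares its left edge value with its first domain element and its right edge value with its last, so a liaison requirement buried inside a phrase becomes visible at the phrase boundary:

```python
# Hedged sketch of edge-feature percolation (illustrative only).

def phrase_edges(domain):
    """domain: surface-ordered elements with 'left'/'right' edge values."""
    return {"left": domain[0]["left"], "right": domain[-1]["right"]}

tres = {"left": "-", "right": "-"}           # tres: no liaison requirement
grand_liaison = {"left": "-", "right": "+"}  # liaison allomorph of grand
ap = phrase_edges([tres, grand_liaison])     # the AP "tres grand"
# ap["right"] now exposes grand's liaison requirement at the AP's right
# edge, where it can be matched against a following trigger word.
```

This is how the liaison allomorph of grand can be required before appartement even when grand is embedded inside très grand.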

Samvelian and Tseng () discuss mobile object ‘clitics’ in Persian, and build on both Miller and Sag’s () approach to pronominal affixes and Tseng’s () use of edge features. The object ‘clitic’ in Persian is relatively mobile, permitting realization on a range of hosts to the left of the verb. Samvelian and Tseng () argue that it is a suffix permitting a degree of promiscuous attachment to a range of different hosts. The basic generalization which they put forward concerning the distribution of the preverbal ‘clitic’ is that it is hosted by the least oblique complement of the verb (when not hosted by the verb itself). An example, with the pronominal object realized on the PP dependent, is shown in (b).

() a. [PP ru-ye miz] gozâšt-im-aš.
       on- table put--
       ‘We put it on the table.’
    b. [PP ru-ye miz]-aš gozâšt-im.
       on- table- put-
       ‘We put it on the table.’

In such cases, the affix does not bear any syntactic argument relation to the host on which it is realized. A fully lexical treatment of these cases of phrasal affixation is afforded by separating the morphological effect (of suffixation) from the syntactic and semantic contribution, at the phrasal level. The mechanism by which this is achieved involves postulating an additional parameter to the function fpraf, namely an edge feature which permits the information about the presence of the pronominal affix to be recorded and passed to the head which syntactically selects it.

. F   

.................................................................................................................................. In this chapter we have attempted to provide an overview of the general approach to morphology and morphological theory taken by researchers working within the frameworks of HPSG and LFG. We have seen that the two frameworks share the property of lexicalism and the clear separation of morphology from syntax. This means that there is flexibility within the two frameworks as to the theoretical treatment of the morphological component, although a number of researchers have addressed morphological questions of various types. In this chapter we have surveyed some of the key research in this area in each framework, but for reasons of space have not been able to do justice to all relevant work in this general domain.

For further interesting work on the relationship between morphology and syntax in LFG, see for example Cho and Sells’ () work on case markers and verbal inflectional suffixes in Korean, Sells’ () discussion of multiple exponence in Swedish, and Otoguro’s () analysis of the interaction between inflectional and periphrastic tense/aspect/mood marking in Japanese. The morphosyntactic treatment of periphrasis is also taken up in a number of works by Ackerman (e.g. Ackerman ; Ackerman and Stump ; Ackerman, Stump, and Webelhuth ).


For other work on embedding morphological realizational functions in HPSG see also Kathol (). See Bonami and Samvelian () on a PFM-based approach to Persian inflectional periphrasis in HPSG. Sag () gives a brief exemplification of compounding and derivation in SBCG. For work on Sorani morphology, see Bonami and Samvelian (), Walther (), and Bonami and Crysmann (). Different HPSG analyses of the Polish data discussed in §... are given in Borsley () and Kupść and Tseng (). Müller () addresses the issue of using default inheritance to capture derivational morphology, arguing that the cost of such a move (which avoids the need for lexical rules, however encoded) is too high. In particular this paper gives a very clear statement of the ‘closure problem’ which arises in attempting to encode productive morphological processes directly in the type system. Bonami and Crysmann () provide an up-to-date overview of morphology in HPSG.

Both LFG and HPSG have active research communities and annual conferences. We have no doubt that researchers working within these frameworks will continue to explore the nature of morphology and its representation, and we would encourage interested readers to keep an eye on their online proceedings for future developments: http://cslipublications.stanford.edu/LFG, and http://cslipublications.stanford.edu/HPSG. Comprehensive bibliographies of work within each framework are available at http://ling.uni-konstanz.de/pages/home/lfg/index.html (for LFG) and http://hpsg.fu-berlin.de/HPSG-Bib (for HPSG).

A We would like to thank the editors for inviting us to contribute this chapter and for their guidance on how to reduce two complex theoretical frameworks into a limited amount of space. For helpful comments and suggestions which led to significant improvements in the final version we are grateful to Olivier Bonami, Berthold Crysmann, Stefan Müller, and an anonymous reviewer.


  ......................................................................................................................

                ......................................................................................................................

 

. T   N M

.................................................................................................................................. Natural Morphology (NM) grew up in the late 1970s, after naturalness became an important issue discussed from several perspectives (see for instance Bruck, Fox, and La Galy ). Of particular relevance for the development of NM was David Stampe’s model of Natural Phonology (cf. Stampe , ; see also Donegan and Stampe ), which was taken up and extended to language change and to morpho-phonology, as a special case of interaction between phonology and morphology, by Dressler (, a) (see Luschützky  for a historical reconstruction). The foundational year is , when, during the LSA Summer Institute held at Salzburg, a common platform was formulated, building on three important contributions by Dressler (b), Mayerthaler (), and Wurzel ().

The specific character of NM was first laid down in comprehensive terms in Mayerthaler (). This monograph has been highly influential because it contained probably the most ambitious attempt to qualify NM as an epistemologically well-founded theory, with postulates and corollaries providing sharp predictions about what language structure should look like. However, this first model of NM was strongly criticized, because it proved inadequate to account for several different phenomena and fared particularly badly with language-specific exceptions. For this reason, Mayerthaler’s () book should be complemented by Wurzel’s (), in which a consistent theory is developed accounting for language-specific aspects of inflectional morphology. Dressler (a, a, b, ) proposes a general picture of a rather typological nature, developing an NM of word-formation processes, while Dressler (c) provides an in-depth analysis of the phonology–morphology interface from the perspective of Natural Phonology and NM.
The views of these three authors are collected together in a book which outlines the main issues debated within NM (Dressler et al. ). Since the mid-1980s, scholars interested in the issues developed within Natural Phonology and NM have regularly met at conferences—or at workshops organized during general conferences. These meetings have given rise to miscellaneous volumes of proceedings: at Eisenstadt in  (Dressler and Tonelli ), at Krems in  (Méndez Dosuna and Pensado ), at Essen in  (Boretzky and Auer ), at Krems in  (Tonelli and


Dressler ), at Kraków in  (Dziubalska-Kołaczyk ), at Maribor in  (Boretzky et al. ) and in  (Teržan-Kopecky ), at Poznań in  (Dziubalska-Kołaczyk and Weckwerth ). For a (now somewhat dated) introduction to NM, the French reader is referred to Kilani-Schoch (), while a critical discussion of NM in English can be found in Carstairs-McCarthy (: ch. ) and Bauer (: ch. ). Short overviews of NM have been provided by Wurzel (a), Dressler (a, , ), and Luschützky ().

.. The functionalist nature of NM
NM is couched within a broad functionalist framework: language exhibits on the one hand a communicative dimension, which makes reference to a (historically determined) social understanding of our linguistic interactions, and on the other a (panchronic) neurobiological rooting of speakers’ capacities, which defines the physical potentialities and limits of our socio-communicative interactions (cf. Mayerthaler : ). In line with this functionalist view, NM relies strongly on external evidence, drawn especially from language acquisition and change, in contrast to the typical approach adopted by formalist models such as Generative Grammar, which are primarily interested in the investigation of the speakers’ internal(ized) competence. In this connection, it rejects a purely formalist, deductive-nomological—in Hempel’s () sense—approach to language, and seeks to respond to the complexity of the empirical facts by appealing to substantive principles like markedness, which is the counterpart of naturalness:

“natural” is synonymous with cognitively simple, easily accessible (esp. to children), elementary and therefore universally preferred, i.e. derivable from human nature, or with the terms unmarked or rather less marked. (Dressler : )

This is made explicit through a theory of preferences rather than through strictly predictive laws or principles. In this light, naturalness does not refer to any global or overall constraint, but rather to a restricted number of naturalness parameters providing the basis of the universal theory of markedness. Such a preference theory has the advantage of making ‘local’ predictions along a given dimension which can be in conflict with what is favored by other preferences (cf. Vennemann ; Wurzel ). The resulting picture is highly dialectic in keeping with the dynamic nature of language.

.. The cognitive roots of NM
The essential role played in NM by the concept of markedness in shaping the relation between form and content can be captured by what Givón () has called the meta-iconic markedness principle: “Categories that are cognitively marked—i.e. complex—tend to also be structurally marked.” This implies that NM defends a strongly anti-separatist view of the relation between content and its formal, overt expression. This view stands in sharp contrast with the opposite “arbitrarist” view espoused by many current morphological frameworks, in which content is taken to be separate and independent from its formal expression (cf. Aronoff ; Beard ; Stump ). In the separatist view, no prediction


can be made with regard to the formal complexity of the word as the mirror of cognitive complexity. In the same vein, NM defends a strongly incremental approach to morphology insofar as it assumes that content is encoded in a preferably overt way, “incrementing” the formal substance of the base word. The opposite view, defended for instance by Stump () and Baerman, Brown, and Corbett (), maintains that content is encoded “realizationally”, that is, via its direct association with the root which licenses its formal, overt realization. Furthermore, NM is output-oriented in the sense that single concrete forms are evaluated according to naturalness parameters and universal markedness theory. In this way, both the anti-separatist and the incremental view are not assumed a priori but result concretely from the empirical investigation of language after language. In this regard, it is important to stress that this effort of empirical investigation has to be carried out on a morphological system as a whole and is not falsified by single cases relating to subparts of it.

NM makes large use of the concept of prototype as developed in cognitive psychology (cf. Dressler a). Accordingly, NM treats the categories and the components of language as characterized by fuzzy boundaries rather than as strictly separated and only associated by correspondence rules (see for instance Ackema and Neeleman :  for such a view). In this regard, inflection and word-formation must be treated in prototypical terms (cf. Dressler ; Wurzel ) as the two poles of a continuum containing in-between cases, such as diminutives (and more generally evaluative morphology), which represent non-prototypical word-formation (cf. Dressler a; Dressler and Merlini Barbaresi a; Noccetti et al. ), and participles, which are an instance of non-prototypical inflection inasmuch as they often change word class and/or approximate the behavior of agent nouns (cf. Haspelmath ; Kerge ).

.. Naturalness at the different levels of linguistic analysis
The stress on naturalness has fostered research carried out especially in natural(istic) speech contexts, for instance in connection with language acquisition or impairment, and on transitional areas of grammar traditionally marginalized in the theoretical debate. These interests characterize the research program developed especially by Dressler, who has largely investigated language acquisition with a focus on the early stages of pre- and proto-morphology (cf. Dressler and Karpf ; Voeikova and Dressler ), aphasia (Dressler and Denes ), language decay and death (cf. Dressler b, a), the phonology–morphology interface (cf. Dressler c), the morphology–pragmatics interface (cf. Dressler and Merlini Barbaresi b), the concept of submorpheme (cf. Dressler b), and the so-called extra-grammatical morphology (cf. Dressler b), as well as other concrete instantiations of our language faculty, such as text linguistics, especially with regard to the role of word-formation (cf. Dressler c, b).

. T    NM

.................................................................................................................................. The basic tenets of NM are tightly related to Charles S. Peirce’s semiotics. In this regard, naturalness as a concept has been wandering around in linguistics at least since


Roman Jakobson () and his reflections on ordo naturalis, which were strictly interwoven with Peirce’s concept of iconicity. In Jakobson’s view, the ordo naturalis reflecting the real sequence of events as in Caesar’s famous sentence Veni, vidi, vici is “iconic” in Peirce’s sense and accordingly unmarked. In fact, in Peirce’s taxonomy of the three types of sign— indices, icons, and symbols—icons are the most important and widespread. An iconic relation consists in a direct form–meaning correspondence whereby the signans immediately refers to the signatum by reflecting its concrete properties in the shape, while in the case of indices the relation between signans and signatum is of proximity (a “factual connection”) and symbols are purely conventional (for the relation between symbols and icons, see Gaeta a). Therefore, “[t]he only way of directly communicating an idea is by means of an icon; and every indirect method of communicating an idea must depend for its establishment upon the use of an icon” (Peirce : .).

.. Iconicity and semiotic parameters of naturalness
Among the three different types of icons, namely diagrams, images, and metaphors, a central role is played in NM by diagrams, “which represent the relations, mainly dyadic, or so regarded, of the parts of one thing by analogous relations in their own parts” (Peirce : .). This stands at the heart of the Principle of Constructional Iconicity (cf. Mayerthaler : ), which requires that more meaning should correspond to more form. In other words, morphologically complex words are preferably diagrams. Given the primary status of words as signs with respect to affixes, which are secondary signs because—following Peirce—the latter are “signs on signs”, words are also preferably selected as bases of derivation (cf. Dressler ). Diagrammaticity can be understood generally as a preference for affixation over conversion over subtraction (cf. Dressler : ). In semiotic terms, this results in an increasing degree of markedness, because only affixation gives rise to iconic signs, while conversion (or zero-derivation, cf. Gaeta  for a recent discussion of the issue) is non-iconic and subtraction anti-iconic (cf. Dressler , c). On this basis, affixation is predicted to be more widespread than the other two. As a matter of fact, no language seems to dispense entirely with affixation in favor of pure conversion or, even worse, subtraction. The latter in particular is further predicted to be generally absent or at best recessive, replaced diachronically by more iconic coding. For instance, the subtractive coding of noun plural found in Franconian German, e.g. hond / hon ‘dog(s)’, has become unproductive and is losing items in favor of the more diagrammatic additive coding (cf. Dressler : ). This is not to deny that there can be deviations from affixation as a preferred strategy for morphological coding.
However, the latter do not undermine the general preference for iconic coding (see, however, §.. below). Another facet of diagrammaticity is expressed by biuniqueness, which is defined as the preference for a uniform relation between form and meaning. This is due to the fact that “[p]erception (and processing by the receiver) of a signans which uniquely represents a signatum B (uniqueness or biuniqueness) is easiest, for it does not impede semiotic transparency at all” (Dressler : ). Constructional iconicity and biuniqueness provide the basis for a concrete evaluation of morphologically complex words. They are both

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi
represented as dyadic relations in which diagrammaticity emphasizes the recoverability of the base–affix (syntagmatic) relation (a), while biuniqueness focuses on its paradigmatic counterpart (b) (cf. Dressler):

() a. (A + B) ↔ (a + b)         (|| + ||) ↔ (garden-er)
   b. (A ↔ a) + (B ↔ b)         (|| ↔ garden) + (|| ↔ -er)
While constructional iconicity is further spelled out along the two dimensions of morphotactic and morphosemantic transparency, to which we will return in §., the preference for biuniqueness is expressed by what Vennemann has called Humboldt's Universal: "Suppletion is undesirable, uniformity of linguistic symbolization is desirable: Both roots and grammatical markers should be unique and constant." By enhancing the paradigmatic recoverability of base and affix, this principle lies at the heart of analogical change (cf. Gaeta for a recent survey). Clearly, the general validity of the principle is subordinated to severe constraints of a system-specific nature (for instance, the type and number of inflectional classes occurring in a language, as discussed in §.. below; or the stratification of the lexicon distinguishing native from non-native morphemes, in which affixes belonging to either stratum select their homogeneous lexical bases, e.g. gardener vs. typist), or of a more general type, as for instance the effect of economy in restricting an excessive degree of form–meaning uniformity (on the relation between economy and naturalness, cf. Wurzel; Gaeta). One of these effects can be seen in so-called lexical blocking, which prevents the formation of a derivative in the presence of an already extant lexeme displaying a similar meaning, as in the well-known case thief vs. *stealer (cf. Wurzel; Gaeta for a recent survey), which can be considered a case of paradigmatic suppletion. It is important to stress that biuniqueness can come into conflict with diagrammaticity insofar as the latter requires a violation of the former when a hyper-characterized plural form such as feets is created in children's or learners' varieties by adding a suffix to an already marked plural (cf. Dressler).
Besides highlighting the dialectic nature of the naturalness parameters, to which I will return in §.., this case also illustrates a general tendency, because diagrammaticity normally seems to prevail over biuniqueness when they come into conflict. This provides the basis for a universal hierarchy among the different parameters shaping naturalness. However, such a hierarchy must be thought of as the sum of preferences rather than as the explicit language-specific ranking of a number of constraints commonly assumed in theoretical models like Optimality Theory (cf. McCarthy; Carstairs-McCarthy; and Donegan for a comparison of the two approaches). Far less significant, although often emphasized as a peculiar trait of natural languages, is the occurrence of the second type of icons, namely the images, whose signans directly reflects the signatum, which in linguistic terms can be understood in connection with the phenomenon of sound symbolism. While it is only marginally exploited in morphology as a whole, sound symbolism is highly relevant for those instances of word-formation characterizing the early stages of acquisition that are labeled as pre- or proto-morphological, especially with regard to the abundant usage of diminutives and hypocoristics (cf. Dressler). This is due to the important role played by evaluative morphology in children's early developmental stages, attuned to the social function of accommodation, which favors a smooth interaction among the members of a speakers' community (cf. Gaeta
for a survey). As the process of language acquisition proceeds, the importance of images is reduced, while the role played by diagrams increases in relevance, because the latter can be used to encode more complex content in correspondence with their more abstract nature as signs. Finally, metaphors, which are a type of icon "representing a parallelism in something else" (Peirce), play a marginal role in morphology, although it has been suggested that typical instantiations of non-iconic coding such as conversions might in fact be interpreted as morphological metaphors (cf. Crocco Galèas). In contrast with the low relevance of images and metaphors in morphological coding, the parameter of indexicality also plays an important role in shaping morphological naturalness. Indexicality, which is connected with Peirce's index, defined as "signs which are rendered such principally by an actual connection with their objects", refers to the formal distance between signans and signatum and suggests that morphological coding implying a lower degree of distance is strongly preferred. Accordingly, because of its proximity to the stem, prefixal and suffixal coding, and more generally concatenative morphology, is cross-linguistically preferred over conversion, while infixation and non-concatenative morphology, which "produce the closest connection" (Dressler) with the stem, are expected to be highly favored by indexicality but stand in an antagonistic relation with diagrammaticity. This conflict is resolved typologically insofar as, for instance, in the Indo-European languages non-concatenative coding like verb ablaut can generally be shown to be recessive and diachronically replaced by pre- or suffixal coding. This, in turn, lends support to the preference for diagrammaticity, while this is apparently not the case in Semitic (I will come back to this point in §.. below).

.. Cognitive endowment and universal parameters of naturalness

Two further parameters shaping morphological naturalness come from general properties underlying our cognitive endowment (cf. Dressler). In particular, the Gestalt principle of figure–ground regulating our perceptive faculty is responsible for the preference for binarity in morphological coding. The effects of binarity are visible in the subordinate role of circumfixation, which apparently only occurs when pre- and suffixation are also present in a language. Moreover, compounds are overwhelmingly to be interpreted in terms of (preferably head–modifier) binary relations, while ternary compounds are strongly limited to some types of coordinative construction such as green-white-red flag (cf. Dressler). The cognitive primacy of the figure–ground relation also supports the preference for an optimal word shape consisting of a bi-syllabic or tri-syllabic foot in which one prosodically prominent syllable, the figure, is followed by one or two unstressed syllables providing the ground (cf. Dressler).
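The optimal word shape preference lends itself to a simple operational check. Below is a minimal sketch (the function name and the hand-supplied syllabification and stress marks are my own assumptions, not part of NM's formal apparatus) testing whether a word forms a bi- or tri-syllabic foot with a single initial prominent syllable:

```python
# Toy check of the "optimal word shape" preference: a bi- or tri-syllabic
# foot in which one prominent syllable (the figure) is followed by one or
# two unstressed syllables (the ground). The syllabification itself is
# assumed as given, not computed.

def is_optimal_word_shape(syllables):
    """syllables: list of (text, stressed) pairs, e.g. [("gar", True), ("den", False)]."""
    if len(syllables) not in (2, 3):
        return False
    stressed_positions = [i for i, (_, stressed) in enumerate(syllables) if stressed]
    # exactly one prominent syllable, and it must come first
    return stressed_positions == [0]

print(is_optimal_word_shape([("gar", True), ("den", False)]))               # garden: prints True
print(is_optimal_word_shape([("ba", False), ("na", True), ("na", False)]))  # banana: prints False
```

A word like banana fails not because of its length (three syllables are still admissible) but because its prominent syllable is not initial, so the figure does not precede the ground.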

.. Conflicting levels of adequacy

The impact of semiotic and, more generally, cognitive principles on morphological systems can be operationalized by means of the handful of preferences discussed above: (i) diagrammaticity
(transparency); (ii) biuniqueness (uniform coding); (iii) indexicality (proximity); (iv) binarity (head–modifier relations); (v) optimal word shape (bi-syllabic foot). This set of preferences, grounded in our cognitive abilities as 'semiotic animals', is meant to explain the strategy followed by morphology as a specific component of natural languages. In this sense, NM, as pointed out by Bauer, "is concerned with providing a partial explanation for patterns of morphological behaviour" inasmuch as it "deals with substantive universals such as the range of possible morphological patterns and the categories that are necessary in morphology". It must be added that natural languages, as (socio-)historically determined entities, generally result from the interaction of five different levels of adequacy which are intrinsically in conflict with each other (cf. Dressler). Accordingly, besides the level (i) of universal preferences sketched so far, which profiles the level of naturalness adequacy, morphological systems are shaped via the level (ii) of typological adequacy, in which Skalička's five language types—fusional, introflectional, agglutinating, polysynthetic, and isolating—are "the particular choice of very natural options from some parameters and of rather unnatural (or marked) options from other parameters of naturalness" (Dressler). In this sense, they function as ideal 'poles of attraction' because they respond to the satisfaction of particular architectural requirements.
For instance, in spite of the low degree of diagrammaticity expressed by weak biuniqueness and morphotactic transparency, the introflecting type displays a number of advantages: the occurrence of short words / word forms approximating the optimal word shape, a high degree of indexicality expressed by fixed morpheme structure conditions in which consonants signal lexical roots and vowels encode morphological information, and finally a strong internal cohesion of the morphological paradigms because of the high degree of uniqueness of the root alternations (cf. Dressler). Thus, in Modern Hebrew even loanwords borrowed from languages with concatenative morphology are integrated into the non-concatenative type, as in the case of Pasteur, which is treated as a quadriconsonantal root /pstr/: pister 'has pasteurized', pistur 'pasteurization', etc. (cf. Dressler). Through the level of typological adequacy, the subsequent level (iii) of language-specific normalcy is reached, which responds to particular principles of system-dependent naturalness (cf. Wurzel). As will be considered in §. below, at this level the relevant language-specific options are those which determine the concrete instantiation of the preferences established at levels (i) and (ii), such as the base format (root, stem, etc.) or the affix type normally preferred in a language. Language-specific normalcy is also shaped via level (iv), relating to the (sociolinguistic) norms developed within a speakers' community (cf. Coseriu), which are subsequently actualized on the level (v) of concrete performance of the speech act. This last level is crucial because it directly influences the universal preferences which are meant to capture the real essence of naturalness for linguistic structure. It is in fact at the level of performance that the dialectic among the several conflicting preferences expressed by the speaker-listeners takes place.
The latter shape the linguistic signal and are at the heart of the variation giving rise to 'internal', that is, grammatically initiated, language change (cf. Wurzel for the distinction between grammatically vs. extragrammatically initiated language change). In sum, with regard to alternative and competing models of morphology, NM is characterized (i) by the conflicting nature of the relations among the components of
language and (ii) by the 'dynamic' nature of its conception. These two aspects are strictly interwoven in the sense that the dynamic nature of the single components leads to their optimization through the instantiation of the specific preferences. At the same time, however, conflicts unavoidably arise as a consequence of the different and partially opposed teleology of the preferences instantiated by the single components, and also within the same component. As a matter of fact, any component can be considered dominant or basic with respect to the others. Therefore, it is quite difficult to imagine a global optimization of linguistic structure resulting in an overall 'natural' linguistic system.

. T   NM

Because of the conflicting nature of the different preferences, the attempt to find out how natural a morphological system can be might at first sight appear illusory. In fact, one major criticism raised against NM focuses on the fact that its "aim is to provide a theory of languages rather than a theory of grammars" (Spencer). This makes it difficult, on the one hand, to figure out how the morphology of a natural language should concretely be shaped: NM "is not too concerned to provide hard-and-fast criteria for distinguishing the different sorts of rule" (Spencer). On the other hand, it is not straightforward to implement concretely the predictions issued by NM in a given morphological system beyond sporadic observations on the effect of the natural preferences for the benefit of a certain class of words or of derivatives. While I will come back to the former observation in §.. below, let us consider in the next section one concrete issue aimed at verifying the reliability of NM as a theory of morphology.

.. Scales of transparency

In this regard, I will discuss what is probably one of the most relevant achievements of NM in the realm of word-formation, namely the role of morphotactic—which will be especially focused on here—and morphosemantic transparency for word-formation rules, especially in its repercussions on productivity, as observed by Bauer:

naturalness increases productivity because only if a morphological process is maximally natural is it maximally analysable and maximally computable. That is, the more natural a morphological process is, the more likely it is that forms using it will be readily understood and will be produced with ease by speakers. (Bauer)

In fact, word-formation is likely to be more liberal than inflection in allowing the universal preferences to act in an unconstrained or less constrained way, since inflection is more likely to be subject to system adequacy, as will be discussed in §.. This is because the main function of word-formation is to contribute to lexical enrichment, that is, to label new concepts and expand our lexicon (cf. Kastovsky), while it only secondarily serves the other function of enhancing text cohesion, which is best performed by inflection, for instance via agreement. On the other
hand, text cohesion is served by word-formation by means of typically transpositional techniques such as nominalization, or by means of compounding (cf. Dressler and Mörth). This may be particularly relevant in specific text sorts produced by homogeneous communities of speakers for which a cohesive text is evaluated as highly positive (cf. Gaeta). The link to text linguistics is thus of paramount importance for understanding the multifunctional relevance of word-formation for the speakers' behavior at the fifth level mentioned in §.., the concrete performance of the speech act (cf. Dressler). As for transparency, constructional iconicity is concretely spelled out with the help of a scale of morphotactic naturalness, which entails that the productivity of a morphological technique decreases as morphotactic opacity increases, involving different degrees of allomorphy due to the intervention of phonological, morpho-phonological, and allomorphic-morphological rules (respectively PRs, MPRs, and AMRs) (Table .). Diagrammaticity, namely the recoverability of the base–suffix relation, is increasingly endangered as the degree of opacity grows as a consequence of the intervention of a phonological rule, such as, for instance, the resyllabification found in exist$ → exis$tence, where a syllable boundary is inserted within the lexical base, as against the more transparent excite$ → excite$ment. Opacity is further increased by the occurrence of morpho-phonological alternations such as the palatalization of electri[k] → electri[s]ity and its more opaque variant involving the disappearance of the triggering segment, conclu[d]e → conclu[ʒ]ion, which are suffix-specific insofar as they do not show up, for instance, in boo[k] → boo[k]ish and Ovi[d] → Ovi[d]ian, and even more by the allomorphic-morphological alternations in which no phonological motivation can be reconstructed, as in dec[a]de → dec[]sion.
These are quite close to the weak suppletion found in the extended stem of child → childr-en, while the last degree of strong suppletion is completely idiosyncratic. The prediction drawn from this scale is that word-formation processes involving alternations placed at the highest degrees of the scale are preferred over those involving alternations placed at lower degrees. As the philosophy of this as well as of other scales and preferences focuses on the speakers' concrete behavior in socio-communicative contexts rather than on system-related inventories of morphological processes or rules, the preference entails the prediction that morphological patterns (of either an inflectional or derivational nature) displaying a lower degree of opacity are expected to be more natural and accordingly more productive, easier to acquire, more resistant to change, and later to be lost than patterns displaying a higher degree of

Table .. Scale of morphotactic naturalness Degree

Operating rules

Examples

I II III IV V VI VII VIII

Intrinsic PRs PRs, e.g. resyllabification Neutralizing PRs, e.g. flapping MPRs without fusion, e.g. velar softening MPRs with fusion AMRs, e.g. Great Vowel Shift Weak suppletion Strong suppletion, e.g. root alternation

excite excite$ + ment exis$t + ence exist ri[ɾ] + er ride electri[s] + ity electric conclu[ʒ]on conclude dec[]sion dec[a]de childr + en child be, am, are, is, was

Source: cf. Dressler a: –.

opacity, because the latter impose a stronger cognitive burden on speakers, at least from the viewpoint of the preference for diagrammaticity. The focus on the processing dimension finds a nice parallel in recent research on probabilistic linguistics, in which predictions on productivity are drawn from the salience of the segmental cluster with reference to its parsability, namely the ease of detection of a morphological boundary on the basis of its phonotactics (cf. Hay). Accordingly, opaque alternations render this process of detection more difficult because of the reductive effect of fusion.
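The intuition behind parsability can be caricatured in a few lines of code. The sketch below is a crude stand-in of my own devising, not NM's or Hay's actual procedure: it uses orthography as a rough proxy, whereas real parsability work operates on phonotactics and transcriptions, and the NM scale classifies the rules involved rather than surface strings. It simply scores how much of a base survives unchanged at the left edge of its derivative:

```python
# Crude orthographic proxy for morphotactic transparency: the larger the
# fraction of the base recoverable as an unchanged prefix of the derivative,
# the easier the base-affix boundary is to detect.

def transparency_score(base, derivative):
    """Return the fraction of `base` that matches `derivative` letter by letter."""
    i = 0
    while i < min(len(base), len(derivative)) and base[i] == derivative[i]:
        i += 1
    return i / len(base)

pairs = [("excite", "excitement"),    # fully transparent
         ("conclude", "conclusion"),  # MPR with fusion
         ("decade", "decision")]      # AMR, no phonological motivation
for base, deriv in pairs:
    print(base, "->", deriv, round(transparency_score(base, deriv), 2))
```

Running this prints 1.0 for excite/excitement, 0.75 for conclude/conclusion, and 0.5 for decade/decision, mirroring the ranking on the scale; cases like electric/electricity show why a phonemic rather than orthographic representation would be needed in earnest, since the spelling hides the [k] → [s] alternation.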

.. Morphotactic transparency and naturalness

To see how the predictions on morphotactic transparency can be concretely implemented within a given morphological system, let us briefly discuss the case of Italian action nouns, which will also highlight the conflict between the first and the third level of adequacy mentioned in §... They present an interesting case of suffix rivalry, because at least three different suffixes displaying similar problems of allomorphy are in competition, namely ‑(z)ione, ‑(t)ura, and ‑mento (cf. Gaeta; Gaeta and Ricca for detailed discussions). Here, the focus will be on the first suffix, ‑(z)ione, although the others display similar properties. In particular, the allomorphy is partly accounted for by a specific base form of the Italian verb, the Verbal Theme (= VT), consisting of the stem plus the thematic vowel, combined with the suffix ‑zione (a). This analysis competes in several cases with another one in which the suffix takes the form ‑ione and the base consists of the Past Participle (= PP), possibly accompanied by an additional rule of affrication changing the expected *fondatione to the actual fondazione, while in other cases only the analysis based on the PP is possible (b). Partly, the allomorphy is completely opaque in Italian insofar as the base form is given by the Latin Past Participle (= LatPP) (c), that is, an additional allomorphy justified only in paradigmatic terms, combined with the suffix ‑ione. Finally, another allomorphic type involves the verbal Stem deprived of the thematic vowel plus the suffix ‑ione (d):

()                               VT                     PP                     LatPP/Stem
 a. fondazione 'foundation'      [[fonda]VT -zione]     [[fondat]PP -ione]
    spedizione 'shipment'        [[spedi]VT -zione]     [[spedit]PP -ione]
 b. assunzione 'employment'      *[[assumi]VT -zione]   [[assunt]PP -ione]
    delusione 'disappointment'   *[[deludi]VT -zione]   [[delus]PP -ione]
 c. adesione 'adhesion'          *[[aderi]VT -zione]    *[[aderit]PP -ione]    [[ades]LatPP -ione]
    emissione 'emission'         *[[emetti]VT -zione]   *[[emess]PP -ione]     [[emiss]LatPP -ione]
 d. recensione 'review'          *[[recensi]VT -zione]  *[[recensit]PP -ione]  [[recens]Stem -ione]
    opzione 'option'             *[[opta]VT -zione]     *[[optat]PP -ione]     [[opt]Stem -ione]
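The competition between the two main segmentations can be made concrete with a toy generator. In this sketch the base forms are supplied by hand and the affrication rule is simplified to a string replacement; both are my own illustrative assumptions, not a claim about the actual rule machinery:

```python
# Toy comparison of the two competing analyses of Italian action nouns in
# -(z)ione: VT + -zione versus PP + -ione (the latter possibly fed by an
# affrication rule turning -tione into -zione).

def candidates(vt, pp_stem):
    """Candidate nouns under each analysis, given a Verbal Theme and a PP stem."""
    return {"VT+zione": vt + "zione", "PP+ione": pp_stem + "ione"}

verbs = [
    # (attested noun, VT, PP stem)
    ("fondazione", "fonda", "fondat"),   # both analyses viable (PP one via affrication)
    ("assunzione", "assumi", "assunt"),  # only the PP analysis viable
]

for noun, vt, pp in verbs:
    matches = [name for name, form in candidates(vt, pp).items()
               if form == noun or form.replace("tione", "zione") == noun]
    print(noun, matches)
```

The first verb matches under both analyses (the PP candidate *fondatione only after affrication applies), while assunzione matches only under the PP analysis, reproducing the asymmetry between the (a) and (b) types above.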

Notice that besides the opaque alternation of a suppletive nature in (c), given by the LatPP allomorphy, which has to be taken into account either way, the other three options, based respectively on the VT, on the PP, or on the Stem, are in principle equally viable candidates to account for the derivational process. Given the scarce productivity of (d), in practice only the first two alternatives have been defended in the literature, respectively by Thornton and Scalise. Comparatively, the analysis based on the PP plus ‑ione accounts for a larger number of cases because it covers both the derivatives in (a) and in (b),
while that based on the VT plus ‑zione is unable to account for the derivatives in (b). For this reason, the analysis based on the PP has often been considered superior in spite of the fact that it requires an additional rule of affrication, which is clearly of a morpho-phonological nature as it is not found before other suffixes: argento 'silver' → argent-iere 'silversmith', Evita (Perón) → evit-iano 'Evita's', etc. In Scalise's view, the maximization of the number of derivatives accounted for by the rule takes priority over its unnaturalness resulting from the MPR of affrication. In contrast with this view, the parameter of morphotactic transparency predicts that the analysis based on the VT should be preferred, because it does not involve any additional opacity after the addition of the consonant-initial suffix ‑zione. This preference should be reflected by a significant difference in terms of the productivity of the derivatives in (a), which are expected to display significantly different properties with regard to the derivatives in (b). Furthermore, the latter are expected to behave similarly to the derivatives found in (c), which are clearly based on weak suppletion, and to the derivatives found in (d), which also partially display affrication. These expectations have been tested on the basis of a large text corpus consisting of about  million tokens extracted from three years of the Italian newspaper La Stampa (–). In order to assess the different alternatives, the results have been mapped onto a simplified version of Dressler's scale containing a first degree of PRs, a second degree of MPRs, a third degree of AMRs, a fourth degree of Suppletion (Suppl), and a fifth degree of further irregular allomorphies. On the left side of Table . the derivatives are evaluated on the basis of the PP‑ione pattern taken as an input, while the right side shows the alternative analysis based on the VT‑zione pattern.
Table . reports the number of tokens (= N), of types (= V), the average frequency of the types (N/V), and the number of hapax legomena (= h) found in the corpus. The last figure has been shown to be a generally reliable estimate of productivity understood as

Table .. Derivatives formed with ‑(z)ione in the La Stampa corpus (–) Base: PP + -ione

Base: VT + -zione

Degree

N

V

(N/V)·2

h

N

V

(N/V)·2

h

I. PRs

, .%

 .%

.

 .%

, .%

 .%

.

 .%

II. MPRs

, .%

 .%

.

 .%

— —

— —



— —

III. AMRs

, .%

 .%

.

 .%

, .%

 .%

.

 .%

IV. Suppl

, .%

 .%

.

 .%

, .%

 .%

.

 .%

V. Others

, .%

 .%

.

 .%

, .%

 .%

.

 .%

.

 .%

,, .%

 .%

.

 .%

Tot.

,,  .% .%

the availability of a certain word-formation rule in a given time span (cf. Baayen; Gaeta and Ricca). As can be gathered from a comparison between the two parts of the table, the analysis based on the PP scores quite badly in terms of diagrammaticity: only a few derivatives are placed at the first degree, while a larger number is found at the second degree because of the affrication rule. Moreover, the third degree basically contains the stem-based formations of (d), while the fourth degree houses the weakly suppletive derivatives based on the LatPP of (c). For these latter groups, similar figures are found in both parts of the table. The first expectation, on the relation between morphotactic transparency and overall productivity, is clearly borne out by the analysis based on the VT, in which a strong correlation is observed between the first degree of the PRs and the high number of h, about  percent of the total derivatives, and of the types, about  percent. In the alternative analysis based on the PP this correlation is completely lost, as only  percent of h and about  percent of the types are found at the first degree. Nonetheless, one might argue that the benefit brought by the larger coverage of the PP‑ione analysis compensates for the reduced degree of morphotactic transparency, because the whole set of derivatives is formed by means of one suffix, ‑ione, while the alternative analysis has to distinguish the derivatives found under the PRs, which select the form ‑zione, from those found under the AMRs and the Suppl, which select the form ‑ione. In this regard, the second expectation is helpful, whereby the derivatives found in (b) are expected to behave similarly to the derivatives found in (c), which are clearly based on weak suppletion, and to the derivatives found in (d), which also partially display affrication.
In fact, a significant difference can be observed on the left side of Table . between the derivatives found at the first degree of the analysis based on the PP (i.e. the type delusione in (b), without affrication) and those found at the second degree (i.e. the types (a) and (b), with affrication). While the latter display a high number of V and h with a comparatively smaller N/V average, the first-degree derivatives are rather similar to those found at the fourth degree (i.e. type (c)), with a low productivity measured in terms of h, a smaller V spread over a higher N value, and accordingly a large N/V average. The N/V relation measures the frequency of the derivatives relative to their number: a high value typically mirrors a scarcely productive word-formation rule which is represented by a restricted set of fairly entrenched types. On the other hand, a low N/V is generally found when a highly productive word-formation rule gives rise to a large number of types which are, however, on average scarcely frequent, that is, not stabilized in the lexicon. The former state of affairs corresponds to what Miloš Dokulil understands by Wortgebildetheit, that is, word-formedness or analyzability (cf. Bauer), and captures a static situation, typically involving lexical material of learned or foreign origin. The latter, however, represents Dokulil's Wortbildung, that is, word-formation stricto sensu, which is responsible for the dynamic part of the lexicon, namely true lexical enrichment. While the analysis based on the PP does not distinguish between the static and the dynamic parts of the rule, the analysis based on the VT nicely expresses this difference: the derivatives found at the third and fourth degrees of the scale in the right part of Table . behave quite similarly in terms of low productivity and high average frequency.
Recall that in this case the third degree gathers together the derivatives of the (b) and (d) types.
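The corpus measures just discussed are straightforward to compute. The following sketch uses toy token lists of my own, not the actual La Stampa data, and derives N, V, N/V, and the hapax count h from a bag of tokens; hapax-based ratios in the spirit of Baayen's productivity measures then fall out directly:

```python
# N = token count, V = type count, N/V = average type frequency,
# h = number of hapax legomena (types occurring exactly once).

from collections import Counter

def profile(tokens):
    freqs = Counter(tokens)
    n = sum(freqs.values())                        # N: tokens
    v = len(freqs)                                 # V: types
    h = sum(1 for f in freqs.values() if f == 1)   # hapax legomena
    return {"N": n, "V": v, "N/V": n / v, "h": h}

# A productive pattern: many types, many hapaxes, low N/V (Wortbildung).
productive = ["apparizione", "premiazione", "rottamazione", "derattizzazione",
              "apparizione", "fondazione"]
# An entrenched, unproductive pattern: few frequent types, high N/V,
# no hapaxes (Wortgebildetheit).
entrenched = ["delusione", "delusione", "delusione", "emissione", "emissione"]

print(profile(productive))   # high h, low N/V
print(profile(entrenched))   # h = 0, high N/V
```

Even on this tiny scale the two frequency profiles diverge in the direction described above: the productive list yields four hapaxes and an N/V of 1.2, the entrenched list no hapaxes and an N/V of 2.5.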

.. The dynamic dimension in morphology and natural language change

This case illustrates the main points of interest of NM: the effects of the universal preferences manifest themselves in the behavioral properties of the derivatives. These are of relevance for our analysis much more than structure-internal considerations such as the maximization of the scope of a rule. The latter runs the risk of projecting a picture which reflects the static aspects of the lexicon without really highlighting its dynamic parts. The dynamic parts also involve predictions relating to the ontogenetic (i.e. language acquisition) and to the phylogenetic dimension (language change). For instance, to stick to the example of ‑zione discussed in §.., the PP is systematically avoided as a derivational base in a certain number of derivatives while the VT is preferred, as for instance in apparire 'to appear' → apparizione 'apparition' (PP apparso, but *apparsione), etc. This means that, while PP-based derivatives such as delusione in (b) basically go back to their Latin ancestors and cannot be said to be formed in Italian, truly Italian coinages only display the VT pattern, such as apparizione, which has been expanded in the diachronic development of Italian for this as well as for the other suffixes in competition with ‑zione, such as ‑tura: cuocere (PP cotto) 'to cook' → cottura / cuocitura 'cooking', rompere (PP rotto) 'to break' → rottura / rompitura 'break', scoprire (PP scoperto) 'to discover' → scopritura / *scopertura 'discovery', etc. In sum, the scale of morphotactic transparency is able to capture the diachronic reallocation of the old rule selecting the PP as a base (originally coming from Latin) to the new VT-based format. The increased naturalness of the system of Italian action nouns is, however, strongly disturbed by the occurrence of a large number of derivatives placed at lower degrees of the scale and sustained by a high token frequency.
Thus, since the gain in terms of morphotactic transparency is counterbalanced by a heavy lexical burden, one cannot regard the resulting system as more natural as a whole. Rather, the dialectic essence of linguistic systems as envisaged by NM suggests that an improvement at a certain point may increase the unnaturalness of other aspects of the system, which may well undergo further changes in an endless cycle. What is at the heart of this perennial cycle is the idea, common to certain functionalist circles (e.g. Vennemann), that language change involves language improvement:

Natural language change always takes a direction such that it seeks to replace grammatical phenomena which are more marked with respect to a markedness parameter Mi by grammatical phenomena which are less marked with respect to the markedness parameter Mi. (Wurzel)

In its essence, this idea has the character of an if/then conditional statement: if an arguably natural change takes place, then it consists in the reduction of markedness (cf. Wurzel). Thus, markedness reduction is the ultimate goal of language change and contributes in a complementary way to the increase of naturalness. In the next section, this idea will be discussed with regard to so-called system-dependent naturalness.



. S- 

As shown by the Italian example discussed in §., universal tendencies as represented by the scale of morphotactic transparency are strictly interwoven with language-specific traits, as required by the base selection properties of Italian action nouns. This latter aspect is captured by the concept of system adequacy (or congruity) developed by Wurzel. Although this is generally meant to deal with inflectional morphology, it clearly has consequences at the word-formation level as well, because of the close relation between the two domains.

.. System adequacy and markedness reduction

System adequacy reflects the fact that a morphological system is organized around its own properties, summarized in a restricted list of parameters consisting of (i) the type and number of the occurring categories (e.g. number and case for nouns, etc.); (ii) the type of morphological markers, classified on the basis of their formal properties (affix type, separate or cumulative exponence, degree of syncretism, relevance of the word-internal articulation in terms of root, stem, etc.); and finally (iii) the presence or absence of inflectional classes (= ICs). The values fixed for certain parameters (e.g. four cases and two numbers for German nouns, VT-based derivation for Italian action nouns, etc.) constitute the system-defining structural properties (= SDSPs) of a given morphological system. The SDSPs emerge inductively from the way morphological meaning is concretely realized by means of morphological forms:

Their status is neither that of grammatical rules (they represent overriding structural features), nor of grammatical universals (they differ from language to language) but rather that of generalizations of the morphological forms and rules of the respective language made by the speakers of a language. (Wurzel : )

In spite of their inductive emergence, the SDSPs are of crucial importance for understanding how morphological systems evolve along the diachronic dimension by exploiting their own internal resources, and in fact they provide the classificatory matrix against which system adequacy is measured. In this regard, the SDSPs act as a system-stabilizing force insofar as they allow us to identify the trend towards uniformity of the morphological system that is revealed by those morphological phenomena which are eliminated because they arguably violate system adequacy. This happens especially when the SDSPs are organized in a non-uniform way, as often turns out to be the case. To give one concrete example, early Old High German (= OHG) displayed two distinct inflectional types for neuters which encoded number in a non-uniform way. In the first type number was encoded in an iconic way by adding a suffix to the word stem, as in faʒ / faʒ-u ‘barrel(s)’, herza / herz-un ‘heart(s)’, lamb / lemb-ir ‘lamb(s)’, which can be summarized by the SDSP1: [Sg. ≠ Pl.]. The second type displayed a


non-iconic zero plural as in wort / wort ‘word(s)’, expressed by the SDSP2: [Sg. = Pl.]. In spite of its non-iconic nature, this latter type covers about three-quarters of all neuters, and therefore the SDSP2 qualifies as the normal or system-adequate variant. Accordingly, we can account for its spread to the rest of the neuters, testified by later forms like faʒ ‘barrels’, herza ‘hearts’ and lamb ‘lambs’, as an improvement of system adequacy at the expense of a non-system-adequate trait. Thus, the SDSP2, which is quantitatively dominant, defines system adequacy, and at the same time allows us to foresee the direction which is likely to be followed by the trend towards uniformity. The improvement of system adequacy brings about a reduction of the markedness of the system insofar as a non-normal trait is eliminated, although markedness reduction has to be understood here in rather different terms relative to the universal perspective adopted in §.. However, system adequacy, as well as its predictive force, cannot be interpreted in deterministic terms. On the one hand, it does not allow us to make strict predictions about the possible elimination of a certain non-system-adequate trait, and in fact the decomposition of non-uniformly structured morphological systems can last for centuries and is often accomplished by means of several alternative solutions (in this regard, cf. Gaeta c on the development of the system of the OHG preterite-presents). On the other hand, system adequacy and its predictive force are not necessarily violated when a non-system-adequate trait emerges in a given morphological constellation, because the latter can be due to a different reason, for instance of a phonological nature.
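The inductive character of an SDSP can be made concrete with a small sketch. The following is my own illustration, not from the chapter; the counts are hypothetical and merely echo the "about three-quarters" proportion reported for the OHG zero plural.

```python
# Toy sketch (not from the chapter): an SDSP is identified inductively
# as the quantitatively dominant variant, which defines system adequacy
# and hence the expected direction of leveling. Counts are hypothetical.

ohg_neuter_plurals = {
    "[Sg. = Pl.] (zero plural, wort-type)": 75,
    "[Sg. ≠ Pl.] (suffixed plural, faz-type)": 25,
}

# The system-adequate variant is simply the majority pattern:
sdsp = max(ohg_neuter_plurals, key=ohg_neuter_plurals.get)
print("system-adequate variant:", sdsp)
```

On this toy view, the dominant zero-plural pattern is predicted to spread, matching the later forms faʒ, herza, and lamb discussed above.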

.. The role of paradigms in markedness reduction

Morphological systems which display a large number of ICs tend to develop an internal organization based on paradigm structure conditions (= PSCs) which allow the speaker to keep the morphological complexity under control:

By establishing implicative relations between inflectional forms of words, paradigm structure conditions not only cover the matching of forms in a uniform paradigm but also fix the different status of its individual forms; they distinguish between the “identifying forms” . . . along with the lexical basic form of a word and the other forms of the paradigm following from it (which is properly the distinction between implying and implied forms). This takes account of the fact that an inflectional paradigm is more than the sum of its forms, that it has a specific internal structure. (Wurzel : )

To understand what it means, concretely, to say that a paradigm is not simply the inventory of its word forms, let us discuss a case in which several ICs are in competition, as with, for example, Latin nouns belonging to the third declension (cf. Wurzel : –). Here, two implicational series can be established on the basis of the IC of the noun puppis ‘afterdeck’, which stands for the nouns whose stem displays the i-vowel (a), and of the IC of the noun rēx ‘king’, which stands for the nouns whose stem ends with a consonant (b):




() a. [-im] Acc.Sg. → [-ī] Abl.Sg. → [-īs] Acc.Pl. → [-ium] Gen.Pl. → { [-is] Gen.Sg., . . . , [-ibus] Dat./Abl.Pl. }
    b. [-um] Gen.Pl. → [-ēs] Acc.Pl. → [-e] Abl.Sg. → [-em] Acc.Sg. → { [-is] Gen.Sg., . . . , [-ibus] Dat./Abl.Pl. }

The logic of the PSCs is that when a speaker hears the word forms puppim (accusative singular) or rēgum (genitive plural), he or she is able to reconstruct respectively the forms puppī (ablative singular) and rēgēs (accusative plural), but not vice versa, and step by step their entire paradigm, which is partially identical insofar as the last step yields the same forms for both ICs. The systemic force of such intra-paradigmatic relations consists in a hierarchical organization of the single word forms, which have a different relevance for the whole inventory. Such different relevance is expressed by the implications and by their effect on the dynamics of the ICs. In fact, puppis and rēx are surrounded by at least three mixed ICs which display inflectional properties going back to either of them. The PSCs in () allow us to describe the three mixed classes as resulting from the hierarchy of implications, in which a certain noun may ‘join in’ at any point:

() a. ignis ‘fire’

Löwe, etc. The disparate set of nouns found in OHG was subsequently reorganized by slowly eliminating the nouns which did not match the extra-morphological properties. This brought about a number of changes of IC coupled with changes in the extra-morphological properties of the nouns. Accordingly, a number of nouns passed to the IC of feminines, such as Glocke ‘bell’, Zunge ‘tongue’, etc., by changing their gender as in the case of bluomo > Blume, fano > Fahne, etc. Furthermore, other nouns changed their extra-morphological properties with the addition of a final -n: bogo > Bogen, mago > Magen and at the same time modified their inflectional features passing to the IC of masculines like Boden ‘ground’, Faden ‘thread’, etc. The latter go back to the OHG class of the a-nouns and actually display s-genitive and umlauted plural: Bodens / Böden, Fadens / Fäden, etc. While in modern German nouns like Bogen and




Magen have passed to this latter IC: Bogens / Bögen, Magens / Mägen, other nouns like Funke have acquired the new properties only partially, insofar as the n-ending and the inflectional properties are not completely matched: the nominative singular alternates between Funke and Funken, while the genitive singular displays the form Funkens but the plural is Funken. Conversely, other masculine nouns which did not originally belong to the boto-class have entered the class of weak masculines because they happened to display the extra-morphological property of ‘animacy’, such as Hirt ‘shepherd’, going back to the OHG ja-class: hirti / hirtes ‘Gen. Sg.’ / hirta ‘Nom. Pl.’, which has developed a nominative singular Hirte and the form Hirten for the genitive singular and the nominative plural, similarly to boto > Bote / Boten. Clearly, such processes of reorganization can last for centuries, and we still observe variation in cases such as Friede / Frieden ‘peace’, Glaube / Glauben ‘belief ’, etc. However, the actual productivity of the weak masculine class is shown by new formations matching the property of animacy, for example Chaot ‘slob’ / Chaoten ‘Gen. Sg. / Nom. Pl.’, and by a pair like Typ1 ‘type’ / Typs ‘Gen. Sg.’ / Typen ‘Nom. Pl.’ vs. Typ2 ‘fellow’ / Typen ‘Gen. Sg. / Nom. Pl.’.
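The implicative logic of PSCs discussed in this section can be sketched in code. The following is an illustrative toy model of my own, not from the chapter: the two Latin implicational chains for puppis and rēx are encoded as ordered lists, and a hypothetical `predict()` helper spells out which forms a speaker can infer from an observed form (implied forms follow from implying ones, but not vice versa).

```python
# Illustrative sketch (my own): PSCs as implicational chains over Latin
# third-declension endings. Cell labels and predict() are invented here.

# Each chain lists (ending, cell) pairs; a form built on an earlier link
# licenses the inference of every later link, but not the reverse.
PSC_PUPPIS = [("-im", "acc.sg"), ("-ī", "abl.sg"), ("-īs", "acc.pl"),
              ("-ium", "gen.pl"), ("-is", "gen.sg"), ("-ibus", "dat/abl.pl")]
PSC_REX    = [("-um", "gen.pl"), ("-ēs", "acc.pl"), ("-e", "abl.sg"),
              ("-em", "acc.sg"), ("-is", "gen.sg"), ("-ibus", "dat/abl.pl")]

def predict(stem, observed_cell, chain):
    """Return the forms implied by observing the form for observed_cell."""
    cells = [cell for _, cell in chain]
    i = cells.index(observed_cell)
    # Only links *after* the observed one are implied (Wurzel's implying
    # vs. implied forms); earlier links remain unpredictable.
    return {cell: stem + ending for ending, cell in chain[i + 1:]}

# Hearing puppim (acc.sg) lets a speaker reconstruct the rest of the chain:
print(predict("pupp", "acc.sg", PSC_PUPPIS))
# Hearing rēgum (gen.pl) likewise implies the accusative plural rēgēs:
print(predict("rēg", "gen.pl", PSC_REX))
```

A form low in the chain, such as the genitive singular, implies only the shared tail of the paradigm, which is why the two ICs converge in the last step.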

.. Contrasting system adequacy and diagrammaticity

In §.. we have seen a couple of examples which apparently contradict the claim that natural change consists of a markedness reduction favoring the universal preference for diagrammaticity, because non-iconic coding, the zero-marking required by the OHG SDSP2 [Sg. = Pl.], replaces iconic coding, namely the additive marking of herza / herz-un. The case of the German weak masculine nouns reveals a similar contradiction insofar as the new form Funken / Funken ‘spark(s)’ reduces the iconic marking of the earlier Funke ‘Sg.’ / Funken ‘Pl.’. In other words, system adequacy is improved at the expense of diagrammaticity, or, to put it in more general terms, system-dependent naturalness takes precedence over universal naturalness. This conclusion has dramatic consequences for the idea that morphology displays a strong semiotic motivation grounded in Peirce’s diagrams, because it implies that, in principle, system adequacy may be increased by systematically expanding non-iconic signs at the expense of iconic marking. And in fact a clear-cut example supporting this embarrassing conclusion can be drawn from Milanese, the dialect spoken in Milan (cf. Salvioni ). Here, the nominal system has developed towards a complex set of at least six different ICs, which are distinguished on the basis of extra-morphological properties such as gender, phonological ending, and animacy (cf. Gaeta  for a detailed discussion) (Table .). Notice that the occurrence of the article helps the speakers identify the gender in the singular. Note also that gender is completely neutralized in the plural, since the articles are identical for all classes and both genders. This is particularly relevant for the IC- which is the largest class and the only one to include masculine and feminine nouns.
In this class, gender—neutralized in the plural as in the other ICs—can be inferred in the singular only thanks to the article, while in the other classes other properties are of help. In the quite small IC- and IC-, plural marking, which is encoded via substitutive markers, allows the speakers to infer the gender because these classes contain only masculine nouns. In the


Table .. Nominal ICs in Milanese IC-

IC-

IC-

[M] [+M] [+M] [+animate]

IC-

IC-

IC-

[+M]

[M] [M] [+animate] /a/#

Extra-morphological properties

[+M]

Singular

el mur la red ‘wall’ ‘net’

el scior ‘mister’

el basin el capel la sciora ‘kiss’ ‘hair’ ‘lady’

la scala ‘staircase’

Plural

i mur

i sciori

i basett

i scal

i red

i capej

i sciori

other three ICs, gender clusters with other properties, namely with animacy in the IC- and the IC-, and with the phonological a-ending in the IC- and the IC-, which contain only feminine nouns. As we have seen in §.., in such a complex cluster of properties the PSCs help the speakers keep the morphological complexity under control by making reference to extra-morphological properties in the form of implicative relations. Only extra-morphologically motivated ICs are predicted to be stable and to expand, acquiring newcomers from other ICs. This is what happened to the IC- which is clearly identified by the following PSC holding that feminine nouns ending with /a/ form their plural by Vowel Deletion (= VD):

()  PSCIC-6:  [Noun, −M, /a/#]  →  [Pl. = VD]

The peculiar character of this IC is due to the effect of a phonological change which deleted all final vowels except /a/ in Milanese: Lat. mūru(m) ‘wall’ / mūrī, scāla(m) ‘staircase’ / scālae > Mil. mur / mur, scala / scal, etc. In spite of the anti-iconic nature of this subtractive plural marking, the IC- has acquired newcomers such as carna / carn ‘meat(s)’ and vesta / vest ‘dress(es)’, which go back to the Latin ancestors carne(m) / carnēs and veste(m) / vestēs and would be expected to appear as *carn / carn and *vest / vest, similarly to red / red of the IC-. Far from being eliminated as in the Franconian German example hond / hon seen in §.., the anti-iconic IC- displaying subtractive marking was expanded by adding a final /a/ to the singular form because of the clear PSCIC-6 in (), which facilitates the retrieval of feminine nouns provided with the corresponding extra-morphological property. Two conclusions can be drawn from this case. First, an arguably natural morphological change intended as a language improvement or markedness reduction can militate in favor of anti-iconic marking, that is, can have an utterly unnatural effect. As observed by Wurzel (: ), “system-independent naturalness can induce morphological change only if this does not contradict system-congruity.” However, this conclusion is tempered by an “ecological” tendency towards the overall sustainability of the system, in which the ICs are preferably anchored to easily detectable PSCs and benefit from generally uniform SDSPs such as, for instance, the combined expression of article and suffix for plurality as in the Milanese example. Thus, system adequacy aims to maximize the lexical




recoverability of the inflectional behavior by means of PSCs, even if this is done at the expense of the universal naturalness represented by Peirce’s diagrams. Although it remains to be understood how far this dialectic tension between system adequacy and universal naturalness can go, this conclusion is quite comforting, because it provides an optimal base on which we can attempt to understand the limits that a morphological system can sustain. In this regard, Baerman, Brown, and Corbett (: ) take up the distinction between system-dependent and universal naturalness insofar as the former is language-specific and typically results from phonological change, while more widespread patterns of syncretism usually reflect “common or universal elements of feature structure” which “are available to all languages”. Second, enhancing the lexical coverage of a PSC through the generalization of its extra-morphological properties yields an important side-effect, namely the reduction of lexical specification. This contributes to the simplification of the morphological system insofar as the scope of a PSC is enlarged, even if in some cases the system can become more complex as a whole. In sum, the conflicting nature of the preferences and of the predictions suggested by NM “should not be seen as an admission of defeat: although much remains to be done, Natural Morphology represents a step forward in its acceptance of interaction between the universal and the language-specific, between morphology and other components of the grammar, and between synchronic morphology and morphological change” (McMahon : ).
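The subtractive plural that defines the Milanese /a/-class (the condition labelled PSCIC-6 above) can be sketched as a one-line operation; this is my own illustration, and the helper name is invented.

```python
# Minimal sketch (my own) of the Milanese PSC for the /a/-final feminine
# class: the plural is formed by Vowel Deletion (VD), an anti-iconic,
# subtractive marking.

def ic6_plural(singular):
    """Apply Vowel Deletion to an /a/-final feminine singular."""
    if not singular.endswith("a"):
        raise ValueError("this PSC covers only /a/-final nouns")
    return singular[:-1]

# scala 'staircase' -> scal; the newcomers carna 'meat' and vesta 'dress'
# acquired a final /a/ precisely so that this PSC could identify them:
for sg in ("scala", "carna", "vesta"):
    print(sg, "->", ic6_plural(sg))
```

The guard clause mirrors the extra-morphological condition of the PSC: nouns outside its scope, like red, simply do not trigger the rule.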

. Conclusion

..................................................................................................................................

NM has attracted the interest of a large number of scholars working mainly, although not exclusively, in Europe (see Dziubalska-Kołaczyk  for a first appraisal). Probably the main point of attraction of the theory has been its high flexibility, that is, its capacity to provide principled answers to conflicting questions within a functionalist understanding of language as a psychological and historical phenomenon. This has also been generally recognized by scholars who do not necessarily subscribe to all the tenets of NM but see in the concept of naturalness a useful term for accommodating “complex chains of causation”. It is useful because it keeps the different factors involved distinct, inasmuch as “[n]aturalness itself is a function of a large number of factors, including transparency” while “frequency is a result of naturalness” (Bauer : ). Moreover, naturalness, taken in all its universal and system-specific aspects, can easily be accommodated to other approaches which are centered more on economy of expression and structure (cf. Nübling ; Carstairs-McCarthy , ) and on markedness relations (cf. Andersen ). On the other hand, the flexibility of NM can also be seen in its attempt to draw attention to phenomena or areas of morphology which have traditionally been considered marginal, such as so-called extra-grammatical morphology (cf. Doleschal and Thornton ), particularly rich in the domain of trade names (cf. Ronneberger-Sibold  for a recent overview), or the large area covered by morpho- and socio-pragmatics (cf. Merlini Barbaresi ; Gaeta c). Finally, the questions and the principled answers provided by Wurzel’s model of system-dependent naturalness still need to be carefully checked against other competing models


such as, for instance, Stump’s () realizational model of inflection, which is at odds with the incremental approach adopted by NM, especially with regard to its predictive force and the diachronic dimension constantly present in the NM perspective (for an attempt on French verbs, cf. Kilani-Schoch and Dressler ). All of this remains a desideratum for future research.

A Thanks are due to Jenny Audring, Francesca Masini, and three anonymous reviewers for helpful comments.

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi

  ......................................................................................................................

               ......................................................................................................................

 . ,  ,   

. Introduction

..................................................................................................................................

The common opposition between ‘word’ and ‘morpheme’ tends to obscure, rather than elucidate, what appear to be two distinct approaches to the study of morphology. While both examine the ‘structure’ of words, they do so from very different perspectives, assuming very different objects of analysis and arriving at very different results. The ‘word-based’ tradition regards morphology as the ‘branch of linguistics which is concerned with the “forms of words” in different uses and constructions’ (Matthews : ): the emphasis is on the ways that internal structures provide whole words with distinctive shapes and how these facilitate the development of patterns of relations between them. The ‘sub-word’ tradition treats morphology in terms of the internal structure of words as morpheme combinations and the nature of the processes responsible for their disassembly and reassembly. The two basic approaches fall within parallel lineages that derive from different branches of ancient Indo-European grammatical traditions, as schematized in Figure .. Morphology from a Graeco-Roman, word and paradigm (WP), perspective can be construed as the study of part–whole organization, with respect simultaneously to the internal and external organization of words. What elements occur as parts of individual words, and how are words themselves organized into larger patterns in morphological systems? This suggests a division of morphology into the study of syntagmatics, which explores the internal organization of words, and paradigmatics, which explores the external organization of relations between words.
The role that syntagmatic variation plays in determining a higher-level paradigmatic classification within a classical WP model is cogently expressed in Matthews ():

In the ancient model the primary insight is not that words can be split into roots and formatives, but that they can be located in paradigms. They are not wholes composed of simple parts, but are themselves the parts within a complex whole. In that way, we discover different kinds of relation, and, perhaps, a different kind of simplicity. (Matthews : )


Figure .. Morphological lineages. (The figure schematizes the lineages named in the text: the Sanskrit Grammarians and the Graeco-Roman Tradition; the Altindische Grammatik and Neoclassical WP; the Neogrammarians, ‘American Structuralism’, and ‘European Structuralism’; Realizational WP; morphemic analysis in lexicalist and non-lexicalist frameworks; and information-theoretic and simulation-based modelling.)

.. The locus of morphological variation

Much of the variation exhibited by morphological systems is manifested by segmental or suprasegmental contrasts within individual words. Yet detailed analyses of morphological systems, inflectional and derivational, reveal principles of organization that apply to more than just linear (or hierarchical) arrangements of ‘recurrent partials’. The analysis of a system also requires the recognition of word-sized (or larger) units as basic objects of morphological inquiry. In some cases, these units are synthetic formations, composed of single contiguous sequences. In other cases, morphological units are periphrastic constructions consisting of multiple free forms. It is only by expanding the scope of morphology to include these kinds of larger units that the principles underlying patterns of relatedness can be discovered. Given the general applicability of a part–whole approach to internal and external dimensions of morphological organization, it is reasonable to ask why descriptivist approaches initially neglected the paradigmatic dimension, and what features of morphological systems might have favoured the persistence of this bias in the subsequent generative tradition.1 Intriguingly, the challenges faced by a purely syntagmatic approach would have been familiar to the descriptivists from their schooling in Latin and Greek. However, the Post-Bloomfieldian goal of developing a ‘science of linguistics’ came to involve not only a rejection of classical models but also a neglect of the patterns that had motivated those models. By the time that Matthews (: ) reminds modern readers of the empirical and methodological challenges that arise in decomposing words in classical Greek and Latin, it is the languages that appear anomalous, not the a priori assumption of a basic agglutinative structure:

Many linguists tend to boggle at such systems. They seem complicated, while agglutinating systems seem so simple.
They may even seem perverse. Why should a language have rules which obscure the identity and functions of its minimal elements?

1 The historical origins of the syntagmatic model are discussed in Hockett (), Matthews (), and Blevins ().




From the perspective of WP models, the examination of part–whole relations in these ‘perverse’ systems is particularly instructive, since they reveal the underlying commonality between ‘complicated’ and ‘simple’ systems. These common features provide a stable basis for models that combine broad empirical coverage with cognitive plausibility. As illustrated in this chapter, fundamental principles of part–whole organization not only scale to apply to languages of varying complexity, but also clarify that the appearance of ‘obscurity’ is a symptom of a limiting idealization.

.. Models and classification

It is customary, following the tradition initiated by Hockett (), to regard models as ‘word-based’ or ‘morpheme-based’. Yet this unit-based classification is misleading: it is ultimately the role and status of words within competing types of analysis that is essential. For present purposes, it is important to distinguish two dimensions of wordhood. The morphosyntactic dimension concerns the ‘grammatical meaning’ or ‘features’ associated with words. The morphotactic dimension concerns the formal constitution of words. One class of approaches is ‘word-based’ in the sense that they treat the features associated with paradigm cells or ‘morphosyntactic representations’ associated with word-sized units as basic ‘units of meaning’ in a grammatical system. A second class of word-based approaches treats wordforms as morphotactically basic and regards roots, stems, and exponents as abstractions that serve to discriminate the distinctive shapes and patterns of word types. The discriminative function of form variation represents a central difference between morphotactically word-based approaches and accounts that treat sub-word units as basic. In models that are word-based, morphotactically and morphosyntactically, words function as a primary locus of grammatical meaning, as a maximally stable unit of form, and as basic units of larger paradigmatic structures. In less consistently word-based accounts, wordforms and paradigmatic structures tend to be viewed as derivative or ‘epiphenomenal’. However, this assessment largely makes a virtue of necessity, given the difficulties that arise in reconstructing wordforms and paradigmatic systems from inventories of genuinely independent sub-word units. Although a distinction between word-based and ‘morpheme-based’ models imposes a fairly crude split, it suffices to classify the main contemporary approaches to morphological analysis.
A first cut can contrast a class of word-based approaches with accounts that only recognize morphemes as morphological units of analysis. The most consistently word-based approaches are classical word and paradigm models of the type familiar from the description of classical languages. In these models, the word is the smallest grammatically meaningful unit and surface words are also the basic elements of form, since there is no unit recognized between words and sounds. The realizational WP tradition that grew out of the work of Matthews (, ) is somewhat less uniformly word-based. This tradition includes, among other approaches, the Extended WP model (Anderson )/A-Morphous Morphology (Anderson ), Paradigm Function Morphology (Stump ; Bonami and Stump ), Network Morphology (Corbett and Fraser ; Brown and Hippisley ), and the family of allied realization-based and lexeme-based approaches (Zwicky a; Aronoff ; Beard ). These approaches all preserve the morphosyntactic assumption that (grammatical) words are the primary locus of grammatical meaning.


But their morphotactic assumptions align more with those of morphemic models, in which surface wordforms are assembled from more basic elements. Hence, they can be classified as morphosyntactically word-based but morphotactically constructive (Blevins ), in that they assume a model of the lexicon in which open-class items are represented by stems (and/or roots), and surface wordforms are constructed by applying rules that successively modify lexical stems. The focus of these models is on building individual words, not identifying patterns that distinguish or relate words to one another. In modern realizational approaches, analysis is an interpretive process, in which ‘bundles’ of distinctive features are ‘spelled out’ by realization rules that map features onto forms. The organization of these approaches is elaborated in detail in the works cited above as well as in Montermini (Chapter  this volume) and Stump (Chapter  this volume). In these models, the paradigm cells or morphosyntactic representations that represent the properties of words are basic, but the wordforms that realize those properties are not only derived but also non-persistent. In no realizational model is the output of all possible realization processes cached in any kind of full-word lexicon.
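The interpretive, rule-driven architecture just described can be caricatured in a few lines. This is a deliberately minimal sketch under my own assumptions (toy English-like features and rules), not the formalism of any particular framework: a feature bundle is 'spelled out' by ordered rules that successively modify a lexical stem.

```python
# Hedged sketch (my own) of a realizational analysis: a bundle of
# morphosyntactic features is 'spelled out' by ordered realization
# rules that successively modify a lexical stem.

RULES = [
    # (condition on the feature bundle, operation on the current form)
    (lambda f: f.get("tense") == "past", lambda form: form + "ed"),
    (lambda f: f.get("num") == "pl",    lambda form: form + "s"),
]

def realize(stem, features):
    form = stem
    for applies, operation in RULES:
        if applies(features):
            form = operation(form)
    return form

print(realize("walk", {"tense": "past"}))  # -> walked
print(realize("cat",  {"num": "pl"}))      # -> cats
```

Note that the surface wordform exists only as the transient output of `realize()`; nothing in the sketch stores full words, mirroring the observation that realized forms are non-persistent.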
The remainder of this chapter clarifies how word-internal variation determines the paradigmatic structure of WP models, in their classical and pedagogical formulations, as well as in more recently formalized variants.

. Words and paradigms

..................................................................................................................................

Reflecting the pedagogical origins of classical models, the pivotal role of words and paradigms in these models derives from considerations of morphosyntactic and morphotactic stability and predictive reliability. As units of form and exponents of grammatical properties, word-sized units are more stable and consistent than the sub-word units that are isolated in models of morphemic analysis.2 The closed and uniform feature space defined by inflectional paradigms likewise sanctions highly reliable deductions about the existence and shape of other words in a paradigm. The stable grammatical information associated with a word not only serves to identify its own meaning and function, but also locates it within an inflectional paradigm or derivational ‘morphological family’ as well as within the larger morphological system. In this way, the information associated with a wordform facilitates deductions about other forms, based on systematic interdependencies in a language.

2 Traditional philological arguments for the stability and informativeness of words are reinforced by compression-based measures of the locus of regularities within a language (Geertzen, Blevins, and Milin ).




Aspects of form that sanction implications express a type of information that can be modelled by notions developed within Information Theory (Shannon ). In the case of an inflectional paradigm, entropy or uncertainty corresponds to the form variation exhibited by paradigm cells. The informativeness of a given form correlates with the degree to which knowledge of that form reduces uncertainty about the shape of other words within the same paradigm. A notion of uncertainty reduction is clearly implicit in the traditional use of ‘diagnostic’ principal parts, where wordforms or sets of wordforms are identified as reliable guides to the shapes of other wordforms. The utility of principal parts rests on their value in reducing uncertainty about other forms. In the idealized pedagogical case, principal parts would reduce all uncertainty, although the complete elimination of uncertainty is neither feasible nor necessary in actual language learning or use. The split between ‘predictive’ principal parts and ‘predicted’ forms also reflects the pedagogical goals of a classical grammar. Nearly all wordforms are informative to some degree about the shape of other wordforms. The more closely related forms are, the more mutually informative they tend to be, ranging from members of an inflectional paradigm, through inflection classes and derivational paradigms, outward to morphological systems and language families. Measures of interpredictability are largely restricted to word-based models, given that predictability cannot be reduced to ‘derivability’ or to any other relation defined over sub-word units by morphemic or other types of constructive approaches. These measures receive a measure of external validation from their contribution, in conjunction with discriminative learning models, to explanations of the learnability of complex morphological systems.
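The entropy-based notion of informativeness can be illustrated with toy numbers of my own (not from the chapter): knowing one cell's exponent lowers the conditional entropy of another cell.

```python
# Hedged illustration (my own toy data): the informativeness of a
# paradigm cell measured as the reduction in Shannon entropy
# (uncertainty) about another cell's exponent.
from collections import Counter
from math import log2

# Toy lexicon: (plural exponent, genitive exponent) per noun, weighted
# by how many hypothetical nouns follow each pattern.
lexicon = [("-en", "-s")] * 60 + [("-e", "-s")] * 30 + [("-en", "-en")] * 10

def entropy(outcomes):
    counts = Counter(outcomes)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

# Uncertainty about the genitive in isolation:
h_gen = entropy(gen for _, gen in lexicon)

# Conditional entropy of the genitive once the plural is known:
h_gen_given_pl = sum(
    len(group) / len(lexicon) * entropy(g for _, g in group)
    for pl in {p for p, _ in lexicon}
    for group in [[pair for pair in lexicon if pair[0] == pl]]
)

print(f"H(gen) = {h_gen:.3f} bits, H(gen | pl) = {h_gen_given_pl:.3f} bits")
```

With these invented counts the plural is a modestly diagnostic 'principal part': conditioning on it lowers the uncertainty about the genitive, and a maximally diagnostic form would drive the conditional entropy to zero.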

.. Two models of grammatical description

In what, following Hockett (), have come to be known as Item-and-Arrangement (IA) models, the task of morphological analysis involves identifying the constitutive pieces of complex words and the nature of the reassembly processes that reconstitute the whole from its parts. In Hockett’s terms, ‘morphology includes the stock of segmental morphemes, and the ways in which words are built out of them’ (Hockett : ). Words, accordingly, are concatenations of individually meaningful sub-word units, whose distribution is regulated by rules specifying their linear and hierarchical arrangement (with ‘morphophonemic’ contextual adjustments as needed). Yet as observed in Jackendoff () and Bochner (), any gains in descriptive economy achieved by morphological decomposition are undermined by, among other factors, the redundancy between the rules proposed to build words and the limitations of these rules in constructing various types of complex words.3

In a second variant of morphemic analysis, termed the ‘Item-and-Process’ (IP) model by Hockett (), the meanings of words are not associated directly with units of form, but instead with the processes that apply to build complex words from lexical stems. As with IA approaches, IP approaches are centrally concerned with building words from simpler components—in this case stems and processes—guided by the goal of reducing or avoiding redundancy in linguistic representations. Surface words and their organization into paradigms again have no direct status in the morphological system, and questions concerning their relevance do not even arise.

Both models are concerned exclusively with the assembly and disassembly of individual wordforms in isolation, so relations between words fall outside their descriptive scope. The word is not regarded as a primary object of morphological analysis but functions instead as the ‘output’ of processes that assemble forms from their constitutive pieces, or as the ‘input’ to processes that disassemble forms into their ultimate constituents. From this perspective, words are only of interest to the extent that they provide clues for identifying minimal elements and operations. The idealization that individual words can be analysed wholly in isolation precludes the investigation of relations between words, and of principles that organize words into networks of related forms. Given these severe limitations, words and paradigms can only occupy an ‘epiphenomenal’ status within models of morphemic analysis.

On these ‘morpheme-based’ conceptions, morphological analysis encompasses a limited stock of research questions. Analysis involves the search for minimal, individually meaningful units of form, along with the batteries of rules that determine their shape and distribution in larger expressions. Following Bloomfield (: ), this enterprise is guided by a priori notions of ‘scientific compactness’ and avoidance of segmental redundancy. Among the questions that fall outside the scope of this perspective are those that concern relations between surface words and the organization of words into larger collections within a grammatical system.

3 See Jackendoff and Audring (Chapter  this volume) for further discussion of these limitations, and Jackendoff (), Aronoff (), Bochner (), and Barr () for metrics that measure the cost of morphological relations rather than the degree of (presumed) segmental redundancy. This perspective is briefly explored in §. below in relation to discriminative Information-Theoretic WP models.

.. In defence of WP

The limitations of morphemic models were already clear by the publication of Hockett (), and Hockett () provides a first-hand account of the origin, development, and retrenchment of these models. Although Hockett was familiar with the classical tradition (and was responsible for the moniker ‘Word and Paradigm’), Robins () was the first to take up the challenge implicit in Hockett’s (: ) assertion that ‘WP deserves the same consideration here given to IP and IA.’ Recognizing that ‘[t]he discussion can be carried no further until the word-and-paradigm approach has been characterized at least as clearly as current versions of morphemics’, Matthews (: ) addressed the task of formalizing the classical WP model. Although many of the technical details of this initial model have since been revised, the conception of analysis outlined in Matthews () remains largely intact:

    The word is its central unit, and the grammatical words (the Vocative Singular of BRUTUS, for example) are minimal elements in the study of syntax. At the same time, the intersecting categories form the framework or matrix within which the paradigm of a lexeme can be set out. If a schoolboy is asked to recite the paradigm of MENSA or the paradigm of AMO he will deliver the sets of word-forms (mensa, mensa, mensam . . . ; amo, amas, amat . . . ) in an order which explicitly or implicitly addresses their assignment to the individual Cases, Persons, and so on. (Matthews : )


Although developed to describe the complex patterns of exponence in classical languages, the motivation for a paradigmatic treatment of inflectional patterns is often compelling in much simpler systems. In an early generative sketch of German morphology, roughly contemporary with Matthews (), Chomsky () briefly summarizes the complications that arise in representing declensional patterns in terms of discrete case, number, and gender morphemes, and concludes:

    I know of no compensating advantage for the modern descriptive reanalysis of traditional paradigmatic formulations in terms of morpheme sequences. This seems, therefore, to be an ill-advised theoretical innovation. (Chomsky : )

Two of the problems that Chomsky identifies with morpheme-based models concern (i) the reliance on unrealized (‘null’) morphological expressions, and (ii) the need to impose a fixed order on ‘sequences’ of realized and unrealized elements. These issues are illustrated by the contrasts exhibited by the forms of the Mordvin noun KAL ‘fish’ in Table .. There are no overt markers for nominative or singular in the Mordvin indefinite declension. Consequently, within a strict morphemic model, one zero marker would be required for nominative case and another for singular number, and an order would need to be imposed on these elements. In principle, the two zero markers could occur in either order. The nominative plural definite form kal-t-ne identifies only one order of plural markers, with the basic plural marker ‑t preceding the definite plural marker ‑ne. However, the elative forms indicate that this order is not invariant, and that there is also no invariant order between case and number markers. Both of the possible orders occur, with the variation conditioned by number. In the singular, the definite marker ‑ńt́ follows the case marker ‑sto; in the plural, both the basic plural marker ‑t and the definite plural marker ‑ne precede the case marker ‑ste. Hence even in the case of overt markers, it is not possible to establish an invariant order. Introducing ‘unrealized’ markers can only make this recalcitrant problem even more intractable. The postulation of abstract hierarchical structure likewise expands the space of unobservable variation.

Table .. Partial paradigm of Mordvin KAL ‘fish’

                Indf Sg    Indf Pl    Def Sg        Def Pl
  Nominative    kal        kal-t      kal-oś        kal-t-ne
  Elative       kal-sto    kal-sto    kal-sto-ńt́    kal-t-ne-ste

Source: Rueter .

As reflected in Chomsky’s characterization of morphemic analysis as ‘an ill-advised theoretical innovation’, the challenges presented by morpheme ordering are self-inflicted problems. Like the other classes of problems discussed from the outset of the morphemic tradition (Harris ; Hockett ), these problems lack analogues in other morphological models, and have no clear relevance to morphological acquisition or use. Speakers must be able to discriminate the forms of a system and determine their function within the system. But there is no established need to impose a consistent ordering on the contrasts abstracted from those forms, let alone to extend an ordering to ‘unrealized’ markers. WP models do not offer a solution to the problem of morpheme ordering; instead they impose analyses that avoid creating this kind of problem in the first place.

In the Erzya Mordvin patterns in Table ., which obtain across all fifteen cases, the alternation does not depend on specific case values. An instructive contrast is found in Udmurt (Kel’makov and Hännikäinen ), in which order is sensitive to the choice of markers. Table . sets out the basic sets of case markers and the associated morphotactic patterns, and examples of each pattern are presented in (). In each pattern a stem precedes a possessive marker and a member of a specified set of case markers.

Table .. Case sets and patterns in Udmurt

  C1: Abessive/Caritive, Ablative, Adverbial, Approximative, Dative, Genitive
  C2: Egressive, Elative, Inessive, Illative, Instrumental, Prolative, Translative
  C3: Terminative

  P1: Stem–Person/Number Marker–C1
  P2: Stem–C2–Person/Number Marker
  P3: Stem–Person/Number Marker–C3 or Stem–C3–Person/Number Marker

()  Variable marker order in Udmurt
    a. P1: pi-ed-lị
           boy-2SG-DAT
           ‘to your boy’
    b. P2: pi-en-id
           boy-INS-2SG
           ‘with your boy’
    c. P3 (where the Person/Number Marker is 2SG):
           busi-jed-oź ~ busi-ioź-ad
           field-2SG-TERM ~ field-TERM-2SG
           ‘up to your field’

This type of data, together with recent research on variable affix orders (Luutonen ; Bickel et al. ; Plag and Baayen ), casts substantial doubt on the general applicability of approaches that describe the organization of morphological systems in terms of arrangements of minimal meaningful elements. Insofar as order is salient, it will tend to serve a discriminative function, distinguishing words or larger units that play a communicative role in a language. However, this function does not depend on basic (let alone universal) orderings of inflectional markers or other formatives. As with the challenges that confound attempts to impose order on ‘unrealized’ elements, the difficulties that confront an order-based model arise immediately and point to a basic deficiency in the underlying conception of morphological analysis.
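The morphotactic generalization in the Udmurt table can be rendered as a small lookup. The case inventories are abbreviated to a few members per set, and the function name and the ‘Px’ label for the person/number marker are illustrative choices, not an established analysis.

```python
# Sketch of the three Udmurt ordering patterns described above.
# The case sets are abbreviated; only the logic of the table is kept.
C1 = {"dative", "genitive"}        # pattern P1 cases (abbreviated)
C2 = {"instrumental", "elative"}   # pattern P2 cases (abbreviated)
C3 = {"terminative"}               # pattern P3 case

def marker_orders(case):
    """Admissible orderings of the possessive (Px) and case markers."""
    if case in C1:
        return [("Px", "case")]                   # Stem-Px-Case
    if case in C2:
        return [("case", "Px")]                   # Stem-Case-Px
    if case in C3:
        return [("Px", "case"), ("case", "Px")]   # both orders occur
    raise ValueError(f"unknown case: {case}")

# pi-ed-li 'to your boy' (P1) vs. pi-en-id 'with your boy' (P2):
print(marker_orders("dative"))        # one fixed order
print(marker_orders("terminative"))   # two admissible orders
```

The point of the sketch is that the ordering is keyed to the choice of case marker itself, not to any general ordering of case and possession: no single arrangement statement covers all three sets.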

To some extent, discussions of element ordering underestimate the scale of the challenge, by presupposing the availability of general solutions to the problem of isolating minimum ‘units of form’ and specifying minimum ‘units of meaning’. The same is true of debates about the status of ‘biunique’ form–meaning relations. In both cases, the Segmentation Problem (Lounsbury ; Spencer ) must be addressed in order for the arrangement and interpretation of segments to be determined. The atomization of ‘meaning’ raises analogous challenges, although these tend to be obscured by the use of disparate classes of ‘features’ as proxies for meaning. In many cases, these challenges can be met or at least attenuated by shifting the focus to larger units. To rephrase Robins () and Matthews () in more contemporary terms, identifying the shape and specifying the grammatical meaning of an inflected word involves less uncertainty in general than identifying the shape and specifying the meaning of the component formatives. As has been observed since at least Bloomfield (b), an analysis in terms of word-sized units does not eliminate uncertainty. Nevertheless, a variety of factors support the traditional view that words enjoy a stronger claim to psychological reality than their component parts.4

.. Periphrastic expression

Word-based models exhibit two principal divergences between basic morphosyntactic and morphotactic categories. As mentioned in §.., a model may regard words (corresponding to paradigm cells or morphosyntactic representations) as grammatically basic but treat stems and formatives as morphotactically basic. Conversely, a model may again treat grammatical words as basic but recognize morphotactic units that consist of multiple free forms. The ‘compound tense’ analysis of periphrastic verb constructions provides one familiar case (Curme , ). Nominal case and number inflection in Tundra Nenets illustrates a further pattern (Figure .). In Tundra Nenets, the three ‘grammatical’ cases (nominative, accusative, and genitive) are normally distinguished from the remaining four ‘local’ cases. A property common to the grammatical cases is that they are realized synthetically in all three numbers, whereas the local cases exhibit number-sensitive variation. Singular and plural forms of local cases are realized synthetically. However, local dual forms consist of an invariant dual form and an appropriately case-inflected postposition.

                Sg        Dual            Pl
  Nominative    ti        texәh           tiq
  Accusative    tim       texәh           ti
  Genitive      tih       texәh           tiq
  Dative        tenәh     texәh n’ah      texәq
  Locative      texәna    texәh n’ana     texәqna
  Ablative      texәd     texәh n’ad      texәt
  Prolative     tewna     texәh n’amna    teqm

Figure .. Synthetic and periphrastic exponence in Tundra Nenets

As Ackerman and Stump () observe, the variation between synthetic and periphrastic realization in Tundra Nenets parallels a pattern observed in languages with only synthetic realizations. Specifically, the morphosyntactic markedness of case and number combinations tends to be reflected in the formal markedness of their synthetic encodings. A similar correspondence obtains in Tundra Nenets nominal paradigms, where the most marked feature combination, that is, local case and dual number, receives the most marked realization, that is, periphrasis. Insofar as these types of generalization are not reducible to independent factors (such as time of grammaticalization), they reinforce the functional parallels between synthetic and periphrastic formations in morphological systems, and support a view of morphological analysis that includes both synthetic and periphrastic modes of expression. This conception leads in turn to a reconsideration of the demarcation of morphology and syntax, echoing the deliberations of Matthews () at the outset of the WP revival:

    The history of morphology since the s has led to a progressively complex and non-patent relationship between the elements of grammar, on the one hand, and their phonological realization on the other: Is there any reason why the domain in which an element may be realized should be kept within traditional limits? (Matthews : )

4 See Blevins (: §) and Geertzen, Blevins, and Milin () for recent summaries of some relevant evidence, Dixon and Aikhenvald (a) and Haspelmath () for more sceptical perspectives, and Arkadiev and Klamer (Chapter  this volume) for a typological overview of the notion of the word.

Word and Paradigm models are usefully agnostic on this point. WP models do not require that paradigm cells (or morphosyntactic representations) be expressed by single synthetic expressions. As in traditional descriptions, multi-word morphological expressions can be associated with paradigm cells and integrated within a patterned morphological organization that includes synthetic expressions.5

.. Parts and wholes

From a Word and Paradigm perspective, there is no reason to expect morphosyntactic and derivational properties to be expressed by units of the same granularity in all languages. The functional load of a morphological system may, in principle, be distributed across units of varying sizes in different languages, depending on other properties of the languages. In the case of the familiar (mainly Indo-European) languages that have had the greatest influence on contemporary models, it is generally acknowledged that words are more useful than sub-word units for practical descriptions, including teaching grammars, dictionaries, and reference grammars. Even Bloomfield (: ) concedes that ‘[f]or the purposes of ordinary life, the word is the smallest unit of speech.’ For proponents of traditional WP models, the practical benefits of word-based descriptions also carry over to the use of words for ‘the systematic study of language’ (Bloomfield : ). This reflects the belief that the same properties that make words useful for practical descriptions, notably a stable relation between forms and morphological classes and grammatical meaning, are of equal value to theoretical accounts. The word is, as Robins (: ) emphasizes, ‘a grammatical abstraction’ from the speech stream, but it is a maximally useful abstraction, whether for abstract analysis or for more practical purposes:

    The word is a more stable and solid focus of grammatical relations than the component morpheme by itself. Put another way, grammatical statements are abstractions, but they are more profitably abstracted from words as wholes than from individual morphemes. (Robins : )

5 Compelling evidence of the centrality of paradigmatic relations to an understanding of the morphological character of periphrastic expressions can be found in the Algonquian studies of Goddard () and LeSourd () and in the cross-linguistic account of Bonami (). A considerable body of recent work identifies criteria for classifying particular multi-word expressions as morphological, and for distinguishing them from syntactic phrases; see Ackerman (); Ackerman and LeSourd (); Börjars, Vincent, and Chapman (); Ackerman and Webelhuth (); Ackerman and Stump (); Brown et al. (); Bonami (), among others. Parallel arguments for morphological periphrasis in the derivational domain are given in Ackerman (); Masini (); Booij (a); Masini and Benigni ().

From the perspective of a WP model, nothing precludes the possibility that simple units of grammatical meaning might stand in correspondence to minimal units of form in some languages. An agglutinating pattern could arise in a language whose morphotactic organization preserved the structure determined by successive waves of morphologization. However, there is no motivation within the WP tradition to regard this pattern as normative, given that the tradition was initially developed to describe languages in which ‘categories and formatives are in nothing like a one-to-one relation’ (Matthews : ). For a familiar example, we can turn to the exemplary Classical Greek verb elelýkete ‘you had unfastened’ in Figure .. The full wordform elelýkete stands in a biunique relation to the second person plural past perfective indicative active cell in the paradigm of lyo ‘unfasten’. But as Matthews (: ) observes, the realization of aspect and voice in Classical Greek verbs confounds any attempt to establish biunique property–formative relations.

Figure .. Morphological analysis of Greek elelýkete: the formatives e-le-lý-k-e-te stand in a many-to-many relation to the properties past, perfective, past indicative, 2nd plural, and active. Source: Matthews (: ).

.. Gestalt exponence

Even more acute difficulties arise in associating properties with formatives in cases of what has been termed ‘gestalt exponence’ in Ackerman, Blevins, and Malouf (). The observation that the properties of words are not in general reducible to the properties of their parts applies not only to the properties of individual words but also to relations between words. Just as the grammatical meaning of a word cannot always be broken down into discrete units of meaning that are assigned to sub-word formatives, relations between the elements of a paradigm cannot invariably be reconstructed from relations between their parts. Irreducibly word-level properties bring out a basic asymmetry between wholes and parts. The formatives that make up a word may uniquely identify the place that the word occupies in its inflectional paradigm or, more broadly, within the morphological system of a language. But if the same formatives occur in different combinations in other forms, as is often the case, it is not possible to associate discrete meanings with the individual formatives. Much the same is true of implicational relations. In most cases where a stem or exponent is of predictive value, that value is preserved by a word containing the stem or exponent. But in cases where the predictive value of a word is keyed to the absence of an element, or to a distinctive combination of elements, the predictive value of the whole is lost when it is disassembled into parts: combinations may be distinctive where their individual parts are not, and the meaning of a whole word is often more than the sum of the meanings of its parts.

A simple example will help to clarify how combinations of elements can have distinctive meanings and predictive values within a language. The first four data columns in Table . contain the singular grammatical case forms of a group of nouns that exhibit productive ‘weakening’ gradation in Estonian (in which nominative forms are based on a ‘strong’ stem, marked here by the double consonant ‑kk, contrasting with genitive forms based on a ‘weak’ stem in ‑k). Each row of forms exhibits two dimensions of variation: the choice of a strong or weak stem, and the presence or absence of one of the lexically specified ‘theme vowels’ a, e, i, and u.

Now consider the locus of the property ‘partitive singular’. The partitive singulars of this class contain two ‘recurrent partials’: a strong stem and a theme vowel. Thus sukka can be analysed as sukk + a, kukke as kukk + e, pukki as pukk + i, and lukku as lukk + u. But partitive case cannot be associated either with strong stems or with theme vowels in isolation. The strong stems sukk, kukk, pukk, and lukk cannot be analysed as partitive, because these same stems realize the nominative singular when they occur without a theme vowel, and also underlie the second ‘short’ illative singular forms.
Table .. Singular nouns in Estonian

                ‘stocking’   ‘rooster’   ‘trestle’   ‘lock’   ‘tale’
  Nominative    sukk         kukk        pukk        lukk     lugu
  Genitive      suka         kuke        puki        luku     loo
  Partitive     sukka        kukke       pukki       lukku    lugu
  Illative 2    sukka        kukke       pukki       lukku    lukku

Source: Erelt, Erelt, and Ross ; Blevins .

Partitive case also cannot be associated with the theme vowels, because the same vowels occur in the genitive and illative singular forms. Hence partitive case is an irreducibly word-level feature, realized by the combination of a strong stem and a theme vowel. This type of ‘gestalt’ or ‘constructional’ exponence (Booij a) is difficult to describe if stems and theme vowels are represented separately. Because the grammatical meanings associated with strong stems and theme vowels are context-dependent, these elements cannot be assigned discrete meanings that ‘add up’ to partitive singular when they are combined. From a traditional perspective, this context-dependence underscores the difference between ‘analysability’ with respect to word-internal structure and claims about the morphemic status of word-internal structure. An individual wordform is often analysable into parts that recur elsewhere in its inflectional paradigm or in the morphological system at large. But these parts may function solely to differentiate larger forms, so that the minimal parts that distinguish a pair of wordforms cannot be associated with the difference in grammatical meaning between those wordforms. To return to the patterns in Table ., the theme vowel ‑u distinguishes the partitive singular lukku ‘lock’ from the nominative singular lukk. In isolation, however, the vowel ‑u neither realizes a specific case value nor expresses ‘the grammatical difference’ between nominative and partitive. Exactly the same is true of the grade contrast between the partitive singular lukku and the genitive singular luku. Furthermore, the implicational value of a form often depends essentially on the paradigm cell—or, more generally, the grammatical properties—that it realizes. A strong partitive singular such as lukku identifies LUKK ‘lock’ as a first declension noun and permits the deduction of the other forms in its paradigm (Blevins ). However, in isolation the strong vowel-final lukku provides limited information, because in the paradigm of a noun such as LUGU ‘tale’ in Table . it may only realize the short illative singular, and is thus dissociated from the other forms of the noun.

These examples illustrate some of the fundamental limitations of stems qua forms. While it is often possible to identify stems from the wordforms that they realize or underlie, the separation of stems from exponents raises recalcitrant problems in many languages. As Spencer () notes, this is a familiar issue in the Romance languages, where motivating a particular segmentation among several alternatives is a perennial problem. From a WP perspective, this type of challenge is an artefact of a flawed method of analysis. Determinate segmentations can arise and remain stable within specific languages. However, from the standpoint of an implicational WP model, it is unsurprising that different, and possibly overlapping, sequences may be of value in predicting different patterns. A stem and theme vowel may be useful for identifying the lexical class of an item, whereas the vowel and inflection—or even a stem consonant, vowel, and inflection—may predict patterns of inflection. This gestalt-based conception of word structure is implicit in the rules and correspondences of classical grammars, and is more explicit in the (proportional) analogies developed by Neogrammarians such as Paul (). As Morpurgo Davies (: f.) remarks, much of the initial appeal of analogy derived precisely from the fact that ‘it offered an algorithm for a structurally based form of morphological segmentation, without making any claims about the segments in question.’
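The distributional point made with the Estonian forms can be checked mechanically. The tuples below restate the forms of LUKK and their properties as described above; the helper function is an illustrative construct, not a claim about how such systems should be implemented.

```python
# Forms of Estonian LUKK 'lock' from the discussion above, described by
# stem grade and theme vowel (None = no theme vowel present).
FORMS = [
    ("lukk",  "strong", None, "nom.sg"),
    ("luku",  "weak",   "u",  "gen.sg"),
    ("lukku", "strong", "u",  "part.sg"),
    ("lukku", "strong", "u",  "ill2.sg"),   # 'short' illative singular
]

def cases_with(grade=None, theme=None):
    """Cases compatible with a given stem grade and/or theme vowel."""
    return {case for _, g, t, case in FORMS
            if (grade is None or g == grade) and (theme is None or t == theme)}

# Neither the strong grade nor the theme vowel identifies the partitive
# in isolation; even their combination is shared with the short illative.
print(cases_with(grade="strong"))             # three compatible cases
print(cases_with(theme="u"))                  # three compatible cases
print(cases_with(grade="strong", theme="u"))  # partitive and short illative
```

Each query returns a set of compatible cases rather than a single case, which is the gestalt point in miniature: the case value is a property of the whole form in its paradigmatic context, not of the stem grade or theme vowel taken separately.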

. T ‘  ’ 

.................................................................................................................................. As the moniker ‘word and paradigm’ suggests, WP approaches assign a special status to words, and attach grammatical significance to inflectional paradigms and other collections of words. However, this designation is somewhat misleading; models should in fact have been ‘item and pattern’, where comparison of the item against the pattern sanctions the deduction of new forms. Reclassifying traditional WP approaches as specific instantiations of a general ‘item and pattern’ model is of more than purely historical interest. This characterization highlights the fact that the model is defined less by the units it recognizes than by the relations it establishes between units. Instead of disassembling a language into inventories of ‘atoms’ that can be combined to build larger units, WP analyses focus on the


implicational structure defined over networks of interrelated elements. The privileged status of words in these models does not rest on claims about their epistemological priority, or their place within procedures of classification or methods of analysis. Instead, the status of words is due to their relative informativeness, as reflected in Robins’ claim above that ‘the word is a more stable and solid focus of grammatical relations than the component morpheme by itself ’ (Robins : ). A second dimension of informativeness is expressed in ‘the general insight that one inflection tends to predict another’ (Matthews : ). In this domain, the primary locus of form-based implication is again ‘words as wholes, arranged according to grammatical categories . . . distinguished by their endings’ (Matthews : ). The role of paradigms likewise follows from the fact that implications are most reliable within the essentially closed and uniform feature space of an inflectional paradigm.

.. Morphological organization

The networks of interdependencies within an inflectional system allow it to be factored into exemplary paradigms and sets of principal parts. This traditional factorization rests on a genuine insight about the structure of morphological systems. Paradigms conceived as consisting of fully independent forms cannot be factored in this way, because such a ‘paradigm’ cannot be identified by any subset of its forms. It is this essential interdependency, rather than numerical bounds or extrinsic constraints on paradigms, that accounts for the ‘paradigm economy’ effects discussed by Carstairs (), Carstairs-McCarthy (), and Ackerman and Malouf ().

For the most part, classical WP models do not go beyond this basic insight about the relational nature of morphological organization. Classical formulations of WP models also incorporate a range of extrinsic assumptions that reflect the practical uses to which these models have been put. While the prominence of words and paradigms is one obvious assumption, there are various other less productive formal assumptions as well. Inflectional systems are almost always factored into a discrete number of inflection classes, usually with provisions for macroclasses or subclasses in cases where there is considerable overlap. Principal part inventories are also normally ‘static’ in the sense of Finkel and Stump (), in that the same forms or sets of forms, for example the neuter nominative singular or first person singular present, are taken to represent non-exemplary items. The deduction of new forms by matching principal parts against exemplary paradigms is likewise attributed to a symbolic process of the kind represented by (four-part) proportional analogies. Each of these assumptions creates problems that are just as artefactual as those raised by form segmentation or meaning atomization. The difficulty of motivating the choice of principal parts (or ‘leading forms’) is the best known of these problems:

    One objection to the Priscianic model . . . was that the choice of leading form was inherently arbitrary: the theory creates a problem which it is then unable, or only partly able, to resolve. (Matthews : )
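The factorization into exemplary paradigms and principal parts can be sketched as a search for fully diagnostic cells. The inflection classes and endings below are invented for illustration, and the function is an illustrative construct rather than an implementation of any published procedure.

```python
# Invented inflection classes: a cell is fully diagnostic if its
# realization uniquely identifies the class it belongs to.
CLASSES = {
    "I":   {"nom.sg": "-0", "gen.sg": "-a", "part.sg": "-at"},
    "II":  {"nom.sg": "-0", "gen.sg": "-e", "part.sg": "-et"},
    "III": {"nom.sg": "-i", "gen.sg": "-e", "part.sg": "-it"},
}

def diagnostic_cells(classes):
    """Cells whose realizations are distinct across all classes."""
    cells = next(iter(classes.values())).keys()
    diagnostic = []
    for cell in cells:
        endings = [realizations[cell] for realizations in classes.values()]
        if len(set(endings)) == len(endings):   # no two classes share it
            diagnostic.append(cell)
    return diagnostic

# Only the partitive singular identifies the class on its own here, so
# it is the natural 'principal part' of this toy system.
print(diagnostic_cells(CLASSES))  # ['part.sg']
```

The sketch also makes the arbitrariness objection concrete: whether a usable leading form exists at all, and which cell plays that role, is an accident of the class system, not something the factorization itself guarantees.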

The other assumptions are equally problematic. Although pedagogical grammars often converge on a similar number of classes for a given language, this consensus tends to reflect practical considerations of utility. In the absence of criteria for class identification, the

number of classes associated with a language can vary enormously, as in the celebrated case of Estonian declensions. This generates pseudo-problems, and competing solutions to them. As with the problem of stem segmentation, the best response involves developing models in which the artefactuality of these problems becomes plain.

For pedagogical purposes, it is useful to draw the most informative cells of a paradigm to the attention of language learners. However, there is no reason to assume that a single form will always suffice to identify the inflectional pattern of an item. Conversely, there is no reason to ignore the partial information supplied by other forms. There is also no reason to assume that a language can be organized into some fixed set of classes. Instead, different sets of interdependent forms will, like segments, be defined by their predictive value. For pedagogical purposes, it will again be useful to bundle these sets of interdependent forms into larger collections that specify the shape of each form of an item, irrespective of how loosely the forms of different sets are connected to each other. The number of such larger collections will depend on the uses that they are meant to serve and, accordingly, on the level of detail at which they are defined and individuated. It is also important to acknowledge that there is no principled reason to assume that the analogical processes that deduce new forms of an item must be representable symbolically, rather than sub-symbolically, for example by a memory-based reasoning system such as TiMBL (Daelemans and Van den Bosch ).
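A symbolic four-part proportional analogy of the kind mentioned above can be sketched by factoring out a shared prefix. The function is a deliberately naive illustration: it handles suffixal alternations such as Latin amo : amas :: laudo : laudas, but not stem-internal patterns like Estonian gradation, which is one reason sub-symbolic alternatives are attractive.

```python
import os

def solve_proportion(a, b, c):
    """Solve a : b :: c : x by reapplying the a -> b alternation to c."""
    base = os.path.commonprefix([a, b])       # shared prefix of a and b
    a_rest, b_rest = a[len(base):], b[len(base):]
    if not c.endswith(a_rest):                # the alternation does not apply
        return None
    return c[: len(c) - len(a_rest)] + b_rest

# amo : amas :: laudo : X
print(solve_proportion("amo", "amas", "laudo"))  # laudas
```

`os.path.commonprefix` compares strings character by character, so the sketch only captures analogies statable as a suffix substitution over a shared prefix; proportions that turn on internal stem alternations simply return None, which is where richer analogical or memory-based models take over.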

.. Morphological information

Formulating WP models in information-theoretic terms provides strategies to avoid each of these problematic commitments. Each paradigm cell can be associated with a measure of variability or uncertainty that correlates with the number of ways that the cell can be realized (and the frequency of those alternatives). One cell is of diagnostic value in identifying the realization of another cell (or set of cells) if knowing the realization of the first cell reduces uncertainty about the realization of the second cell (or set). To formalize these intuitions, it is useful to regard paradigm cells as random variables that take realization ‘outcomes’ as their values. The uncertainty associated with the realization of a cell C can then be defined in terms of the entropy (Shannon ) of the cell, H(C).

()  H(C) = −∑_{x ∈ R_C} p(x) log2 p(x)

In this definition, R_C represents the set of realization outcomes for C, x represents outcomes in R_C, and p(x) represents the probability that C is realized by x. As in Shannon’s original definition, entropy is measured in bits.6 The entropy of a cell is determined both by the number of outcomes and by the uniformity of their distribution. The greatest uncertainty arises in a system with a large number of equiprobable outcomes. Uncertainty is reduced in a system that has fewer

6 Information-theoretic WP models assume standard definitions of entropy and related measures. For discussion of some of the formal issues that arise in using Shannon entropy to measure uncertainty (including the use of a logarithmic scale, the choice of a binary base, the applicable notion of probability, etc.) see Shannon (), Moscoso del Prado Martín, Kostić, and Baayen (), and Milin et al. ().


‘choices’, either few outcomes in total or else outcomes with highly skewed distributions. The cumulative uncertainty associated with a paradigm P depends in turn on the uncertainty of its cells C1, C2, …, Cn. On a traditional model, cells are generally assumed to be interdependent, so that the entropy of a paradigm, H(P), will correspond to the joint entropy of its cells, H(C1, C2, …, Cn). Given a measure of uncertainty, the diagnostic value of an individual cell can be defined in terms of uncertainty reduction. The relevant notion can be based on conditional entropy, H(C2|C1), which measures the amount of uncertainty that remains about C2 given knowledge of C1. The more information that C1 provides about C2, the lower H(C2|C1) will be. If C1 is fully diagnostic, then H(C2|C1) will approach 0. If C1 is completely uninformative about C2, then H(C2|C1) will preserve the uncertainty of H(C2). Yet the more uncertain C2 is to start with, the higher H(C2|C1) will also tend to be. Hence, in order to determine relative informativeness, the original entropy values as well as the conditional entropy values must be known. Both values are incorporated into the general measure in (), which defines morphological information M(C2|C1) as a value between 0 and 1 that is obtained by subtracting from 1 the proportion of the original uncertainty in C2.

()  M(C2|C1) = 1 − H(C2|C1) / H(C2)

The basic notion of uncertainty reduction expressed in () generalizes directly to collections of cells. Given that the uncertainty of a paradigm, H(P), can be defined in terms of joint entropy, the morphological information that a cell C expresses about a paradigm P is expressed in ().

()  M(P|C) = 1 − H(P|C) / H(P)
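The definitions above can be sketched numerically. In this sketch the joint distribution over (C1, C2) realization pairs and all helper names are illustrative assumptions, not part of the formal definitions:

```python
# A numerical sketch of conditional entropy and morphological information.
from collections import defaultdict
from math import log2

def H(dist):
    """Shannon entropy in bits of a probability distribution (dict or iterable)."""
    probs = dist.values() if hasattr(dist, "values") else dist
    return -sum(p * log2(p) for p in probs if p > 0)

def M(joint):
    """M(C2|C1) = 1 - H(C2|C1)/H(C2), with H(C2|C1) = H(C1, C2) - H(C1).

    `joint` maps (C1-outcome, C2-outcome) pairs to probabilities."""
    marg1, marg2 = defaultdict(float), defaultdict(float)
    for (x1, x2), p in joint.items():
        marg1[x1] += p
        marg2[x2] += p
    h_cond = H(joint) - H(marg1)     # conditional entropy H(C2|C1)
    return 1 - h_cond / H(marg2)

# C1 fully determines C2, so C1 eliminates all uncertainty about C2:
print(M({("x", "a"): 0.5, ("y", "b"): 0.5}))  # 1.0
```

With independent cells the same function returns 0, since knowing C1 leaves the uncertainty of C2 intact.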

For a concrete example, consider the realizations of the partitive cells in the partial Estonian paradigms shown in Table .. There are five realizations of the partitive singular: str+a, str+e, str+i, str+u, wk+u. If, for the sake of illustration, we assume that each is equally likely, the entropy of the partitive realization will be:

()  H(part) = −5 · (1/5) log2 (1/5) = 2.32 bits

In contrast, there are only four distinct realizations of the ‘short illative’ (str+a, str+e, str+i, str+u) in Table ., so the entropy of this cell is slightly lower:

()  H(ill2) = −(3 · (1/5) log2 (1/5) + (2/5) log2 (2/5)) = 1.92 bits

Again considering only the forms in Table ., there are five equally likely pairs of partitive and second illative realizations (both str+a; both str+e; both str+i; both str+u; one wk+u, and the other str+a), so the joint entropy H(part, ill2) is also 2.32 bits. So the conditional entropy H(part|ill2) of the partitive given the second illative is:

()  H(part|ill2) = H(part, ill2) − H(ill2) = 0.4 bits
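The arithmetic in this example can be checked mechanically. The pair list below simply transcribes the five equiprobable (partitive, short-illative) realization pairs described above:

```python
# Verifying the Estonian entropy figures from the five equiprobable pairs.
from collections import Counter
from math import log2

def H(probs):
    """Shannon entropy in bits of an iterable of probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

pairs = [("str+a", "str+a"), ("str+e", "str+e"), ("str+i", "str+i"),
         ("str+u", "str+u"), ("wk+u", "str+a")]
n = len(pairs)

h_joint = H([1 / n] * n)  # H(part, ill2): five equiprobable pairs
h_part = H([c / n for c in Counter(p for p, _ in pairs).values()])
h_ill2 = H([c / n for c in Counter(i for _, i in pairs).values()])

print(round(h_part, 2))            # 2.32 bits
print(round(h_ill2, 2))            # 1.92 bits
print(round(h_joint - h_ill2, 2))  # H(part | ill2) = 0.4 bits
```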


It should be intuitively clear at this point how the selection of principal parts is implicitly guided by entropy reduction. A fully diagnostic cell, such as the partitive singular in the Estonian paradigms in Table ., has a morphological information value approaching 1 because it all but eliminates uncertainty. Fully non-diagnostic cells (such as the dative, locative, and instrumental plurals in Russian) have a value approaching 0 because they preserve uncertainty. If diagnostic value were an all or nothing affair, then principal parts could be defined as cells that eliminated the uncertainty associated with the paradigms to which they belong. A cell C would be a principal part for a paradigm P whenever the value of M(P|C) approached 1. Of course nothing guarantees that every class system will contain such fully diagnostic forms. However, any system that can be decomposed into exemplary paradigms and principal parts will consist of partially informative forms which, in various combinations, eliminate the uncertainty associated with a paradigm. The diagnosticity of a set of cells cannot be determined by summing their individual morphological information values, since multiple forms may reduce the uncertainty of the same (or overlapping) cells in a paradigm. Instead, the diagnostic value of a set of forms is measured by their collective morphological information value. Since conditional entropy is also defined for sets of given forms, the diagnostic value of cells C1, C2, …, Cn can be determined by generalizing the single cell C in () to the set C1, C2, …, Cn. The availability of a range of different diagnostic combinations clearly enhances the robustness of class identification and form deduction, since speakers are not dependent on encountering one uniquely informative form of a paradigm.
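The way entropy reduction singles out diagnostic cells can be illustrated with a toy class system. The endings below are loosely Latin-flavoured but entirely hypothetical, and the uniform distribution over classes is an assumption for illustration:

```python
# M(P | C) for each cell of a hypothetical three-class system.
from math import log2

classes = {
    "I":   {"nom": "-a",  "gen": "-ae", "dat": "-ae"},
    "II":  {"nom": "-us", "gen": "-i",  "dat": "-o"},
    "III": {"nom": "-us", "gen": "-is", "dat": "-i"},
}

def H_class_given(cell):
    """Residual uncertainty H(P | C): expected entropy over the classes
    that remain compatible with an observed realization of `cell`."""
    groups = {}
    for cls, paradigm in classes.items():
        groups.setdefault(paradigm[cell], []).append(cls)
    n = len(classes)
    # Each realization narrows the field to an equiprobable group of classes.
    return sum(len(g) / n * log2(len(g)) for g in groups.values())

h_p = log2(len(classes))  # H(P) for equiprobable classes
for cell in ("nom", "gen", "dat"):
    print(cell, 1 - H_class_given(cell) / h_p)  # M(P | C); 1.0 = fully diagnostic
```

Here the genitive and dative are fully diagnostic (M = 1), while the nominative, whose -us realization is shared by two classes, only partially reduces uncertainty.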
A classification based on morphological information thus offers a principled solution to the traditional problem of identifying principal parts (or ‘leading forms’). The idea that diagnosticity correlates with variability across inflection classes can be expressed more precisely in terms of the uncertainty reduction measured by the morphological information of a cell or set of cells. From the present perspective, one can see that the choice of leading forms is to a large degree arbitrary. A pedagogical or reference grammar might use seemingly arbitrary criteria to select a particular cell or cell set. A description might select the smallest set of cells, the set with the most highly frequent members, or, more capriciously, the cell or cell set with the morphologically simplest members, etc. Since any fully diagnostic set of cells will do, all are equally suitable and the arbitrariness involved in selecting one is harmless.7 Moreover, given that cells are informative about the realization of other individual cells or sets of cells, there is no need to mediate the deduction of new forms via a reified class structure. Instead of being part of the linguistic system, classes can be regarded as being imposed within a description of the system. Much the same is true of proportional analogies, which merely provide a symbolic representation of the deductions sanctioned by the morphological information that cells express about other cells. In this connection, the present appeal to information theory develops the advantages identified by Hockett (: ) for WP approaches:

7 If the goal is class identification rather than form deduction, there can also be a trade-off between the number of cells required to identify class in a system and whether one uses the same cells to identify class, as Finkel and Stump (, ) and Stump and Finkel () show. Recent information-theoretic studies based on large databases with frequency information have also established the relevance of joint entropy measures (Bonami and Beniamine ; Bonami and Luís ).

 . ,  ,    One of the most dangerous traps in any of the more complex branches of science (by no means absent even in the simplest branch, physics) is that of confusing one’s machinery of analysis with one’s object of analysis. One version of this is pandemic in linguistic theory today: almost all theorists take morphophonemes (by one or another name) to be things  a language rather than merely part of our equipment for the analysis and description  the language. A correct principal-parts-and-paradigms statement and a correct morphophoneme-and-rule statement subsume the same actual facts of alternation, the former more directly, the latter more succinctly.

Although formalized in information-theoretic terms, the notion of morphological information invoked here captures the traditional intuition that Matthews (: ) expresses as ‘the general insight . . . that one inflection tends to predict another’. This notion is largely absent from morphological traditions that represent only grammatical information, such as case, number, and gender. Part of the problem lies in the fact that predictive value is not a property of a form in the same way that, say, case is and hence cannot readily be expressed as a ‘feature’, even if one accepts the use of diacritic features for expressing notions like class affiliation.

. Conclusion

The usefulness of information theory for formalizing traditional WP models confirms the linguistic relevance of information theory, as initially suggested by Descriptivists such as Hockett (, ) and Harris (, ).8 There are also a number of more general consequences of recognizing implicational relations as the cornerstone of WP models. This immediately avoids the need to impose a uniform analysis on all languages at the level of units. From the outset of the rejuvenated WP tradition, it was clear that many languages do not conform to the ‘agglutinative ideal’ of a morpheme-based model and that at least some depart quite radically from that ideal. Following a classic demonstration of the non-biunique patterning of ‘form’ and ‘meaning’ in Latin verbs, Matthews () observes:

One motive for the post-Bloomfieldian model consisted, that is to say, in a genuinely factual assertion about language: namely, that there is some sort of matching between minimal ‘sames’ of ‘form’ (morphs) and ‘meaning’ (morphemes). Qua factual assertion this has subsequently proved false: for certain languages, such as Latin, the correspondence which was envisaged apparently does not exist. . . . One is bound to suspect, in the light of such a conclusion, that the model is in some sense wrong. (Matthews : )

An implicational model also provides a framework for incorporating morphemes alongside the patterns that Aronoff () terms ‘morphomic’. A morphome is simply a unit of predictive value. The patterns that Matthews () terms ‘Priscianic’ or ‘parasitic’ sanction

8 Harris’ perspective has had a more direct influence on works such as Goldsmith (a) and Pereira () and on statistical approaches to segmentation.

predictions about the shape of one form based on the shape of another (as do the morphomic patterns discussed in Maiden ).

For any Verb, however irregular it may be in other respects, the Present Infinitive always predicts the Imperfect Subjunctive. For the Verb ‘to flower’, florere → florerem; for the irregular Verb ‘to be’, esse → essem, and so forth without exception. (Matthews : )

From an implicational perspective, morphemes can be regarded as special cases of morphomes that encapsulate a biunique implication between properties and forms.

Finally, an implicational interpretation of classical WP models offers a novel perspective on morphological variation. One of the most striking things about morphology is how much it appears to vary across languages. Languages may have many, few, or even no inflection classes, paradigms may have many cells or few, the forms that realize individual cells may vary widely or be relatively invariant, and so on. Languages may even lack morphology altogether, which has encouraged a view which, in its most provocative form, holds that morphology is somehow ‘unnatural’ or even a ‘pathology of language’ (Aronoff : ). However, much of the problematic variation involves aspects of morphological systems that are largely irrelevant to the acquisition or use of language by human speakers. A speaker does not need to determine the exact number of inflection classes in a language, provided that the patterns within inflectional paradigms provide a reasonably secure analogical base for deducing new forms. The fact that certain formatives may be unambiguous while others have a wider range of functions and meanings also poses no great difficulties if these occur in larger wordforms or constructions with stable grammatical properties. Hence, there is no reason why the pressures imposed by language acquisition and use should mould languages in ways that produce clear answers to questions concerning the number of inflection classes in a language, the meaning of specific formatives, or which determine an unambiguous segmentation of forms into stems and exponents and associate discrete meanings with these parts.
From a contemporary WP perspective, these questions, and many of the other kinds of questions that tend to vex theoretical studies, are either of mainly pedagogical interest (numbers of classes or sizes of principal part inventories) or else are artefacts of methods of analysis or schemes of classification (meanings of formatives or ‘correct’ segmentations). Consequently, these issues can be seen to fall within what one might call ‘theoretical lexicography’ rather than within the study of linguistic morphology per se. Since languages do not develop in response to the demands imposed by lexicography, there is no reason to expect that these properties will be broadly similar across languages. In contrast, speakers of all languages do face what Ackerman, Blevins, and Malouf () call the ‘Paradigm Cell Filling Problem’, the task of producing, deducing, or interpreting new forms of an item, based on exposure to other forms of the item. Hence, a traditional model would lead one to expect that the difficulty of this task would fall within a fairly circumscribed range. Information-theoretic notions provide the tools to measure the difficulty of predicting paradigms from subsets of their forms, and a research question that is being actively pursued in the current literature explores what Ackerman and Malouf () term ‘the Low Conditional Entropy Conjecture’, namely that the difficulty of this task is in fact relatively low and largely


independent of the properties that give rise to apparently extreme morphological variation. In effect, this different perspective on the organization and learnability of morphological systems has begun to explore what Hockett (: ) anticipated would be a benefit of developing WP approaches, namely, that there would be a net gain in realism, for the student of the language would now be required to produce new forms in exactly the way the native user of the language produces or recognizes them—by analogy. (Hockett : )
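The kind of measurement behind the Low Conditional Entropy Conjecture can be sketched as a matrix of pairwise conditional entropies over a class system. The three-class mini-system below is entirely hypothetical, and treating each class as one equiprobable outcome is a simplifying assumption:

```python
# Average pairwise conditional entropy H(C_i | C_j) over a toy class system.
from itertools import permutations
from math import log2

paradigms = [
    {"sg": "-0", "pl": "-er", "gen": "-s"},
    {"sg": "-0", "pl": "-e",  "gen": "-s"},
    {"sg": "-e", "pl": "-en", "gen": "-n"},
]

def H_cond(target, given):
    """H(target | given), with each class an equiprobable outcome."""
    n = len(paradigms)
    groups = {}
    for p in paradigms:
        groups.setdefault(p[given], []).append(p[target])
    total = 0.0
    for outcomes in groups.values():
        k = len(outcomes)
        for form in set(outcomes):
            q = outcomes.count(form) / k
            total += (k / n) * -q * log2(q)   # weight by p(given realization)
    return total

cells = ["sg", "pl", "gen"]
avg = sum(H_cond(t, g) for t, g in permutations(cells, 2)) / 6
print(round(avg, 3))  # mean pairwise conditional entropy in bits
```

Even though no single cell here is fully diagnostic, the average residual uncertainty is a fraction of a bit, which is the kind of low value the conjecture leads one to expect.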

In line with this, the revival of old ideas and insights associated with earlier exemplification-based WP approaches to morphology is animating and altering modern perspectives on ‘realism’ with respect to morphological systems and their organization. This arises from the use of quantitative techniques, including information-theoretic measures as well as Bayesian and discriminative learning models (Seyfarth, Ackerman, and Malouf ; Chater et al. ; Sims ; Blevins, Milin, and Ramscar ; Ramscar et al. ) to investigate phenomena and patterns that were less well delineated and understood without these tools. The adoption of this perspective and these methodologies enfolds the study of linguistic morphology within the fertile research programme in the developmental sciences that explores and explains phenomena in terms of the dynamics of interdependencies within complex adaptive systems.


  ......................................................................................................................

   ......................................................................................................................

 

. Background

The central architectural feature of Paradigm Function Morphology (PFM) comes not from morphology but from formal semantics. In Fregean approaches to semantics such as Montague grammar, the definition of a complex linguistic expression’s semantics involves the interaction of a variety of functions. The English noun leaf may serve as the argument of a function F whose application to leaf gives the intension ^leaf′; this is itself a function that applies to the pairing ⟨w, t⟩ of a possible world w and time t to yield the extension leaf′, yet another function that applies to an individual i to yield the value ‘true’ just in case i is a leaf in w at t. Consider now the English lexeme LEAF. This may be paired in syntax with the morphosyntactic property set {noun plural}; the pairing ⟨LEAF, {noun plural}⟩ is also a schematic representation of one of the cells in LEAF’s inflectional paradigm. Such cells are the domain of the language’s paradigm function PF, whose application to a given cell yields the inflectional realization of that cell: PF applies to ⟨LEAF, {noun plural}⟩ to produce the realization ⟨leaves, {noun plural}⟩. The evaluation of PF(⟨LEAF, {noun plural}⟩) in turn depends on two functions, one associating ⟨LEAF, {noun plural}⟩ with the stem /liv/, the other realizing the property ‘plural’ through the suffixation of /z/ to that stem. Just as the Fregean definition of a language’s semantics involves identifying the properties and interaction of a variety of functions, so, too, does the definition of a language’s inflectional morphology in PFM. PFM affords a precise but flexible way of defining complex inflectional systems. It was first proposed by Stump (), and has since been applied in the analysis of a wide range of inflectional phenomena.
Key publications include Stump , , a, b, , , , , a, b, c, b, , , , , , a, b, , a b, c; Gazdar ; Spencer b, b, c, a, b; Ackerman and Stump ; Luís and Spencer a, b; Sadler and Nordlinger ; Stewart and Stump ; Bonami and Boyé , ; Léonard and Kihm ; Ackerman, Stump, and Webelhuth ; Spencer and Stump ; Bonami ; Bonami and Samvelian .


Since , a number of variants of PFM have been proposed and developed (see especially Stump , a, and the above-mentioned work by Bonami, Bonami and Boyé, and Spencer); these are nevertheless united by the leading idea that the definition of a language’s inflectional morphology is the definition of a complex system of functions relating lexemes, stems, paradigm cells, and the word forms that realize them. In all of its versions, PFM is an inferential-realizational theory of morphology. In the typology introduced by Stump (), theories of inflectional morphology can be classified as either lexical or inferential and, independently, as either incremental or realizational. In a lexical theory, a language’s individual inflectional morphemes are listed in its lexicon, from which they are inserted into the hierarchical structures generated by its syntax;1 by contrast, an inferential theory dispenses with the lexical listing of inflectional morphemes, assuming instead that complex inflected forms are defined independently of the syntax, by a system of rules that induce complex words and stems from simpler stems. In an incremental theory, a word form acquires its morphosyntactic properties cumulatively, through a summing up of the properties associated with its individual inflectional exponents; by contrast, a realizational theory assumes that a word form’s morphosyntactic property set is specified in the paradigm cell that it realizes and that this set is what induces the appropriate inflectional markings for that word form. These cross-cutting typological dimensions entail four logically possible kinds of theories—lexical-incremental, inferentialincremental, lexical-realizational, and inferential-realizational; theories of all four kinds have been proposed in the literature. 
Stump () argues that lexical theories and incremental theories fail to account for various inflectional phenomena; these include (i) the underdetermination of a word form’s morphosyntactic properties by its individual inflectional exponents, the widespread incidence of (ii) nonconcatenative inflection and (iii) extended (or multiple) exponence, and (iv) the lack of compelling evidence that an inflected word form’s internal structure is anything more than its phonological/prosodic structure. This last point, compellingly asserted by Janda () and Anderson (), deserves some comment. In syntax, there are several different kinds of evidence that phrases and sentences possess hierarchical grammatical structure. For example, only constituents move, and movement is always to a hierarchically definable position; most kinds of deletion or ellipsis target constituents; the binding relation is defined with reference to hierarchical structure; the distribution of parentheticals is arguably restricted by position relative to a sentence’s hierarchical structure (Chomsky : ); and so on. Compounding aside (Anderson : ch. ), none of these kinds of evidence has a direct analogue in the domain of word structure. It is sometimes argued that the parts of a word do exhibit semantic scope relations, but this is not a compelling argument, for two reasons. First, the fact that the interpretation of a word like remislabel is one in which the iteration expressed by re- has scope over the meaning of mislabel does not have to be seen as evidence that remislabel has the hierarchical structure [ re [ mis [ label ]]]; instead, it can be seen as evidence that because the lexeme  derives from the lexeme , the meaning of  is a function of the meaning of —specifically, it is the application of an iteration operator to the meaning of . Nothing about this latter

1 Note that a lexical theory is not at all the same thing as a lexicalist theory; see Montermini (Chapter  this volume) and O’Neill ().

explanation requires remislabel to have any internal hierarchical structure other than its phonological/prosodic structure. Second, the notion that a word form is morphotactically isomorphic to its semantic structure is directly disconfirmed by a vast array of empirical evidence. On one hand, the morphology of the Sanskrit passive form dṛś‑ya-nte [see-.-] ‘they are seen’ seems to be morphotactically isomorphic to its semantics, since the subject-agreement suffix is positioned peripherally to (i.e. has ‘wider scope’ than) the passive suffix and expresses concord with the verb’s logical object. Yet, the Latin translation of dṛśyante has exactly the opposite morphotactic arrangement, contrary to the claimed isomorphism of morphology and semantics: vide-nt-ur [see--] ‘they are seen’. (See Chapter  of this volume for further discussion of the lack of isomorphism between a word form’s morphology and its semantics.) As an inferential-realizational theory (i.e. being neither lexical nor incremental), PFM is compatible with the underdetermination of a word form’s morphosyntactic properties by its individual inflectional exponents, with the phenomena of nonconcatenative inflection and extended exponence, and with the fact that the representation of a noncompound word form need not possess hierarchical grammatical structure.

. Basics

The most complete articulations of the principles of PFM are those of Stump () and (a); I will refer to the  version of PFM as PFM1 and to the  version as PFM2. Here, I shall focus on the more current version, but a brief description of PFM1 will provide a useful starting point. In this section, I give a compact presentation of the formal properties of PFM1 and PFM2 and discuss some of the phenomena that favor the elaboration of PFM1 as PFM2. In §., I present an illustrative example of the workings of PFM2. I then discuss the implications of PFM2 for the analysis of derivational morphology (§.) and for understanding the interfaces of morphology with other grammatical components (§.), concluding with some projections for future research (§.).

.. PFM1

In PFM1, the paradigm of a lexeme L is a set of cells, where each cell is the pairing ⟨R, σ⟩ of L’s root R with a complete and well-formed set σ of morphosyntactic properties with which L may be associated. The definition of a language’s inflectional morphology is equated with the definition of its paradigm function PF, a function that applies to each paradigm cell ⟨R, σ⟩ to give its realization; in English, for example, PF(⟨walk, {finite past}⟩) = ⟨walked, {finite past}⟩. In cases of outright suppletion, the value of PF for some cell must be lexically listed, as with PF(⟨go, {finite past}⟩) = ⟨went, {finite past}⟩. Generally, though, the value of PF is determined by specific realization rules. Realization rules are of two kinds: rules of exponence and rules of referral (Zwicky a). A rule of exponence specifies the morphological marking by which a given property or set of properties is realized. In English, for example, rule () specifies the suffix ‑s (/z/) as the


exponent of the property set {3 sg prs ind}. This rule is applicable to a ⟨root, property set⟩ pairing ⟨R, σ⟩ just in case R is a verb root and {3 sg prs ind} is a subset of σ; in that case, the result of applying () to ⟨R, σ⟩ is the pairing ⟨Rs, σ⟩.

()  X[V], {3 sg prs ind} → Xs

A rule of referral stipulates that a particular property set is expressed by the morphology properly associated with a distinct property set. Thus, an English verb’s past participle may be distinct from its finite past-tense form (as in the case of proved/proven, sang/sung), but often the past participle takes on the form of its finite past-tense counterpart (as in the case of moved/moved, clung/clung). This relation is expressed by rule ().2 This rule is applicable to the pairing ⟨Z, σ⟩ just in case Z is a verb and {pst ptcp} is a subset of σ; in that case, the result of applying () to ⟨Z, σ⟩ is the pairing ⟨Z′, σ⟩, where ⟨Z′, {fin pst}⟩ is the realization of ⟨Z, {fin pst}⟩. Note that the application of () to ⟨Z, σ⟩ to produce ⟨Z′, σ⟩ does not change the property set σ.

()  X[V], {pst ptcp} → {fin pst}
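The effect of a rule of referral can be sketched as follows. The finite-past table and the function name are illustrative assumptions, not Stump's notation; the point is only that the past participle borrows the finite past's morphology while keeping its own property set:

```python
# A sketch of the rule of referral X[V], {pst ptcp} -> {fin pst}.
# FIN_PST stands in for independently defined finite-past realizations.
FIN_PST = {"move": "moved", "cling": "clung", "prove": "proved"}

def refer_pst_ptcp(stem, sigma):
    """Realize the past participle via the finite past's morphology;
    the property set sigma itself is left unchanged."""
    if "pst ptcp" in sigma:
        return FIN_PST[stem], sigma   # form referred, properties preserved
    return stem, sigma

print(refer_pst_ptcp("cling", {"pst ptcp"})[0])  # clung
```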

Realization rules are grouped into blocks such that (i) rules belonging to the same block are mutually exclusive in their application and (ii) the order in which rules in different blocks apply is determined by the ordering relation among their respective blocks (Anderson ). If two rules in the same block compete to apply to some pairing ⟨Z, σ⟩, it is the narrower rule that prevails. Suppose, for example, that the English rule introducing the past participial suffix ‑en is formulated as in (), where ‘class n’ represents the class of verbs to which this rule applies; in that case, () and () compete to apply to the pairing ⟨eat, {pst ptcp}⟩, and because the domain of () (= class n verbs) is narrower than the domain of () (= all verbs), () prevails, defining the realization ⟨eaten, {pst ptcp}⟩.

()  X[V, class n], {pst ptcp} → Xen

A language’s blocks of realization rules participate in defining the default value of that language’s paradigm function. For example, a hypothetical language’s paradigm function PF might have the value defined in (), where the notation [Block n : ⟨Z, σ⟩] represents the result of applying the narrowest applicable rule in Block n to ⟨Z, σ⟩.3

()  PF(⟨R, σ⟩) = [Block 3 : [Block 2 : [Block 1 : ⟨R, σ⟩]]]

The definition of a language’s paradigm function PF presupposes that each lexeme L in the language has a paradigm—a set of cells each of which is the pairing of L’s root with a complete and well-formed set of morphosyntactic properties. Paradigm cells constitute the domain of PF (the arguments to which it applies), and their inflectional realizations

2 The precise capabilities that should be attributed to rules of referral have been a matter of debate; but as will be seen in §.., rules of referral may be dispensed with in PFM2.
3 Expression () should be seen as a clause in the definition of PF. This may be a default clause in the sense that it may be overridden by other, more specific, clauses in the definition.

constitute its range (the values that it returns for those arguments): for each paradigm cell ⟨R, σ⟩, the value of PF(⟨R, σ⟩) is either lexically listed or is a default value determined by the definition of PF; in either case, PF(⟨R, σ⟩) is the realization of ⟨R, σ⟩.4 The phenomenon of underdetermination is not problematic for PFM1; this phenomenon simply reflects the possibility that in the realization of a cell ⟨R, σ⟩, one or more of the properties in σ may fail to receive any expression by any realization rule. Nonconcatenative morphology is not a problem, since the operation specified in the definition of a realization rule may or may not be concatenative. Extended exponence simply represents the possibility that in the realization of a cell ⟨R, σ⟩, one or more of the properties in σ may receive expression by realization rules in more than one block. Finally, the form defined by a realization rule is in general simply a phonological representation, lacking any hierarchical structure that is purely grammatical in motivation. There are, however, good reasons for assuming a richer theory of inflectional morphology; these have motivated the newer version of PFM.
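The interplay of lexically listed values, rule blocks, and narrowest-rule competition described above can be sketched in miniature. This is a Python toy, not Stump's formalism: it uses one rule block, plain string suffixation, and an invented 'class n' membership set:

```python
# A toy paradigm function: suppletion overrides the default clause, and
# within a block the narrowest applicable rule pre-empts the rest.
CLASS_N = {"eat", "fall"}            # hypothetical 'class n' strong verbs

def rule_class_n_ptcp(root, sigma):  # X[V, class n], {pst ptcp} -> Xen
    return root + "en" if root in CLASS_N and "pst ptcp" in sigma else None

def rule_default_pst(root, sigma):   # X[V], {fin pst} -> Xed
    return root + "ed" if "fin pst" in sigma else None

BLOCK_1 = [rule_class_n_ptcp, rule_default_pst]   # narrowest rule listed first
SUPPLETION = {("go", frozenset({"fin pst"})): "went"}

def PF(root, sigma):
    """Default clause of the toy paradigm function over a single rule block."""
    sigma = frozenset(sigma)
    if (root, sigma) in SUPPLETION:               # lexically listed value wins
        return SUPPLETION[(root, sigma)], sigma
    form = root
    for rule in BLOCK_1:
        out = rule(form, sigma)
        if out is not None:                       # narrowest applicable rule...
            form = out
            break                                 # ...pre-empts the rest of the block
    return form, sigma

print(PF("walk", {"fin pst"})[0])  # walked
print(PF("eat", {"pst ptcp"})[0])  # eaten
print(PF("go", {"fin pst"})[0])    # went
```

Note that, in keeping with a realizational theory, the property set goes in as the argument and comes out unchanged; only the form is built up by the rules.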

.. PFM2 PFM assumes the Cell Interface Hypothesis (). PFM is an extension of PFM that is meant to capture generalization (), which entails a weaker version of the Cell Interface Hypothesis. The innovation by which PFM captures generalization () is the set of assumptions in (). ()

Cell Interface Hypothesis The morphosyntactic property set that determines a word form w’s syntax and semantics is identical to the property set that determines w’s inflectional form.

()

While the Cell Interface Hypothesis holds true in the canonical case, there are many kinds of deviations from this canonical pattern.

()

Fundamental assumptions of PFM2
a. For each lexeme L in a language ℒ, there is an inventory ΣL of morphosyntactic property sets such that if σ ∈ ΣL, L may be associated with a syntactic node bearing σ, and that association licenses the insertion of one of L's word forms into that node.5
Example: Because the property set {comparative} ∈ ΣTALL, the adjectival lexeme TALL may be associated with an adjective node bearing {comparative}; this association licenses the insertion of a word form realizing TALL into that node.

4 The nature of paradigm functions is complicated by the phenomenon of overabundance (Thornton )—the fact that distinct word forms may compete to realize the same paradigm cell, as in the case of dreamed/dreamt. This phenomenon suggests that a paradigm function should in fact be seen as a paradigm relation such that for a given paradigm cell ⟨R, σ⟩, PF(⟨R, σ⟩) may have more than one value; alternatively, a paradigm function might be seen as a function whose application to a cell has a set of realizations as its value. See Bonami and Boyé  for discussion.
5 By default, the property sets constituting ΣL are the same for all lexemes belonging to the same category as L.




b. For each stem Z in a language ℒ, there is an inventory TZ of property sets such that if τ ∈ TZ, the inflectional morphology of ℒ defines a realization for the pairing ⟨Z, τ⟩.
Example: Because {comparative} ∈ Ttall, the inflectional morphology of English defines a realization for the pairing ⟨tall, {comparative}⟩.
c. Canonically, a lexeme L in a language ℒ has a single stem Z and ΣL = TZ; in such cases, the association of L with a syntactic node bearing the property set σ ∈ ΣL licenses the insertion of w into that node iff ⟨w, σ⟩ is the realization that the inflectional morphology of ℒ defines for the pairing ⟨Z, σ⟩.
Example: The lexeme TALL is canonical because it has a single stem tall and ΣTALL = Ttall; taller is therefore insertable into a node bearing the property set {comparative} because ⟨taller, {comparative}⟩ is the realization that the inflectional morphology of English defines for ⟨tall, {comparative}⟩.
d. There are nevertheless many ways in which ΣL and TZ may differ from one another, so that the inflectional morphology of ℒ must not simply define the realization of the pairing ⟨Z, τ⟩ for each stem Z in ℒ and each τ ∈ TZ; it must also define, for each word form w, how the lexeme L and property set σ ∈ ΣL that determine w's lexical insertion in the syntax of ℒ relate to the stem Z and property set τ ∈ TZ that determine w's inflectional form in the morphology of ℒ—even in those cases in which σ ≠ τ.

Given these assumptions, PFM2 defines three kinds of paradigms: the content paradigm of a lexeme L is the set of pairings {⟨L, σ⟩ : σ ∈ ΣL}; the form paradigm of a stem Z is the set of pairings {⟨Z, τ⟩ : τ ∈ TZ}; and the realized paradigm of a stem Z is the set of ⟨word form, property set⟩ pairings that realize the cells in Z's form paradigm. Each member of {⟨L, σ⟩ : σ ∈ ΣL} is a content cell; each member of {⟨Z, τ⟩ : τ ∈ TZ}, a form cell; and each member of a stem's realized paradigm is a realized cell. Thus, ⟨TALL, {comparative}⟩ is a content cell in the content paradigm of TALL; ⟨tall, {comparative}⟩ is a form cell in the form paradigm of tall; and ⟨taller, {comparative}⟩ is a realized cell in the realized paradigm of tall. In general, a content cell ⟨L, σ⟩ corresponds to a single form cell ⟨Z, τ⟩ such that the realization of ⟨L, σ⟩ is that of ⟨Z, τ⟩: PF(⟨L, σ⟩) = PF(⟨Z, τ⟩). In that case, ⟨Z, τ⟩ is the form correspondent of ⟨L, σ⟩. The relationship between content cells and their form correspondents is represented as a function Corr such that for any content cell ⟨L, σ⟩, the value of Corr(⟨L, σ⟩) is the form correspondent of ⟨L, σ⟩; thus, Corr(⟨TALL, {comparative}⟩) = ⟨tall, {comparative}⟩. For a given content cell ⟨L, σ⟩, the value of Corr(⟨L, σ⟩) depends on two additional functions, the function Stem and a property mapping pm, as in ().

()

For any content cell ⟨L, σ⟩, there is a property mapping pm such that Corr(⟨L, σ⟩) = ⟨Stem(⟨L, σ⟩), pm(σ)⟩.

In the canonical case, a lexeme L has a single stem Z such that for any cell ⟨L, σ⟩ in L's content paradigm, Stem(⟨L, σ⟩) = Z and pm(σ) = σ. But a lexeme L's stem may also vary: given two distinct cells ⟨L, σ₁⟩, ⟨L, σ₂⟩ in the content paradigm of L, it may be that Stem(⟨L, σ₁⟩) ≠ Stem(⟨L, σ₂⟩). In Latin, for example, the verb FERRE 'carry' has a stem fer- used in imperfective forms but a different stem tul- used in perfective forms. Likewise,





the property mapping pm appropriate for the evaluation of Corr(⟨L, σ⟩) may be such that pm(σ) ≠ σ. For example, pm(σ) is a subset of σ in the definition of one kind of syncretism (as in the Sanskrit example in §...); pm(σ) is a superset of σ if pm introduces a morphomic property that is relevant for a word form's inflectional realization but not for its syntax or semantics (as in the English example in §...); and in still other cases, there is only a partial overlap between the properties belonging to pm(σ) and those belonging to σ (as, for example, in the instance of Old Norse deponency in §...). The realization of form cells in PFM2 involves blocks of rules of exponence exactly like those of PFM. In general, the realization of a form cell ⟨Z, τ⟩ is the value of PF(⟨Z, τ⟩), where the paradigm function PF is defined in terms of rule blocks, exactly as in PFM. When PF applies to a content cell ⟨L, σ⟩, the resulting value is always as in (); that is, a content cell's realization is that of its form correspondent.

()

PF(⟨L, σ⟩) = PF(Corr(⟨L, σ⟩))

The work done by rules of referral in PFM is taken over by other functions in PFM2; for example, the rule of referral in () might be replaced with clause () in the definition of the default property-mapping function for English verbs. This clause interacts with the English Corr to cause a lexeme's past participial content cell to share the form correspondent of its past-tense cell in the default case.

() pm({pst ptcp}) = {fin pst}

The architecture of PFM2 may therefore be schematically represented as in Figure .. As Figure . shows, the definition of a language's inflectional morphology in PFM2 depends on the definition of several functions:
• The paradigm function PF applies to a content cell or form cell to give the corresponding realized cell. Where c is a content cell whose form correspondent is the form cell f, PF(c) = PF(f). Unless the value of PF(f) for some form cell f is stipulated lexically, the value of PF(f) is defined in terms of blocks of realization rules.

Where lexeme L occupies a syntactic node specified with the property set σ, it is realized as X′:

⟨L, σ⟩ is a content cell
   | morphology proper
PF(⟨L, σ⟩) = PF(Corr(⟨L, σ⟩)) = PF(⟨X, pm(σ)⟩) = [ II : [ I : ⟨X, τ⟩ ] ] = ⟨X′, τ⟩
   where X = Stem(⟨L, σ⟩); ⟨X, τ⟩ is a form cell, τ = pm(σ); and ⟨X′, τ⟩ is the corresponding realized cell.

Figure .. The interface of content paradigms and form paradigms in PFM2





• The form-correspondence function Corr applies to a content cell to give its form correspondent; the definition of Corr depends on the definitions of Stem and the appropriate property mapping.
• The Stem function applies to a content cell to yield the stem serving in the realization of that cell.
• A property mapping pm applies to the morphosyntactic property set of a content cell c to yield the property set of c's form correspondent.
• A realization rule applies to a ⟨form, property set⟩ pairing to yield a ⟨form, property set⟩ pairing. A realization rule may compete with other members of the rule block to which it belongs; such competition is resolved in favor of the rule with the narrowest domain of application.
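The division of labor among these functions can be sketched in code. The lexeme PLAY, its stem, and the suffixal "-ed" rule below are hypothetical stand-ins for the English weak-verb pattern; the property-mapping clause is the past-participle mapping pm({pst ptcp}) = {fin pst} given above.

```python
# Sketch of the PFM2 pipeline: PF applied to a content cell routes through
# Corr, which is built from Stem and the property mapping pm. The property
# mapping replaces the older rule of referral for past participles.

STEMS = {"PLAY": "play"}  # toy lexicon (assumed, not the chapter's data)

def pm(sigma):
    # A past-participle content cell shares the finite past's form correspondent.
    if sigma == frozenset({"pst", "ptcp"}):
        return frozenset({"fin", "pst"})
    return sigma

def Stem(content_cell):
    lexeme, sigma = content_cell
    return STEMS[lexeme]

def Corr(content_cell):
    # Corr(<L, sigma>) = <Stem(<L, sigma>), pm(sigma)>
    lexeme, sigma = content_cell
    return (Stem(content_cell), pm(sigma))

def realize(form_cell):
    # Stand-in for the rule blocks: past form cells get suffixal -ed.
    stem, tau = form_cell
    return stem + "ed" if "pst" in tau else stem

def PF(content_cell):
    # A content cell's realization is that of its form correspondent.
    return realize(Corr(content_cell))

print(PF(("PLAY", frozenset({"pst", "ptcp"}))))  # -> played
```

Because pm sends {pst ptcp} to {fin pst}, the participial and finite-past content cells share a single form correspondent and hence a single realization, with no rule of referral needed.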

.. Deviations from canonical inflection in PFM2

Canonically, the word forms realizing a lexeme L are a simple reflection of their content: L has a single stem Z such that for any cell ⟨L, σ⟩ in L's content paradigm, Stem(⟨L, σ⟩) = Z and pm(σ) = σ; as a consequence, the morphosyntactic distinctions that determine the syntax and semantics of L's word forms are exactly those that determine the inflectional realization of those word forms, in accordance with the Cell Interface Hypothesis (). But much of the definition of a language's inflectional morphology is the definition of deviations from this canonical pattern. PFM2 affords a precise understanding of such deviations. Consider four such deviations: defectiveness, syncretism, inflection classes, and deponency.

... Defectiveness

PFM2 predicts that defectiveness can be of two types. On one hand, a lexeme L's inflection might be defective because its content paradigm has fewer cells than are typically found in content paradigms of lexemes belonging to the same category; that is, L's content paradigm might make fewer morphosyntactic distinctions than expected. On the other hand, a lexeme L's inflection might be defective because not all of its content cells have form correspondents. The French verb FALLOIR 'be necessary' is a lexeme of the first type: among its finite forms, only third person singular forms appear; but this is simply because FALLOIR takes an expletive subject rather than a personal subject. Thus, the gaps in the content paradigm of FALLOIR are ultimately motivated by the fact that its expletive subject never varies in person or number. By contrast, the verb TRAIRE 'milk' lacks imperfect subjunctive and preterite forms, and in this case, there is neither syntactic nor semantic motivation for the gaps in its paradigm. There is, however, a morphological motivation. As Boyé () shows, French conjugational paradigms are built upon a systematic inventory of stems, one of which is employed in both the imperfect subjunctive and the preterite. Thus, the defectiveness of TRAIRE follows from the assumption that it lacks this particular stem in its inventory. That is, the Stem function fails to deliver a value when it applies to cells in the content paradigm of TRAIRE that are either imperfect subjunctive or preterite. Thus, defectiveness here is a property of the form paradigm of TRAIRE rather than of its content paradigm.
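The second, morphological type of defectiveness can be sketched by making Stem a partial function. The stem inventories below are a hypothetical fragment, and the stem-slot label "ipfv.sbjv/pret" is an invented name for the shared stem slot that the text attributes to Boyé's analysis.

```python
# Hypothetical sketch: TRAIRE lacks the stem shared by the imperfect
# subjunctive and the preterite, so Stem is undefined for those cells,
# and realization yields a gap rather than a word form.

STEMS = {
    "TRAIRE": {"prs": "trai"},  # no imperfect-subjunctive/preterite stem listed
    "CHANTER": {"prs": "chant", "ipfv.sbjv/pret": "chanta"},
}

def Stem(lexeme, props):
    slot = "ipfv.sbjv/pret" if props & {"ipfv.sbjv", "pret"} else "prs"
    return STEMS[lexeme].get(slot)  # None marks a gap in the form paradigm

def PF(lexeme, props):
    stem = Stem(lexeme, props)
    return None if stem is None else stem  # rules of exponence elided here

print(PF("TRAIRE", {"pret", "3", "sg"}))  # -> None
```

The gap is thus located exactly where the theory locates it: in the form paradigm (no form cell exists for the missing stem), while the content paradigm remains complete.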





... Syncretism

PFM2 predicts that syncretism may arise in two ways. On one hand, syncretism may reflect nothing more than a kind of poverty in a language's system of realization rules, which may simply fail to realize distinctions among cells belonging to a particular natural class. On the other hand, syncretism may reflect a systematic tendency for content cells differing in a particular way to share the same form correspondent. In the inflection of Sanskrit a-stem adjectives, for example, neuter forms generally inflect differently from masculine forms in the direct cases (nominative, vocative, and accusative). In the remaining, oblique cases of an a-stem adjective (i.e. in the instrumental, dative, ablative, genitive, and locative of all three numbers), neuter and masculine forms are alike. The inflection of PRIYA 'dear' in Table . illustrates this pattern. The gender syncretism of the oblique-case forms in Table . simply reflects the fact that the rules of exponence that define an a-stem adjective's oblique forms are insensitive to the distinction between neuter and masculine. That is, for each oblique case α and each number β, the content cells ⟨PRIYA, {n α β}⟩ and ⟨PRIYA, {m α β}⟩ form a natural class whose inflection is determined by the same rule.6 By contrast, the syncretic realization of the instrumental, dative, and ablative cells in the dual, that of the genitive and locative cells in the dual, and that of the dative and ablative cells in the plural cannot merely be attributed to poverty in the system of rules of exponence, since none of these three groups of cells constitutes a natural class. Here (and in the inflection of all Sanskrit nominals), certain content cells share the same form correspondent; that is, the definition of the property mapping relevant for the inflection of Sanskrit nominals entails the equations in ().

()

For any gender α,
a. pm({α ins du}) = pm({α dat du}) = pm({α abl du})
b. pm({α gen du}) = pm({α loc du})
c. pm({α dat pl}) = pm({α abl pl})
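A minimal sketch of how these equations collapse distinct content cells onto a single form correspondent follows. The merged case labels ("ida", "gl", "da") are hypothetical morphomic names invented for the sketch, not the chapter's notation.

```python
# Property mapping implementing the dual/plural case syncretisms in ():
# distinct content-cell property sets are sent to one form-cell property set,
# so the affected cells share a form correspondent.

def pm(sigma):
    gender, case, number = sigma
    if number == "du" and case in {"ins", "dat", "abl"}:
        case = "ida"   # (a): instrumental/dative/ablative dual merge
    elif number == "du" and case in {"gen", "loc"}:
        case = "gl"    # (b): genitive/locative dual merge
    elif number == "pl" and case in {"dat", "abl"}:
        case = "da"    # (c): dative/ablative plural merge
    return (gender, case, number)

# Two content cells, one form correspondent:
print(pm(("m", "ins", "du")) == pm(("m", "abl", "du")))  # -> True
```

Since none of the merged groups is a natural class of cases, no single rule of exponence could cover them; the property mapping is what forces the shared realization.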

Table .. Paradigm of Sanskrit  ‘dear’

         Singular (m / n)     Dual (m / n)      Plural (m / n)
Nom      priyaḥ / priyam      priyau / priye    priyāḥ / priyāṇi
Voc      priya / priya        priyau / priye    priyāḥ / priyāṇi
Acc      priyam / priyam      priyau / priye    priyān / priyāṇi
Ins      priyeṇa              priyābhyām        priyaiḥ
Dat      priyāya              priyābhyām        priyebhyaḥ
Abl      priyāt               priyābhyām        priyebhyaḥ
Gen      priyasya             priyayoḥ          priyāṇām
Loc      priye                priyayoḥ          priyeṣu

6 The fact that each of these classes lacks a feminine cell is a consequence of the fact that a-stem adjectives have no feminine inflection. Feminine forms with the meaning 'dear' are based on PRIYĀ, an ā-stem lexeme derived from PRIYA.





... Inflection classes

Inflection classes are sometimes seen as classes of lexemes; but the phenomenon of heteroclisis—the complementary participation of multiple inflection classes in the inflection of a single lexeme (Stump b)—shows that they are instead classes of stems.7 Inflection-class membership is morphomic (Aronoff ): syntax and semantics are utterly blind to these distinctions, whose grammatical importance is confined to the definition of a language's morphology. Accordingly, PFM2 does not include inflection-class properties in a lexeme's content paradigm, but only introduces them in the definition of property mappings, guaranteeing that they will be restricted to form paradigms. Thus, in a language with inflection-class distinctions, content cell ⟨L, σ⟩ ordinarily has ⟨Stem(⟨L, σ⟩), pmα(σ)⟩ as its form correspondent, where Stem(⟨L, σ⟩) belongs to inflection class α and α ∈ pmα(σ). In English, for example, the content cell (a) has the form correspondent (b) with realization (c), while the content cell (a) has the form correspondent (b) with realization (c).

() a. ⟨LEAN, {fin pst}⟩
   b. ⟨Stem(⟨LEAN, {fin pst}⟩), pmweak({fin pst})⟩ ( = ⟨lean, {weak fin pst}⟩)
   c. ⟨leaned, {weak fin pst}⟩

() a. ⟨MEAN, {fin pst}⟩
   b. ⟨Stem(⟨MEAN, {fin pst}⟩), pmt-class({fin pst})⟩ ( = ⟨mean, {t-class fin pst}⟩)
   c. ⟨meant, {t-class fin pst}⟩
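One way to sketch this in code: the lexical entry supplies the stem's inflection class, the property mapping injects it into the form cell as a morphomic property, and the rules of exponence are keyed to that property. The dictionary lexicon and suffixal rules are an illustrative simplification; the data (lean/leaned, mean/meant) are those of the text.

```python
# Inflection-class properties live only in form paradigms: pm adds them,
# content cells never contain them, and exponence is keyed to them.

LEXICON = {"LEAN": ("lean", "weak"), "MEAN": ("mean", "t-class")}

def Corr(content_cell):
    lexeme, sigma = content_cell
    stem, cls = LEXICON[lexeme]
    # pm_cls introduces the morphomic class property into the property set.
    return (stem, sigma | {cls})

def realize(form_cell):
    stem, tau = form_cell
    if "pst" in tau:
        if "t-class" in tau:
            return stem + "t"   # mean -> meant
        if "weak" in tau:
            return stem + "ed"  # lean -> leaned
    return stem

print(realize(Corr(("MEAN", frozenset({"fin", "pst"})))))  # -> meant
```

Syntax only ever sees {fin pst}; the class property {weak} or {t-class} exists solely on the morphology's side of the interface.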

... Deponency

PFM2 accommodates the fact that from one lexeme to another, there may be a skewing of the relation between content cells and form cells. In Old Norse, for example, a present indicative cell ⟨L, σ⟩ in a verb's content paradigm ordinarily has ⟨Stem(⟨L, σ⟩), pmα(σ)⟩ as its form correspondent, where Stem(⟨L, σ⟩) belongs to conjugation α and α ∈ pmα(σ); but in the particular case in which L is a preterite-present verb, a present indicative cell ⟨L, σ⟩ in L's content paradigm has ⟨Stem(⟨L, σ⟩), pm′α(σ)⟩ as its form correspondent, where pm′α(σ) is the result of substituting the property 'past' for the property 'present' in pmα(σ). This pattern of form correspondence guarantees that in the present indicative, the (strong) stem of a preterite-present verb such as MUNU 'will' exhibits the morphology typical of an ordinary strong verb's past-tense forms, as in Table .. In §., I exemplify the use of PFM2 in the systematic analysis of a set of inflectional data that incorporates several deviations from canonical inflection.

7 This does not, of course, mean that there are no grammatically significant classes of lexemes. Certainly there are, for example the class of deponent verbs in Latin. But these lexeme classes are not inflection classes. Thus, in Latin, the stems of deponent verbs belong to the same range of conjugation classes as the stems of nondeponent verbs; for instance, the present stem of the deponent verb HORTĀRĪ 'urge' belongs to the 1st conjugation, that of CŌNFITĒRĪ 'confess' to the 2nd conjugation, that of SEQUĪ 'follow' to the 3rd conjugation, that of PATĪ 'suffer' to the ‑io subclass of the 3rd conjugation, and that of MŌLĪRĪ 'work at' to the 4th conjugation.





Table .. Indicative forms of three Old Norse verbs fara ‘go’ (strong) Present 1sg

Past

munu ‘will’ (preterite-present)

dœma ‘judge’ (weak)

fęr

mun

dœm-i

2sg

fęr-r

mun-t

dœm-ir

3sg

fęr-r

mun

dœm-ir

1pl

f r-um

mun-um

dœm-um

2pl

far-ið

mun-uð

dœm-ið

3pl

far-a

mun-u

dœm-a

1sg

fór

mun-da

dœm-da

2sg

fór-t

mun-dir

dœm-dir

3sg

fór

mun-di

dœm-di

1pl

fór-um

mun-dum

dœm-dum

2pl

fór-uð

mun-duð

dœm-duð

3pl

fór-u

mun-du

dœm-du

Note: Shaded forms highlight the deponency of preterite-present verbs.

. A sample analysis in PFM2: Bena-bena verb inflection

.................................................................................................................................. In Bena-bena (Gorokan, Papua New Guinea), verbal lexemes fall into three inflection classes, exemplified by the verbs HO 'hit', BU 'go', and FU 'pierce' in Table ..8 Each verb instantiates the schematic content paradigm in Table .; this paradigm has twenty-seven cells, one for each member of the ternary Cartesian product {sg, du, pl} × {1, 2, 3} × {prs, pst, fut}. (The content paradigms of the three verbal lexemes in Table . are simply the respective results of substituting HO, BU, and FU for L in Table ..) The forms in Table . suggest that each verb in Bena-bena has three stems, here labeled stems (i) [ho-, bu-, fu-], (ii) [ha-, ba-, fi-], and (iii) [ha-, bi-, fi-]. In all three conjugation classes, the singular present forms exhibit stem (i) in the first person, stem (ii) in the second person, and stem (iii) in the third person; in Classes A and C, stems (ii) and (iii) are identical [ha-, fi-]. In all three conjugation classes, stem (i) is used in first-person present-tense forms, stem (iii) in third-person singular present-tense forms as well as in all future-tense forms, and stem (ii) elsewhere in the present tense. The precise pattern of stem

8 The analysis developed here is an outgrowth of that presented by Stump ().





Table .. Synthetic forms of three verbal lexemes in Bena-bena  ‘hit’, Class A Prs Sg  hobe  hane  habe

 ‘go’, Class B

Pst

Fut

Prs

Pst

Fut

hoɁohube hoɁahane hoɁehibe

halube halane halibe

bube bane bibe

buɁohube bilube buɁahane bilane buɁehibe bilibe

 ‘pierce’, Class C Prs

Pst

Fut

fube fine fibe

fiɁohube fiɁahane fiɁehibe

filube filane filibe

Du  hoɁibe hoɁohuɁibe haluɁibe  haɁibe hoɁehaɁibe halaɁibe  haɁibe hoɁehaɁibe halaɁibe

buɁibe buɁohuɁibe biluɁibe baɁibe biɁehaɁibe bilaɁibe baɁibe biɁehaɁibe bilaɁibe

fuɁibe fiɁohuɁibe filuɁibe fiɁibe fiɁehaɁibe filaɁibe fiɁibe fiɁehaɁibe filaɁibe

Pl  hone  habe  habe

bune babe babe

fune fibe fibe

hoɁohune hoɁehabe hoɁehabe

halune halabe halabe

buɁohune bilune biɁehabe bilabe biɁehabe bilabe

fiɁohune fiɁehabe fiɁehabe

filune filabe filabe

Source: Young .

Table .. Schematic content paradigm for Bena-bena verbs hL, { sg prs}i hL, { sg prs}i hL, { sg prs}i hL, { du prs}i hL, { du prs}i hL, { du prs}i hL, { pl prs}i hL, { pl prs}i hL, { pl prs}i

hL, { sg pst}i hL, { sg pst}i hL, { sg pst}i hL, { du pst}i hL, { du pst}i hL, { du pst}i hL, { pl pst}i hL, { pl pst}i hL, { pl pst}i

hL, { sg fut}i hL, { sg fut}i hL, { sg fut}i hL, { du fut}i hL, { du fut}i hL, { du fut}i hL, { pl fut}i hL, { pl fut}i hL, { pl fut}i

distribution among a verb's past-tense forms depends on the conjugation class to which it belongs. By default, stem (iii) is used throughout the past tense. This default is overridden in Class A, where stem (i) is instead used throughout the past tense. In the past tense of Class B verbs, stem (i) rather than stem (iii) is used in forms that are first-person or singular. These distributional properties follow from the definition of the Bena-bena Stem function in ().

()
a. Stem(⟨L, σ:{1 prs}⟩) = L's stem (i)
b. Stem(⟨L, σ:{3 sg prs}⟩) = L's stem (iii)
c. Stem(⟨L, σ:{prs}⟩) = L's stem (ii)
d. Stem(⟨L, σ:{fut}⟩) = L's stem (iii)
e. Stem(⟨L, σ:{pst}⟩) = L's stem (iii)
f. If X = L's stem (i) and belongs to Class B, then Stem(⟨L, σ:{1|sg pst}⟩) = X
g. If X = L's stem (i) and belongs to Class A, then Stem(⟨L, σ:{pst}⟩) = X





Table .. Paradigm linkage for the presenttense cells of the Class A verb  ‘hit’ Content cells h, { sg prs}i h, { sg prs}i h, { sg prs}i h, { du prs}i h, { du prs}i h, { du prs}i h, { pl prs}i h, { pl prs}i h, { pl prs}i

Form correspondents ! ! ! ! ! ! ! ! !

hho, { sg prs}i hha, {– c prs}i hha, {– sg prs}i hho, { du prs}i

}

hha, {– du prs}i hho, { c prs}i

}

hha, {– pl prs}i

Although a verbal lexeme's content paradigm has twenty-seven cells, the corresponding form paradigm of each verb has only twenty-one cells; this is because, in each tense, the second- and third-person forms are systematically syncretized in both the dual and the plural. Moreover, second-person singular forms and first-person plural forms constitute a morphomic category identified by the suffix ‑ne; number distinctions are apparently insignificant for the inflection of forms in this category, which is here indexed with a morphomic property c. These facts are accounted for by the Bena-bena property mapping pm, defined in (). In this definition, the notation σ[p/q] represents the property set that is like σ except that property q appears in place of property p; when clauses in this definition compete, the clause with the narrowest domain of application prevails.

()

Where α ∈ {, }, a. pm(σ:{ sg}) = b. pm(σ:{ pl}) = c. pm(σ:{α}) = d. pm(σ) =

pm(σ[sg/c]) σ[pl/c] σ[α/–] σ

Given the definition of Stem in () and that of pm in (), the Bena-bena Corr function conforms to the definition in (). In accordance with this definition, the present-tense cells in the content paradigm of the Class A verbal lexeme HO 'hit' have the form correspondents in Table .. In this way, the three verbs in Table . have the form paradigms in Table .. The realization of the form cells in Table . is effected by the rules of exponence in ().

() Block I:
a. XV, {1 pst} → XɁoh
b. XV, {– c pst} → XɁah
c. XV, {pst} → XɁeh
d. XV, {fut} → Xl




Block II:
a. XV, {prs} → X
Where α = pst or fut:
b. XV, {1 α} → Xu
c. XV, {– sg α} → Xi
d. XV, {} → Xa
Block III:
XV, {du} → XɁi
Block IV:
a. XV, {} → Xbe
b. XV, {c} → Xne
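The pieces of the analysis, Stem (), pm (), Corr, and rule blocks I–IV, can be composed into a runnable sketch and checked against the forms in Table .. Property sets are modeled as frozensets of strings, and Pāṇinian competition is encoded by clause ordering (narrowest condition tested first); the encoding itself is illustrative, not the chapter's formalism.

```python
# Bena-bena verbal inflection: Stem, pm, Corr, and Blocks I-IV composed
# into the paradigm function PF.

LEXEMES = {
    "HO": ({"i": "ho", "ii": "ha", "iii": "ha"}, "A"),  # 'hit'
    "BU": ({"i": "bu", "ii": "ba", "iii": "bi"}, "B"),  # 'go'
    "FU": ({"i": "fu", "ii": "fi", "iii": "fi"}, "C"),  # 'pierce'
}

def pm(s):
    s = set(s)
    if {"2", "sg"} <= s:                 # (a) pm(sigma:{2 sg}) = pm(sigma[sg/c])
        return pm((s - {"sg"}) | {"c"})
    if {"1", "pl"} <= s:                 # (b) pm(sigma:{1 pl}) = sigma[pl/c]
        return frozenset((s - {"pl"}) | {"c"})
    for a in ("2", "3"):                 # (c) pm(sigma:{alpha}) = sigma[alpha/-]
        if a in s:
            return frozenset((s - {a}) | {"–"})
    return frozenset(s)                  # (d) default

def stem(lexeme, s):
    stems, cls = LEXEMES[lexeme]
    if "prs" in s:
        if "1" in s:
            return stems["i"]            # (a)
        if {"3", "sg"} <= s:
            return stems["iii"]          # (b)
        return stems["ii"]               # (c)
    if "fut" in s:
        return stems["iii"]              # (d)
    if cls == "A":
        return stems["i"]                # (g) overrides (e)
    if cls == "B" and ("1" in s or "sg" in s):
        return stems["i"]                # (f)
    return stems["iii"]                  # (e)

def corr(lexeme, s):
    return stem(lexeme, s), pm(s)

def PF(lexeme, s):
    form, t = corr(lexeme, frozenset(s))
    if "pst" in t:                       # Block I
        form += "Ɂoh" if "1" in t else ("Ɂah" if "c" in t else "Ɂeh")
    elif "fut" in t:
        form += "l"
    if "prs" not in t:                   # Block II ('prs' adds nothing)
        form += "u" if "1" in t else ("i" if "sg" in t else "a")
    if "du" in t:                        # Block III
        form += "Ɂi"
    form += "ne" if "c" in t else "be"   # Block IV
    return form

print(PF("HO", {"1", "sg", "pst"}))  # -> hoɁohube
```

Running the function over the 27 content cells of each lexeme reproduces the 21 distinct realized forms per verb, with the dual/plural person syncretisms and the morphomic c category falling out of pm rather than of the rules of exponence.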

As this analysis suggests, PFM2 ordinarily defines three paradigms for each lexeme and its associated stems: a content paradigm, a form paradigm, and a realized paradigm. The three paradigms defined for the Bena-bena verb HO 'hit' and its stems ho and ha are spelled out in Table .. In instances of canonical inflection, the difference between a lexeme L's content paradigm and its stem Z's form paradigm is trivial: for each content cell ⟨L, σ⟩ there is a form cell ⟨Z, σ⟩, and vice versa. This canonical pattern is, however, a rarity; in Table ., for example, the form correspondent of a content cell ⟨HO, σ⟩ may have ho or ha as its stem (depending on the value of σ) and has a property set distinct from σ everywhere but in the first person of the singular and dual.

Table .. Form paradigms of three Bena-bena verbs  ‘hit’, Class A

 ‘go’, Class B

 ‘pierce’, Class C

hho, { sg prs}i hha, {– c prs}i hha, {– sg prs}i hho, { du prs}i hha, {– du prs}i hho, { c prs}i hha, {– pl prs}i

hbu, { sg prs}i hba, {– c prs}i hbi, {– sg prs}i hbu, { du prs}i hba, {– du prs}i hbu, { c prs}i hba, {– pl prs}i

hfu, { sg prs}i hfi, {– c prs}i hfi, {– sg prs}i hfu, { du prs}i hfi, {– du prs}i hfu, { c prs}i hfi, {– pl prs}i

hho, { sg pst}i hho, {– c pst}i hho, {– sg pst}i hho, { du pst}i hho, {– du pst}i hho, { c pst}i hho, {– pl pst}i

hbu, { sg pst}i hbu, {– c pst}i hbu, {– sg pst}i hbu, { du pst}i hbi, {– du pst}i hbu, { c pst}i hbi, {– pl pst}i

hfi, { sg pst}i hfi, {– c pst}i hfi, {– sg pst}i hfi, { du pst}i hfi, {– du pst}i hfi, { c pst}i hfi, {– pl pst}i

hha, { sg fut}i hha, {– c fut}i hha, {– sg fut}i hha, { du fut}i hha, {– du fut}i hha, { c fut}i hha, {– pl fut}i

hbi, { sg fut}i hbi, {– c fut}i hbi, {– sg fut}i hbi, { du fut}i hbi, {– du fut}i hbi, { c fut}i hbi, {– pl fut}i

hfi, { sg fut}i hfi, {– c fut}i hfi, {– sg fut}i hfi, { du fut}i hfi, {– du fut}i hfi, { c fut}i hfi, {– pl fut}i





Table .. The content paradigm of  ‘hit’ (Class A) and the form and realized paradigms of its stems ho and ha Content paradigm

Form paradigm

Realized paradigm

h, { sg prs}i h, { sg prs}i h, { sg prs}i

h, { sg pst}i h, { sg pst}i h, { sg pst}i

h, { sg fut}i h, { sg fut}i h, { sg fut}i

h, { du prs}i h, { du prs}i h, { du prs}i

h, { du pst}i h, { du pst}i h, { du pst}i

h, { du fut}i h, { du fut}i h, { du fut}i

h, { pl prs}i h, { pl prs}i h, { pl prs}i

h, { pl pst}i h, { pl pst}i h, { pl pst}i

h, { pl fut}i h, { pl fut}i h, { pl fut}i

hho, { sg prs}i hha, {– c prs}i hha, {– sg prs}i

hho, { sg pst}i hho, {– c pst}i hho, {– sg pst}i

hha, { sg fut}i hha, {– c fut}i hha, {– sg fut}i

hho, { du prs}i hha, {– du prs}i

hho, { du pst}i hho, {– du pst}i

hha, { du fut}i hha, {– du fut}i

hho, { c prs}i hha, {– pl prs}i

hho, { c pst}i hho, {– pl pst}i

hha, { c fut}i hha, {– pl fut}i

hhobe, { sg prs}i hhane, {– c prs}i hhabe, {– sg prs}i

hhoɁohube, { sg pst}i hhoɁahane, {– c pst}i hhoɁehibe, {– sg pst}i

hhalube, { sg fut}i hhalane, {– c fut}i hhalibe, {– sg fut}i

hhoɁibe, { du prs}i hhaɁibe, {– du prs}i

hhoɁohuɁibe, { du pst}i hhoɁehaɁibe, {– du pst}i

hhaluɁibe, { du fut}i hhalaɁibe, {– du fut}i

hhone, { c prs}i hhabe, {– pl prs}i

hhoɁohune, { c pst}i hhoɁehabe, {– pl pst}i

hhalune, { c fut}i hhalabe, {– pl fut}i

. Beyond inflection

.................................................................................................................................. PFM is first and foremost a theory of inflectional morphology, but the question naturally arises whether the architectural principles that the theory assumes for inflection might not also be relevant to lexeme formation. Some properties of inflection are, of course, less obviously observable in the domain of lexeme formation, but this fact is not necessarily a sign that inflection and lexeme formation follow different definitional principles. If the inflectional morphology of a lexeme L is seen as the realization of systematic distinctions of content among the cells of L's paradigm, one can likewise see at least some derivation as involving a similar sort of realization. For instance, one might assume that the intransitive verb BREAKi realizes the inchoative cell in a derivational paradigm whose other cells are realized by lexemes deriving from (or related in some fashion to) BREAKi, including the causative verb BREAKt, the passive potential adjective BREAKABLE, the result noun BREAKAGE, and so on. In this example, it is easy to see the cells in the derivational paradigm as manifestations of a systematic network of semantic contrasts that is recurrently





manifested among the cells of other derivational paradigms; the network in (a), for instance, is further embodied by the lexemes in (b).

()
a. inchoative / causative / passive potential / result
b. SHRINKi / SHRINKt / SHRINKABLE / SHRINKAGE

Prima facie, however, there are numerous complications with this analogy between inflectional and derivational paradigms. Networks of semantic contrasts such as (a) are sometimes embodied by unrelated lexemes rather than by members of a single derivational family; thus, the inchoative DIE corresponds to the causative KILL. There are sometimes accidental gaps in the embodiment of some network of contrasts; for example, the result noun FALSIFICATION, the passive potential adjective FALSIFIABLE, and the causative verb FALSIFY lack any corresponding inchoative (*their theory falsified). The lexeme embodying a particular semantic contrast may have a meaning that does not quite conform, either instead of or in addition to a meaning that does; for instance, the adjective QUOTABLE most often means 'worthy of quotation' rather than simply 'able to be quoted', and POTABLE always means more than DRINKABLE (unsafe whether in solid or drinkable/*potable form). Cases of this latter sort raise the question of whether formation rather than meaning should be taken as the basis for postulating derivational paradigms. Consider, for example, the derivation of verbs in ‑ize. Although the meaning of a verb in ‑ize can in general be characterized as either inchoative or causative in character, the particular relation between that meaning and that of the base from which the verb derives is highly variable. While vaporize has the transparent meanings 'turn to vapor' or 'cause to turn to vapor', the other derivatives in ‑ize in () fail to exhibit this same transparency and diverge from it in a variety of ways. Indeed, the ‑ize derivative of the noun poster has several possible meanings, including 'reproduce (something) as a poster', 'print or display (an image) using a restricted number of colors', and 'perform a spectacular play—usually a slam dunk—against (an opponent in basketball)'.
Thus, derivatives in ‑ize constitute a coherent class in terms of morphological form, but they fail to embody any coherent semantic contrast with the bases from which they derive. ()

brutalize     'behave brutally toward'
burglarize    'commit burglary from (building) or against (someone)'
climatize     'prepare for use in a particular climate'
hospitalize   'admit to a hospital for treatment'
lionize       'treat (someone) as a celebrity'
memorize      'commit to memory'
Mirandize     'inform (someone) of one's legal rights in an arrest'
scrutinize    'devote scrutiny to'
terrorize     'provoke terror in (someone)'
vaporize      'turn to vapor; cause to turn to vapor'
visualize     'form a mental image of'
womanize      'philander'

Nevertheless, networks of semantic contrasts do seem to exist independently of the particular morphology used to express them. For example, the six nouns in ‑ics listed in





Table .. The derivational morphology of six domain nouns and the corresponding personal nouns ic mechanics athletics genetics mathematics gymnastics economics

e

mechanic * athlete * * * * * * * * *

icist

ician

?mechanician * * * geneticist * mathematician * * *

* *

Ø

ist

* * * * gymnast

?mechanist

*

* * * * economist

the first column of Table . all name domains of activity, and for each domain, there is a corresponding noun referring to a person who works or performs in that domain. What is striking is that although these corresponding personal nouns may be formed in a variety of ways, there nevertheless tends to be only a single corresponding personal noun for each domain noun. The relation between content-based patterns (such as the pattern exemplified by each row in Table .) and form-based patterns of derivation (such as the pattern shared by the verbs in ()) recalls the relation between content paradigms and form paradigms in PFM2. Thus, one might think of a derivative lexeme as being situated not only with respect to a form-based pattern, but also—at least potentially—with respect to a content-based pattern. Content-based patterns and form-based patterns could thus be represented as nodes in an inheritance network in which a derived lexeme's content and form would frequently involve relations of orthogonal multiple inheritance.

. Interfaces

..................................................................................................................................

PFM is an autonomous theory of morphology, in which no effort is made to derive the characteristics of morphology purely from principles and representations whose motivation is outside the domain of morphology. Even so, PFM assumes a rich configuration of interfaces between morphology and other domains of grammatical organization. Although PFM does not situate morphology in the lexicon, it does presuppose extensive interaction between a language’s morphology and its lexicon. Rules of morphology are assumed to serve two purposes with respect to the lexicon. On one hand, they allow new lexemes to be created and—at least potentially—to enter the lexicon. On the other hand, they express generalizations that make it possible to simplify the lexicon. The limiting case of such simplification would be a lexicon that is entirely nonredundant, from which all predictable information is excluded. Experimental evidence suggests that the mental lexicon does not instantiate this extreme, but nevertheless benefits to some extent from the economy afforded by predictive rules. See Jackendoff and Audring (Chapter  this volume) for discussion of this issue.


Because PFM was originally developed as a way of modeling inflectional systems, relative productivity in PFM has generally been modeled by the interaction of default rules with overrides, some of which take the form of narrower rules, and others, the form of lexical stipulations. For example, the most productive rule of past-tense inflection in English, that of weak verbs, has the very general formulation in (). This rule competes with and is overridden by the narrower rule in (), which applies to form cells such as those in (a) to produce the realized cells in (b). Rules () and () participate in the evaluation of the English paradigm function. For certain form cells, however, the value of the paradigm function is simply stipulated in the lexicon, as in (); such stipulations override any and all rules that might otherwise be applicable. ()

{fin pst}: X → Xd

()

{t-class fin pst}: X → X′t, where X′ arises from X through the laxing and lowering of its stem vowel.

()

a. ⟨mean, {t-class fin pst}⟩, ⟨feel, {t-class fin pst}⟩, ⟨sleep, {t-class fin pst}⟩
b. ⟨meant, {t-class fin pst}⟩, ⟨felt, {t-class fin pst}⟩, ⟨slept, {t-class fin pst}⟩

()

PF(⟨go, {fin pst}⟩) = ⟨went, {fin pst}⟩
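The interaction just described, between a general default rule, a narrower override rule, and outright lexical stipulation, can be sketched programmatically. The following Python fragment is purely illustrative, not part of PFM's formalism: the function and data names are invented, and the t-class stems are simply listed rather than derived by vowel laxing and lowering.

```python
# Sketch of PFM-style rule interaction: lexical stipulations override
# narrower rules, which override the general default. Names are illustrative.

# Lexical stipulations override every rule (cf. went for <go, {fin pst}>).
LEXICAL_STIPULATIONS = {
    ("go", frozenset({"fin", "pst"})): "went",
}

# Lexemes whose past tense follows the narrower t-class rule; the lax/lowered
# stems are listed directly rather than computed.
T_CLASS = {"mean": "meant", "feel": "felt", "sleep": "slept"}

def weak_past(stem):
    """Most general (default) past-tense rule: X -> Xd (orthography simplified)."""
    return stem + "d"

def paradigm_function(lexeme, features):
    """Evaluate the paradigm function for a cell <lexeme, features>;
    narrower specifications take precedence over more general ones."""
    cell = (lexeme, frozenset(features))
    if cell in LEXICAL_STIPULATIONS:                    # most specific: lexical override
        return LEXICAL_STIPULATIONS[cell]
    if lexeme in T_CLASS and {"fin", "pst"} <= set(features):
        return T_CLASS[lexeme]                          # narrower rule: X -> X't
    if {"fin", "pst"} <= set(features):
        return weak_past(lexeme)                        # general default rule
    return lexeme

print(paradigm_function("walk", {"fin", "pst"}))  # walkd (default rule)
print(paradigm_function("mean", {"fin", "pst"}))  # meant (narrower rule)
print(paradigm_function("go", {"fin", "pst"}))    # went  (lexical stipulation)
```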

Morphology also has a significant interface with phonology. Naturally, the forms defined by a language’s morphology are subject to the automatic processes of its phonology. Morphological operations are likewise frequently conditioned by the phonological properties of the formatives on which they operate; the reverse is also sometimes true, as in Sanskrit, where the generally automatic rules of sandhi are nevertheless suspended in the pronunciation of dual forms (whether nominal or verbal) that end with e, ī, or ū (rule .. in Pāṇini’s Aṣṭādhyāyī). Moreover, a language’s morphology defines shape conditions (Zwicky b), allomorphy that is conditioned by a word form’s syntagmatic context (e.g. the adjectival alternation of beau chien / bel ami).

The development of a realistic interface of inflectional morphology with syntax and semantics is one of the principal goals of PFM. Traditional morphemics suggested that the relation between a word’s content and its form is ideally isomorphic, but empirical evidence reveals how rarely such isomorphism is actually achieved or maintained and how varied the deviations from it may be. Morphology is a language’s most fundamental interface of content with form, and PFM is designed to make the nature of that interface as precisely delineable as possible.

. Future directions

..................................................................................................................................

Over twenty-five years, PFM has undergone gradual change, and it is likely to continue evolving as linguists acquire greater understanding of the nature of morphology. A number of important issues must still be addressed, two of which I shall mention here. The first of these concerns the nature of exponence: whether exponence should be subdivided into different types. Consider, for example, the sentences in () in Southern Sotho (Niger-Congo, Lesotho). In both examples, the verb inflects for the noun class and number of both the subject and object arguments. What is striking is that the same affix that is used for subject concord in one sentence is also used for object concord in the other sentence. This same double usage is observable for nearly all of the verbal concord affixes. What distinguishes subject concord from object concord is position: in these examples, the subject prefix precedes the tense prefix, while the object prefix follows it. The issue therefore arises whether the concord affixes in these examples serve as intrinsic exponents of a particular noun class and number while also serving as positional exponents of either subject or object properties according to their position. What sorts of economies would a distinction between intrinsic and positional exponence support in this and other languages? How would such an approach compare to alternative approaches to this phenomenon (e.g. those of Stump  and Crysmann and Bonami )?

()

a. ba‑tla‑bō‑bòna :{: }--:{: }-see ‘they (:) will see it (:)’

b. bō‑tla‑ba‑bòna :{: }--:{: }-see ‘it (:) will see them (:)’

A second key issue for future research concerns the role of rule conflation in the definition of a language’s morphotactics. Descriptive grammars often assume that an affix can itself be morphologically complex. Consider, for example, the inflectional paradigm of the adjective  ‘big’ in Noon (Niger-Congo, Senegal) shown in Table .. In the analysis of these forms proposed by Soukka (), the noun class markers (w-, f-, m-, k-, p-, j-, c-, t-, y-, ɓ-, j-, t-) serve four different functions: they join with a prefixal formative i- to form a morphologically complex prefix expressing the noun class and number of the noun that

Table .. The inflection of the Noon adjective  ‘big’

Nondiminutive Inanimate sg

Animate Diminutive Source: Soukka .

Noun class

Indefinite

. . . . . .

wi‑yak fi‑yak mi‑yak ki‑yak pi‑yak ji‑yak

Definite Location 

Location 

Location 

wi‑yak‑wii fi‑yak‑fii mi‑yak‑mii ki‑yak‑kii pi‑yak‑pii ji‑yak‑jii

wi‑yak‑wum fi‑yak‑fum mi‑yak‑mum ki‑yak‑kum pi‑yak‑pum ji‑yak‑jum

wi‑yak‑waa fi‑yak‑faa mi‑yak‑maa ki‑yak‑kaa pi‑yak‑paa ji‑yak‑jaa

pl

.– ci‑yak .– ti‑yak

ci‑yak‑cii ti‑yak‑tii

ci‑yak‑cum ti‑yak‑tum

ci‑yak‑caa ti‑yak‑taa

sg pl

yi‑yak ɓi‑yak

yi‑yak‑yii ɓi‑yak‑ɓii

yi‑yak‑yum ɓi‑yak‑ɓum

yi‑yak‑yaa ɓi‑yak‑ɓaa

sg pl

ji‑yak ti‑yak

ji‑yak‑jii ti‑yak‑tii

ji‑yak‑jum ti‑yak‑tum

ji‑yak‑jaa ti‑yak‑taa


the adjective modifies; and they join with three different suffixal formatives to produce morphologically complex suffixes indicating that the modified noun is definite and that it denotes something near the speaker (‑ii), near the addressee (‑um), or near neither speaker nor addressee (‑aa). Although descriptive linguists have often built upon the notion that affixes can themselves be morphologically complex, this idea has not enjoyed much favor among morphological theorists. Would incorporating a concept of rule conflation into morphological theory make it possible to capture new, heretofore elusive, insights? These and other questions await precise consideration.
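The idea of a conflated rule, in which a noun-class marker combines with independent formatives to yield morphologically complex affixes, can be sketched as follows. This Python fragment illustrates the general idea only, not Soukka's analysis itself: the function names and the deictic labels (loc1/loc2/loc3) are invented, hyphens are added purely to show morph boundaries, and vowel harmony and other details are ignored.

```python
# Sketch of rule conflation for Noon adjective inflection: the class marker
# combines with the formative i- to yield a complex prefix, and with a
# deictic formative to yield a complex definite suffix. Illustrative only.

DEICTIC = {"loc1": "ii",   # near the speaker
           "loc2": "um",   # near the addressee
           "loc3": "aa"}   # near neither speaker nor addressee

def class_prefix(cls):
    """Conflate class marker + formative i- into a complex prefix."""
    return cls + "i"

def definite_suffix(cls, loc):
    """Conflate class marker + deictic formative into a complex suffix."""
    return cls + DEICTIC[loc]

def inflect(stem, cls, loc=None):
    """Build an (in)definite adjective form; hyphens mark morph boundaries."""
    form = class_prefix(cls) + "-" + stem
    if loc is not None:                     # definite forms add a complex suffix
        form += "-" + definite_suffix(cls, loc)
    return form

print(inflect("yak", "w"))           # wi-yak      (indefinite)
print(inflect("yak", "w", "loc1"))   # wi-yak-wii  (definite, near speaker)
print(inflect("yak", "c", "loc3"))   # ci-yak-caa
```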

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi

  ......................................................................................................................

  ......................................................................................................................

 

. Introduction

..................................................................................................................................

Network Morphology belongs to the set of frameworks that are characterized as inferential-realizational in the well-known typology set out by Stump (). It therefore has much in common with Stump’s Paradigm Function Morphology. Network Morphology is inferential (as opposed to lexical) because it treats morphology as a matter of the application of rules to lexemes, rather than assuming that there are pieces of words (i.e. morphemes) that have their own lexical entries and attach to lexical roots (Brown and Hippisley : ). Network Morphology is realizational (as opposed to incremental) because morphological form is licensed by the requirements of the appropriate morphosyntactic features, rather than features being incrementally acquired through the addition of exponents. In virtue of these properties, the framework separates morphological form from the morphosyntactic features being expressed, and the lexeme is central to all of this because it belongs to, or inherits from, a specific set of rules that determine the forms required for its paradigm.

To characterize Network Morphology as inferential-realizational is insufficient, of course, to describe the framework in its entirety. It is also inheritance-based. Ever since its inception in work reported by Corbett and Fraser (), Network Morphology has treated the lexicon as an inheritance network in which generalizations about sets of words are located at nodes. Higher-level nodes make generalizations that can potentially apply across the lexicon, while lower-level nodes may represent smaller classes, with information being inherited by the terminal nodes in the hierarchy, the lexical entries. These lexical entries are not the word forms required for a particular cell in a lexeme’s paradigm.
Instead they are lexemes (generalizations over paradigms). In virtue of the inheritance mechanism, it is possible to infer the full paradigm for any given entry. This means that Network Morphology can minimize the size of the lexical entry, in contrast with ‘full entry’ theories, such as the Construction Morphology approach (Booij a: ; Masini and Audring, Chapter  this volume; Jackendoff and Audring, Chapter  this volume). Furthermore, because Network Morphology is inferential (i.e. morphology arises from the application of rules), what is inherited are sets of rules defining the lexeme’s paradigm. Each node in the network can be a source of several rules defining part of a lexeme’s paradigm.

Not only is Network Morphology inheritance-based; it uses a particular type of inheritance, namely default inheritance. As we will see in §., at its core Network Morphology uses the mechanism of default inference to achieve this. This mechanism is based on a fairly strict kind of Pāṇinian determinism (Stump : –), because it imposes an ordering on morphosyntactic features, and it means that inheritance in Network Morphology is naturally associated with a key idea in morphology: information from inheritance sources can be overridden. Default inheritance makes sense as a model of the morphology of languages, because morphology can be subject to many exceptions and irregularities. Allowing for overrides lets us observe what the core parts of the morphological system are. If we also distinguish between two notions of default, the normal case default and the exceptional case default, we can see that much irregularity does not involve resorting to something entirely outside the general rule system, but instead involves reverting to a more general rule. We discuss this in §...

Being able to allow for overrides has a number of other advantages for modelling morphology. For instance, as we see in the case study dealing with the morphological complexity of Nuer in §.., it is possible to quantify how well the lexicon performs in accounting for generalizations by counting the number of overrides required in the lexical entries. In §.. we briefly discuss the application of Network Morphology to the study of diachrony.
We first turn in §. to the representation of morphological information in Network Morphology.

. T N M 

..................................................................................................................................

In this section we make a distinction between the full morphological model, which is a structured representation of the facts to be explained, and the analysis that leads to that model. We define the full morphological model in general terms in ().

()

Full morphological model
Complete set of forms of lexemes and associated features relevant for syntax. (based on Brown and Hippisley : )

To illustrate some of the key ideas, we look at the Chukotko-Kamchatkan language Koryak. This will show the role of defaults in capturing similarities between declensions and allowing for differences. We will also consider the relationship between the features of number and case, for which Koryak poses a particular challenge. Table . represents what the full morphological model would say about the two major types of noun in Koryak. These are represented in DATR notation (Evans and Gazdar ). DATR is the knowledge representation language used to represent Network Morphology theories, because it can be used to implement default inheritance hierarchies.




Table .. Koryak noun declensions Declension I ‘father’

Declension II ‘papa’

En'pič: = father. = e n'p i _č. = e n'p i _č i _k. = e n'p i _č i _te. = a n'p e _č e _ŋqo. = a n'p e _č e _jpəŋ. = a n'p e _č e _ŋ. = a n'p e _č e _jtəŋ. = e n'p i _č i _n u. = e n'p i _č i _kj i _t. = e n'p i _č i _j i _t e.

Appa: = papa. = appa. = appa _na _k. = appa _na _k. = appa _na _ŋqo. = appa _na _jpəŋ. = appa _na _ŋ. = appa _na _jtəŋ. = appa _na _n o. = appa _na _kj e _t. = appa _j e _t a.

= e n'p i _č i _w. = e n'p i _č i _k. = e n'p i _č i _te. = a n'p e _č e _ŋqo. = a n'p e _č e _jpəŋ. = a n'p e _č e _ŋ. = a n'p e _č e _jtəŋ. = e n'p i _č i _n u. = e n'p i _č i _kj i _t. = e n'p i _č i _j i _t e.

= appa _w. = appa _jə _k. = appa _jə _k. = appa _jə _ka _ŋqo. = appa _jə _ka _jpəŋ. = appa _jə _kə _ŋ. = appa _jə _ka _jtəŋ. = appa _jə _čge _n o. = appa _jə _kj e _t. = appa _jə _ka _j e _t a.

= e n'p i _č i _t. = e n'p i _č i _k. = e n'p i _č i _te. = a n'p e _č e _ŋqo. = a n'p e _č e _jpəŋ. = a n'p e _č e _ŋ. = a n'p e _č e _jtəŋ. = e n'p i _č i _n u. = e n'p i _č i _kj i _t. = e n'p i _č i _j i _t e.

= appa _nte. = appa _jə _k. = appa _jə _k. = appa _jə _ka _ŋqo. = appa _jə _ka _jpəŋ. = appa _jə _kə _ŋ. = appa _jə _ka _jtəŋ. = appa _jə _čge _n o. = appa _jə _kj e _t. = appa _jə _ka _j e _t a.

Note: We do not discuss here the differences in form that arise from vowel harmony. The designative suffix ‑no/‑nu is the same suffix in both declensions, for instance. The Network Morphology analysis is available from http://networkmorphology.as.uky.edu/theory/chkoryakdtr. Source: Based on Žukova ().

We can make the following generalizations about the two declensions:
• In both Declension I and Declension II the absolutive case distinguishes three different numbers (singular, dual, and plural).
• Declension I has its own ergative case marker, while in Declension II ergative is syncretic with locative throughout.


• In Declension I number is not distinguished at all outside of the absolutive.
• In Declension II, outside of the absolutive, different forms of the stem distinguish singular (appana‑) and plural (appajə‑, appajəka‑, appajəkə‑, or appajəčge‑).
• In Declension II, outside of the absolutive, the dual has the same forms as the plural.

Our task is to account for the different behaviours in these two types of noun and explain how we arrive at the complete set of forms for these nouns in such a way that we can generalize what they share and capture what is specific to each type. This is the role of morphological analysis.

()

Morphological analysis
A sufficiently minimal and optimal description of a language’s morphological system such that, by applying the appropriate rules of inference, a full morphological model can be obtained. (Brown and Hippisley : )

From the morphological analysis we can infer a complete morphological model. We can speak of each of the lines in Table . as equations or facts, or rules (see Brown and Hippisley : ). For instance, the equation () tells us that the singular ergative form of the lexeme appa is appanak. ()

<mor sg erg> = appa _na _k.

Equations in the morphological model are represented using a single equals sign (=), as in (), while those in the morphological analysis are represented using a double equals sign (==), which we will see shortly in ().1 Each equation consists of a left-hand side (LHS) and a right-hand side (RHS). From a morphological analysis we can obtain a morphological model in which the sets of forms of lexemes are described using equations where the morphosyntax is provided in the LHS (e.g. <mor sg erg>) and the associated form in the RHS (e.g. appa _na _k), as is the case for the two lexemes in Table .. In a morphological analysis the RHS either specifies a form directly or provides a path representing an inheritance source for a form. Every LHS consists of a path, represented by angle brackets. A path contains attributes. Attributes can be used to represent morphosyntactic features, such as sg or erg, as well as other types of information, illustrated in Table . by mor, which represents the linguistic component (morphology), or by gloss, which provides a translation for the Koryak lexeme. Paths in Network Morphology representations require the attributes to be ordered. This ordering is important, and where attributes represent certain morphosyntactic features Network Morphology imposes an interpretation of the ordering as a kind of implicit typing. An alternative representation of the information in () would type the features as in ().

1 This distinction follows from the use of DATR as the language for representing Network Morphology theories. In DATR = is used to represent extensional statements and == is used to represent definitional statements (Evans and Gazdar : ). This is an important distinction, because extensional statements are typically only implicit, as they can be inferred from definitional statements.

()

{MODULE:mor NUM:sg CASE:erg} = appa _na _k.

In () MODULE:mor contrasts the information with that for, say, syntax, which would be syn. Here the other types are number (NUM) and case (CASE). The use of ordering defines a particular shape to the paradigm, so that number is treated as the first level of differentiation, for instance. That is, instead of using the explicit typing in (), Network Morphology uses the ordering within the paths to express constraints on the behaviour of morphosyntactic features with regard to phenomena such as syncretism and splits in the paradigm, as discussed in Brown and Hippisley (: –). (See Corbett  for a complete typology of paradigm splits.) The ordering of attributes in a path is important when we come to consider the relationship between the full morphological model and the morphological analysis. We can think of the morphological model as a full-entry lexicon. In contrast, the morphological analysis is an inheritance network that is a compact representation from which the fullentry lexicon can be generated by applying rules of inference. (The rules of inference are those provided by the DATR language.) One way in which the morphological analysis can be compact is through underspecification of attributes in paths. This means that paths in the morphological analysis vary in the number of attributes. (Throughout this chapter we take ‘underspecification’ to mean the partial representation in the morphological analysis of the featural information required in the full morphological model.) While the path in () contains the attributes mor, sg, and erg, some of the rules in the morphological analysis from which the singular ergative form can be inferred contain fewer attributes than this. For instance, as we shall see for () later, there is an equation that is involved in the inference of the singular ergative that refers just to the singular without being specific about case. 
In order to understand the role of attribute ordering we require the concept of path extension. In (a) we see the empty path. This is the path that contains zero attributes. All paths are extensions of the empty path. The path in (c) is an extension of the path in (b). The path in (d) is an extension of (c) and also of (b).

() a. <>
   b. <mor>
   c. <mor sg>
   d. <mor sg erg>
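Path extension can be stated very directly: one path extends another just in case the second is an initial subsequence of the first. The following Python fragment is an illustration only (paths are represented as tuples of attribute strings, a representation chosen for the sketch, not one prescribed by DATR):

```python
# Path extension, sketched with tuples of attributes: path p2 extends p1
# iff p1 is an initial subsequence (prefix) of p2. The empty path () is
# extended by every path.

def extends(p2, p1):
    """True if p2 is an extension of p1."""
    return p2[:len(p1)] == p1

empty = ()
b = ("mor",)
c = ("mor", "sg")
d = ("mor", "sg", "erg")

print(extends(d, c))      # True:  <mor sg erg> extends <mor sg>
print(extends(d, b))      # True:  it also extends <mor>
print(extends(c, empty))  # True:  all paths extend the empty path
print(extends(c, d))      # False: <mor sg> does not extend <mor sg erg>
```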



An important feature of morphological analyses in Network Morphology is the application of default inference. This is a fairly constrained form of Pāṇinian determinism (Stump : ) in that, in the absence of the exact path we require, we default to the most specific matching path. For example, if the syntax required the singular ergative form, in the morphological analysis we would look for an equation that has the LHS in (d). In the absence of a rule with (d) as an LHS, we will resort to (c) (i.e. <mor sg>) as the most specific matching path for <mor sg erg>.

As we have noted, Network Morphology assumes a particular ordering of the attributes that represent morphosyntactic features. Here we only discuss the ordering of case and number.2 Note that the case feature erg in (d) extends the path containing the number feature in (c). Brown and Hippisley (: ) argue for this ordering of case and number, articulated in (), as a constraint on the attributes that applies to the full morphological model, such as the full paradigm in Table .. It naturally follows that the morphological analysis must also obey these constraints, and this therefore determines how default inference can be applied.

Number and case
In paths containing case and number attributes, case attributes will extend number attributes (e.g. <mor sg erg>). (Brown and Hippisley : )

This ordering of attributes is taken as reflecting cross-linguistically general patterns for morphosyntactic feature structure. It can account for the simplest types of syncretism. Syncretism is the situation where different feature combinations share the same form (Baerman, Brown, and Corbett : ). Syncretism also has different causes. One of the simpler kinds, based on neutralization, reflects the loss of distinctions that are syntactically relevant. In Russian, for instance, in the plural gender distinctions are neutralized on all agreement targets, as is often true of other languages with gender. By treating gender as an extension of number (i.e. ordering gender after number in paths) this falls out naturally, as the gender information need not be specified in the rules describing the plural form of agreement targets, such as adjectives.

Over-differentiation is another phenomenon that can be accounted for by orderings, such as that in (). This is where an extra distinction is made in one part of the paradigm for a small number of items. For instance, Russian has over-differentiation in the singular (i.e. extra case values appearing in the singular), in that there are two additional cases for a small number of items, the second locative and the second genitive (Brown and Hippisley : –).3 The Russian second locative is an extra case which is relevant for syntax, because the prepositions v ‘in’ and na ‘on’ assign it. This type of extension or ordering reflects feature structure distinctions that are relevant for syntax.

The feature ordering, however, accounts only for the simple instances where behaviour in the morphology correlates to some extent with what we expect from the syntax. In the Russian over-differentiation examples, we know that, while the extra cases are relevant for assignment by prepositions, there are no extra distinctions in the plural. The ordering in () brings with it the expectation that while we might expect extra cases for a particular number, the opposite (extra numbers in a particular case) is not so likely. However, because morphology can have a life of its own, it is possible to observe both patterns that fit with the ordering and those which do not. Koryak presents a challenge for the ordering in (), because the absolutive case distinguishes dual number, while the other cases do not.

There is some evidence for the cross-linguistic tendency associated with the attribute orderings when one considers languages with syncretism. Baerman, Brown, and Corbett () looked at the cross-linguistic prevalence of syncretism among different features. They used the data from the Surrey Syncretisms Database to do this. The database aimed to be an exhaustive description of syncretism in thirty genealogically diverse languages. Syncretisms are represented as pairs of morphosyntactic descriptions. Consider the boxed syncretism in ().

2 See Brown and Hippisley (: –) and references there for an exposition of some of the other constraints, including discussion of problematic cases, as well as an explanation of how the ordering of number and gender relates to Greenberg’s (/: ) Universal .

3 More accurately, the Network Morphology treatment of case and number means that we allow for differential behaviour across the numbers. In constructions of the type idti v soldaty ‘become a soldier’ (lit. ‘go into the soldiers’) (Švedova : §), where the complement of the preposition is an animate noun typically denoting a profession, the noun has a form which is the same as the nominative plural, rather than the usual accusative plural that is identical with the genitive plural. One analysis of this is that in this structure the preposition governs the nominative (plural). This would have to be a second nominative form that occurs in the plural only. However, in this case it would still be conditioned by number, as it does not occur in the singular.

Locative/dative syncretism in a-stem nouns in Slovene (Baerman, Brown, and Corbett : )

‘grove’
nom sg   dobrava
acc sg   dobravo
gen sg   dobrave
loc sg   dobravi
dat sg   dobravi
ins sg   dobravo

We can represent the syncretism in () as a pair of morphosyntactic descriptions, as in (). ()

Feature interaction (number and case)
Number   Case
sg       loc
sg       dat

Baerman, Brown, and Corbett () investigated syncretism in the Surrey Syncretisms Database by transforming the pairs that represent syncretisms into characterizations in terms of the notions ‘Context’ and ‘Syncretic’. Where the feature values were the same they were characterized as ‘Context’. Where the feature values differed they were characterized as ‘Syncretic’. (The latter is, of course, the reason why the syncretism was included in the database.) The corresponding transformation of () yields (). ()

Feature interaction (number and case)
Number    Case
Context   Syncretic

While it is possible for number to be syncretic in the context of case, Baerman, Brown, and Corbett () noted that the tendency represented by () is the more frequent. Baerman, Brown, and Corbett () also make the following generalization:


()

‘For a word class domain D in language L If values from the feature case serve as context for a syncretism in D, then values from the feature case must be syncretic elsewhere in D.’ Baerman, Brown, and Corbett (: )

This means that () is also possible, but only if () is also found. ()

Feature interaction (number and case)
Number     Case
Syncretic  Context

This is a generalization that holds of the features case and number as a whole, rather than individual values of those features. Of course, syncretisms with the structure of () show that feature ordering is insufficient to describe all syncretisms, and our Koryak example in Table . cannot readily be described in terms of attribute ordering in () alone. But the Koryak paradigm in Table . does obey the generalization (), because there is also ergative and locative syncretism (appanak). We repeat the equations from Table . that show this: ()

<mor sg erg> = appa _na _k.
<mor sg loc> = appa _na _k.

The syncretism of ergative and locative in Declension II also occurs across numbers. The dual locative also has the same form as the dual ergative (appajək): ()

<mor du erg> = appa _jə _k.
<mor du loc> = appa _jə _k.

The plural locative has the same form as the plural ergative (appajək): ()

<mor pl erg> = appa _jə _k.
<mor pl loc> = appa _jə _k.

The pair in () can be reduced to a characterization of the form represented in (). The forms in () and () as individual pairs of syncretisms will also reduce to the form represented in (), but the dual-plural syncretism between the forms in () and () also means that syncretism pairs of the form () exist.4 Koryak exhibits syncretisms of the form () and (). This still accords with (), because Koryak does not exhibit syncretisms only of the form (). How are these different types of syncretism accounted for in Network Morphology? In §.. we will show that the syncretism of ergative and locative results from underspecification of case in line with (), while the number syncretism is the result of generalized referral (Brown and Hippisley : –). 4 Syncretism pairs such as plural locative and dual ergative can also be identified. These reduce to characterizations in which both number and case are ‘Syncretic’. See Baerman, Brown, and Corbett (: ) for discussion.




                MOR_NOUN
               /        \
          DECL_1        DECL_2

Figure .. A default inheritance hierarchy for Koryak nouns

Morphological analyses in Network Morphology are grounded in default inheritance networks. There are two types of representation of any Network Morphology theory: an informal pictorial representation and the implementation in the DATR language, which can be tested to determine whether it generates the correct forms. A simple default inheritance hierarchy for Koryak nouns is represented pictorially in Figure ..

Figure . consists of three nodes: MOR_NOUN, DECL_1, and DECL_2. The nodes can be understood as locations for rules about different morphological classes. So MOR_NOUN is a source of generalizations about the morphology of nouns, while DECL_1 is a source of information about first declension nouns and DECL_2 is a source of information about second declension nouns. The lines in Figure . represent inheritance, which can be thought of as flowing downwards. Rules about the morphology of Koryak nouns may be inherited from MOR_NOUN by DECL_1 and DECL_2. The DATR representation of the hierarchy in Figure ., including some of the rules housed at each of the three nodes, is given in (). (Where rules have been omitted, this is marked by ellipses.)

()

MOR_NOUN:
    <>           == ...
    <mor sg abs> == "<stem>"
    <mor du>     == "<mor pl>"
    <mor pl abs> == "<stem>" _w
    ...

DECL_1:
    <>           == MOR_NOUN
    <mor du abs> == "<stem>" _t
    ...

DECL_2:
    <>           == MOR_NOUN
    <mor du abs> == "<stem>" _nte
    ...


In a morphological analysis equations may have paths in their RHS, or forms, or a combination of these. Equations may also specify nodes as inheritance sources. In () at both DECL_1 and DECL_2 there is the equation == MOR_NOUN, with a node (MOR_NOUN) in its RHS. This equation corresponds to the inheritance lines in Figure .. Recall from our discussion of () that default inference determines the application of the equations. For instance, consider the inference of the singular absolutive form () of a noun that inherited from DECL_1. At DECL_1, for which all equations are given in (), the singular absolutive form is not specified. We therefore need to use the most specific path at DECL_1 of which is an extension. The most specific matching path of which is an extension is . This means that the equation == MOR_NOUN applies, and we look for the value for at the node MOR_NOUN. At MOR_NOUN there is an equation that matches with this query exactly. So the specification of singular absolutive is inherited from MOR_NOUN by DECL_1. In contrast, the dual absolutive is specified directly at DECL_1 and not inherited from MOR_NOUN. Furthermore, if we wished to know about the dual form for, say, the ergative, this would also require inheritance from MOR_NOUN because is the most specific match at DECL_1 for a query about the dual ergative. At the node MOR_NOUN the most specific match for this query is underspecified for case and says that extensions of the dual are referred to extensions of the plural. The hierarchy in Figure . is a default inheritance hierarchy, because rules at MOR_NOUN may be overridden by DECL_1 and DECL_2. As we have noted, the default inheritance relations between nodes are based on the default inference mechanism. Any generalization made at MOR_NOUN will be inherited by DECL_1 and DECL_2, unless there is a specific equation given at DECL_1 or DECL_2 that matches with the query and therefore overrides what is given at MOR_NOUN. 
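The default inference procedure just described, in which a query is matched by the most specific stored path of which it is an extension, with inheritance upwards when a node delegates to another node, can be sketched in Python. This is a minimal illustration, not the published DATR fragment: the node contents (path names, stem labels, forms) are simplified stand-ins.

```python
# Minimal sketch of Network Morphology-style default inference.
# Paths are attribute tuples; a query matches the MOST SPECIFIC stored
# path of which it is an extension (i.e. the longest stored prefix).
# Values: plain strings (forms), ("node", N) for inheritance links, or
# ("refer", path) for a referral. Contents are illustrative stand-ins.

def lookup(nodes, node, query):
    entries = nodes[node]
    # longest stored prefix of the query wins (Paninian specificity)
    best = max((p for p in entries if query[:len(p)] == p), key=len)
    value = entries[best]
    if isinstance(value, tuple) and value[0] == "node":
        return lookup(nodes, value[1], query)          # inherit upwards
    if isinstance(value, tuple) and value[0] == "refer":
        # extensions of the goal are realized by extensions of the source
        # (simplified: re-query at the node where the referral was found)
        return lookup(nodes, node, value[1] + query[len(best):])
    return value

NODES = {
    "MOR_NOUN": {
        ("sg", "abs"): "stem1",
        ("pl", "abs"): "stem2-w",
        ("du",): ("refer", ("pl",)),   # dual refers to plural by default
        ("pl",): "stem4-k",
        ("sg",): "stem3-k",
        (): "stem1",                   # last-resort default
    },
    "DECL_1": {
        (): ("node", "MOR_NOUN"),      # the inheritance line in the figure
        ("du", "abs"): "stem2-t",      # overrides the dual-plural referral
        ("sg", "erg"): "stem3-te",
    },
}
```

Querying DECL_1 for the dual ergative falls through to MOR_NOUN, where the underspecified dual rule refers it to the plural, which is realized with the caseless default suffix; this mirrors the inference chain described in the text.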
Most of the case suffixes are given at MOR_NOUN, and these will in turn be inherited by nouns of the two declension classes. For Koryak we also need to stipulate information about the stems involved. In () all of the paths that occur in the RHS are enclosed in quotes. This is global inheritance. In () there is an equation at MOR_NOUN that says that the singular absolutive is by default stem  ( == ""). Because it is specified for global inheritance, this means that the value for "" will be determined by the lexeme which is being queried. If used without quotes, RHS paths refer to their own node for the value referred to. This is known as local inheritance. For instance, if appears without quotes, this means that the value associated with it should be found locally at MOR_NOUN. But this would mean that would be limited to one form, as opposed to referring to all possible stems. Global inheritance is therefore a way of making systematic generalizations about forms, such as stems, that are very dependent on what is specified for, or inherited by, individual lexemes. The default inheritance analysis in () can account for our earlier generalizations about the noun system of Koryak.

• In Declension I and Declension II the absolutive case distinguishes three different numbers. The analysis says that the singular absolutive of all nouns is stem . The plural absolutive is stem  plus the ending ‑w. These generalizations are inherited by both declension classes.


Both Declension I and Declension II override the default statement that the dual is the same as the plural, by specifying their own dual absolutive forms. This override takes place because the path is more specific than the path .

• Declension I has its own ergative case marker, while in Declension II ergative is syncretic with locative throughout.

At the node MOR_NOUN there is an equation that says that the singular consists of stem  plus a suffix. This is repeated as ().

()

== "" ""

This means that the singular will be realized by a particular stem, stem , and a suffix. The rules for forming the different stems are specified in a separate component of the fragment for Koryak, not given here. The stem 3 form for enpič ‘father’ is enpiči-.5 The morphological model in Table . specifies a form for .6 In order to obtain this form, the morphological analysis represented by the hierarchy in () is queried for this path. There is, however, no LHS path in the morphological analysis in (). Instead the form is provided by default inference. As exemplified in (), the path is an extension of . So, by default inference, the rule in (), specified at the node MOR_NOUN, will provide the realization for . The next part of the equation in () specifies the suffix . Again, this is global inheritance (i.e. quoted), because the value associated with this path may not be available locally, but it is dependent on what is inherited by the lexeme. As we shall see in our discussion of generalized referral in §.., the rules of inference that Network Morphology makes use of require that if we are looking for an extension of an LHS path this will also extend any RHS path referred to. Because we are inferring the form of using the equation with the LHS path , the attribute erg extends the RHS paths in () and will therefore extend as . The attribute erg also extends the path , but as the theory never specifies extensions for , , , or , this has no additional effect on the choice of stem. In contrast there is an equation with the LHS path to be found at the node DECL_1, repeated in ().

()

== _te

The ergative singular form of the Declension I noun enpič ‘father’ is therefore enpičite. The non-absolutive forms of the dual and plural work in a similar way. The syncretism of locative and ergative in Declension II is accounted for by equations at MOR_NOUN, repeated here as ().

5 The path consists of two attributes, stem and 3. Note that Table ., which provides information of the type required in the morphological model, says nothing about stems, as this is concerned only with what is relevant for syntax.
6 The features and values used for the morphological analysis are those for which there is evidence in terms of form and distribution, such as in the method outlined by Comrie ().


 ()

  == "" "" == "" "" == "" == _k

The singular and plural differ in terms of their stems, but as there is no ergative suffix specified at either DECL_2 or MOR_NOUN, Declension II nouns use the default suffix ‑k, which lacks a specification for case. (To pre-empt the terminology we use in §.., ‑k is the ‘normal case default’ for ergative in Declension II, while ‑te is the ‘normal case default’ for ergative in Declension I.) As there is no locative case suffix specified anywhere, the suffix ‑k will also be used for the locative in the singular. For both declensions the suffix ‑k will also be used to express locative case in the plural. (That is, ‑k is also the ‘normal case default’ for locative.) It will also be used to express the locative case in the dual, because this refers to the plural.

• In Declension I number is not distinguished at all outside of the absolutive. The lack of number marking outside the absolutive in Declension I results from the identity of stem  and stem  (while Declension II distinguishes these), and the fact that the dual refers to the plural. (Stem  and stem  are reserved for use with the absolutive forms.)

• In Declension II, outside of the absolutive, different forms of the stem distinguish singular and plural. Declension II nouns have different stem  and stem  forms.

• In Declension II, outside of the absolutive, the dual has the same forms as the plural. This is accounted for by the generalized referral we discuss in §...

The equation at MOR_NOUN, repeated here as (), says that extensions of the dual are the same as extensions of the plural.

() == ""

This equation is overridden by the dual absolutive forms specified for both types of noun, because the case information extends the number information. Koryak is interesting in this regard, because although it does not show the typical instances of case and number syncretism suggested by () it does exhibit morphological patterns for which the feature structure required by () has a role.
That is, it obeys the generalization in (), and the feature ordering on which () relies, as we discuss in §.., involves the simultaneous use of an underspecified structure in conformity with (), combined with a referral.

.. Generalized referral

As noted, Koryak presents a serious challenge if we assume that syncretisms can only be accounted for in terms of the feature structure that typically reflects syntactic relevance, in this instance the ordering of case and number attributes in ().7 Another way of accounting for identities of form is to use referrals (Zwicky a; Stump a, : –). The idea here is that a particular form is directly associated with a certain feature combination and that another feature combination refers to that combination for its realization. For instance, in Table ., the realization ‑um is associated with the accusative singular for the Latin noun servus ‘slave’ and the realization ‑us is associated with the nominative singular for the same noun. The distribution of these forms for servus suggests the morphosyntax with which they are primarily associated. For the noun bellum ‘war’ the nominative singular can be analysed as referring to the accusative singular for its realization, while for the noun vulgus ‘crowd’ the accusative singular can be analysed as referring to the nominative singular for its realization. So we see here that ‑um and ‑us are primarily taken as exponents of accusative singular and nominative singular respectively. There is a clear directionality to the syncretism. This is an example of ‘divergent bidirectional syncretism’ (Baerman, Brown, and Corbett : –), where the syncretism goes in both directions. It is a straightforward matter to use rules of referral to account for both patterns, while symmetrical approaches (i.e. those not based on referrals) are forced to treat aspects of the pattern as accidental. (See Baerman, Brown, and Corbett : – for further discussion of the arguments for rules of referral.) As we saw with the Koryak example, in Network Morphology referrals are modelled as pairings of LHS paths and RHS paths, as illustrated in () for Latin nouns of the bellum type (a), and nouns of the vulgus type (b). (Note that we have indicated in brackets which type each rule applies to, but this is not part of the rule notation.)

Table .. Latin second declension nouns

          bellum ‘war’   servus ‘slave’   vulgus ‘crowd’
nom sg    bell-um        serv-us          vulg-us
acc sg    bell-um        serv-um          vulg-us
gen sg    bell-ī         serv-ī           vulg-ī
dat sg    bell-ō         serv-ō           vulg-ō
abl sg    bell-ō         serv-ō           vulg-ō

Source: Discussed in Baerman, Brown, and Corbett : .

()
a. == ""  (bellum type)
b. == ""  (vulgus type)
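As a sketch of how the two referral rules in (a) and (b) work, the following Python fragment (not DATR; endings taken from Table .) chases a referral from the goal cell to its source cell:

```python
# Divergent bidirectional syncretism via rules of referral (illustrative).
# bellum-type: nom sg refers to acc sg; vulgus-type: acc sg refers to nom sg.

ENDINGS = {  # primary exponents per noun type (from Table on Latin nouns)
    "servus": {("nom", "sg"): "-us", ("acc", "sg"): "-um"},
    "bellum": {("acc", "sg"): "-um"},
    "vulgus": {("nom", "sg"): "-us"},
}
REFERRALS = {  # goal cell -> source cell
    "bellum": {("nom", "sg"): ("acc", "sg")},
    "vulgus": {("acc", "sg"): ("nom", "sg")},
}

def realize(noun_type, cell):
    """Return the ending for a cell, following a referral if needed."""
    if cell in ENDINGS[noun_type]:
        return ENDINGS[noun_type][cell]
    return realize(noun_type, REFERRALS[noun_type][cell])
```

Because each referral has an explicit goal and source, the two directions of the syncretism are stated rather than left accidental, which is the point made in the text.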

The fact that referrals in Network Morphology take the form of an LHS path paired with an RHS path also means that extensions of the LHS path will also be extensions of the RHS path. In the Latin example this property is not relevant, because beyond the case and number combinations there are no other morphosyntactic features associated with nouns in their paradigm. Network Morphology can combine referrals with the mechanism of default extension to yield generalized referrals. A significant advantage of these is that they can express generalizations about whole sets of paradigm cells, rather than merely state relationships between single paradigm cells.

7 For detailed discussion of the ways that syncretisms may, or may not, reflect syntactic relevance see Baerman, Brown, and Corbett (: –).

()

Generalized Referral
a. One feature specification (the goal) may refer to another feature specification (the source) for its realization.
b. As with other realization rules, referrals may be underspecified.
c. Extensions of the goal will be realized by extensions of the source.

In order to deal with the syncretism of the dual and plural that occurs outside of the absolutive case in Koryak, the equation in (), repeated as (), is sufficient. ()

== ""

The underspecified equation () states that by default, if we need the dual form we can use the plural form. Because of (c), if we require any extension of the dual, this will be the same as the corresponding extension of the plural. This means that it is possible to refer not just to a single paradigm cell, but to whole portions of the paradigm, requiring identity of exponence between those cells, determined according to the case extension of the number, and this follows from the stipulation of case after number in (). For instance, to infer the form of the dative dual it is sufficient to know the form of the dative plural. The RHS path involves global inheritance, because it will depend on the form of the plural for the given lexical item. The specification in () is sufficient to cover all forms of the dual for both types of Koryak noun. A key feature of this system is that generalized referrals are a simultaneous combination of both default inference and referral, and the challenging data of Koryak show that we need to use these mechanisms together. Consider, for instance, the syncretic locative plural and ergative plural form appajək of the noun lexeme appa. The dual is, of course, realized by the corresponding sets of plural forms, as () requires. The syncretism of the locative and ergative cases of appa arises in a similar way to the syncretism in the singular that we discussed earlier. The corresponding dual form therefore arises from () combined with the default inference (underspecification-based) approach to the case syncretism. The generalized referral mechanism finds application across a diverse range of unrelated languages, including Dalabon and Slovene (see Brown and Hippisley : –).8

8 As they pick out whole sub-paradigms, generalized referrals are also a good mechanism for treating deponency. Corbett (b: ) characterizes canonical deponency as affecting whole slabs of the paradigm.
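A generalized referral of the kind defined in (a-c) can be sketched as a rule that copies every case extension of the source number into the corresponding cell of the goal number, leaving directly specified cells untouched. The Python below is illustrative; the plural forms of appa are from the text, while the dual absolutive form appat is purely hypothetical.

```python
def apply_generalized_referral(paradigm, goal, source):
    """Fill every cell extending `goal` from the matching extension of
    `source`; setdefault keeps directly specified overrides intact."""
    out = dict(paradigm)
    for (number, case), form in paradigm.items():
        if number == source:
            # clause (c): extensions of the goal are realized by
            # the corresponding extensions of the source
            out.setdefault((goal, case), form)
    return out

# Plural forms of Koryak appa (erg/loc syncretic: appajək, as in the text);
# the dual absolutive is specified directly and so escapes the referral.
APPA = {
    ("pl", "erg"): "appajək",
    ("pl", "loc"): "appajək",
    ("du", "abs"): "appat",   # hypothetical override form, for illustration
}
full = apply_generalized_referral(APPA, goal="du", source="pl")
```

One underspecified rule thus covers the whole dual sub-paradigm, rather than stating a separate referral per cell.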


. C

We have seen how rules, such as generalized referrals, could be located at different points in the simple inflectional hierarchy for Koryak nouns in Figure .. At the top node in this hierarchy we located default information, including the generalized referral of the dual to the plural: this is true for Koryak Declension I where all number distinctions are collapsed outside of the absolutive, and it is true for Koryak Declension II where the dual is the same as the plural outside of the absolutive. In this section we consider three case studies. The first illustrates a finer distinction between two types of default, the second tackles a particularly challenging instance of syncretism, and the third shows how default inheritance systems may be restructured in a diachronic account.

.. The normal case default and the exceptional case default

When a default realization is overridden in a lexical entry it is typically the case that a lexeme resorts back to a more general rule. A lexical item can specify an inheritance link that goes back to the highest default rule, rather than inheriting the value associated with its class.9 In fact, we need to distinguish between two different types of default, the exceptional case default and the normal case default. This distinction was first introduced by Fraser and Corbett () in their treatment of the noun class and gender system of Arapesh, drawing on Fortune’s () grammar and associated work by Aronoff (, : –). The distinction between exceptional case and normal case default can be explained using the following non-linguistic analogy from Evans, Brown, and Corbett (: ):

    Mary and John both work for a firm based in London. Mary is the personnel manager and works in the office in London. Occasionally, she goes to Paris on a training course. By default, then, Mary works in the office in London. John is a salesman. He normally spends Mondays in the south of England, Tuesdays in the west, and Wednesdays and Thursdays in the north. If, however, a client cancels an appointment, or he has a problem with his car, or there is a department meeting, he goes to the office in London. On Fridays he often plays golf, but if it rains he goes to the office. By default, then, John also works in the office in London. Intuitively the two cases are rather different. Mary is ‘normally’ at the office, John is not. And yet at a higher level of abstraction the office is the default workplace for both. It is these two types of default, both reasonable uses of the term, that have led to differences in usage in the literature, and to confusion. This is why we make the distinction: for Mary, working at the office in London is the normal case default, while for John, working in London is the exceptional case default. (Evans, Brown, and Corbett : )

9 Network Morphology is a framework that uses orthogonal multiple inheritance (Brown and Hippisley : –). So it is possible to inherit from multiple sources. As multiple inheritance is orthogonal, the path specifications for inheritance cannot be the same. This means that contradictions do not arise. This property arises from the fact that default inference relies on different degrees of specificity for paths, as we saw in ().


Table .. Normal case and exceptional case defaults for the nominative plural of nouns with stem/ending stress pattern      tórmoz ‘brakes’ ókorok ‘ham’ snég ‘snow’ lesosád ‘country park’ dólg ‘debt’ grób ‘coffin’

    

tormoz–á okorok–á sneg-á lesosad-í dolg-í grob-í

Source: Brown and Hippisley (: ).

As an example, let us consider the realization of the nominative plural for Russian nouns. The normal case default for Russian nouns is to have a nominative plural form in ‑i. Of the four major declensions in Russian, this is the form associated with three of them, including the largest declension, Declension I. However, for the subset of nouns belonging to Declension I that have ending stress in the plural and stem stress in the singular (a less common stress pattern) the nominative plural exponent is ‑a. For this class of nouns ‑a is the normal case default. However, for some nouns with this stress pattern the nominative plural form in ‑i is used. Their exceptional case default is what is the normal case default for other nouns, namely ‑i (see Table .). Brown and Hippisley’s () analysis is based on a computational implementation for the first  most frequent Russian nouns, drawing on Zasorina’s () frequency dictionary. There are seventy-seven nouns in the lexicon of  most frequent nouns that belong to the appropriate stress pattern and belong to Declension I. Of these, forty-seven follow the normal case default for the group (i.e. like sneg ‘snow’), while thirty use the exceptional case default (i.e. like grob ‘coffin’). It should be noted that it is possible for new items with the required stress pattern, especially for specialist terminology, to follow the normal case default ‑á. Hence, this is an important distinction that constitutes substantive linguistic knowledge. Total irregularity (i.e. the introduction of a completely new form) is extremely rare. In most cases, what we observe is that a lexical item is resorting back to a very general pattern, even though the rules associated with the class with which it fits most closely would predict another realization. In terms of the formal representation we can identify the exceptional case default, because it will be specified in the lexical entry.
However, the specification does not involve direct stipulation of the realization. Instead it involves a link back to a rule located high in the inheritance structure.
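The contrast can be sketched as follows: the exceptional case default is not a stipulated form in the lexical entry but a link back up the hierarchy. This is an illustrative Python sketch, not the implemented DATR lexicon; stol here is simply a stand-in for an ordinary Declension I noun.

```python
# Normal vs. exceptional case defaults for the Russian nominative plural
# (illustrative). The top-level default ending is -i; the stem/ending
# stress subclass has -a as its normal case default; exceptional items
# carry a lexical LINK back to the top-level rule, not a stipulated form.

TOP_DEFAULT = "-i"                  # highest default rule in the hierarchy
STRESSED_SUBCLASS_DEFAULT = "-a"    # normal case default for the subclass

LEXICON = {
    "sneg": {"class": "stressed"},                   # sneg-a: normal case default
    "grob": {"class": "stressed", "nom_pl": "TOP"},  # grob-i: link back up
    "stol": {"class": "plain"},                      # ordinary -i noun (stand-in)
}

def nom_pl_ending(noun):
    entry = LEXICON[noun]
    if entry.get("nom_pl") == "TOP":
        # exceptional case default: inherit from the top of the hierarchy,
        # bypassing the subclass rule, rather than stipulating "-i" directly
        return TOP_DEFAULT
    if entry["class"] == "stressed":
        return STRESSED_SUBCLASS_DEFAULT
    return TOP_DEFAULT
```

Note that grob's entry contains only the link "TOP", so if the top-level rule ever changed, the exceptional items would follow it automatically; that is the sense in which the exceptional case default is a resort to a very general pattern rather than total irregularity.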

.. Morphological complexity in Nuer

In this section we discuss Baerman’s () analysis of the nominal system of Nuer based on Network Morphology’s use of default inheritance. Nuer is particularly challenging as the extent of the case syncretism means that it is hard to identify a consistent meaning for a


Table .. Inventory of Nuer noun suffixes     /   

Ø Ø, kä, ä Ø, ni

Source: Baerman (: ).

Table .. Nuer inflectional classes

           

‘milk’ ‘kind of tree’ ‘bump’

‘rank’

‘potato’ ‘fat’

‘hair’

‘ring’

cak caak caak cak cak cak-ni

gatɔt gatɔt-kä gatɔt-kä gatuut-ni gatuut-ni gatuut-ni

tac tac-kä tac tac-ni tac-ni tac-ni

nhim nhim nhim-kä nhiäm nhiäm-ni nhiäm-ni

nyaŋyεt nyaŋyεt nyaŋyεt-kä nyaŋyεt-ni nyaŋyεt-ni nyaŋyεt-ni

kε̈c kε ̈c-kä kε ̈c-kä kεεc kεεc-ni kεεc

pony pony-kä pony-kä poony poony-ni poony-ni

liεth liεth-kä liεth lith lith-ni lith-ni

Source: Baerman (: ).

given suffix, and the different possible combinations of affixes mean that there are a large number of inflection classes. Nuer is a Western Nilotic language of South Sudan, as well as parts of Ethiopia, and Baerman, who takes his data from Frank (), presents an analysis of implicational relations between elements of the paradigm, so as to identify potential principal parts. Principal parts are those combinations of cells that serve to predict other parts of the paradigm. Stump and Finkel (), who present a comprehensive typology of principal parts, note that a canonical principal part is highly predictive, but highly unpredictable (: –). (For more on principal parts see Blevins, Ackerman, and Malouf, Chapter  this volume.) The analysis that Baerman presents shows that there is an identifiable system and his implemented analysis allows him to quantify exceptions in such a way that we can determine how well it works. The inventory of suffixes for Nuer nouns is actually not that large (Table .). The case syncretism in Table . is in itself not that surprising, but the number of inflectional classes which arise when we consider the different combinations of suffixes is quite remarkable. Table . illustrates some of the possibilities. Baerman uses his Network Morphology analysis of the kinds of distributions observed in Table . to discuss the two different approaches required to account for such variable distributions. These are ‘blocking’ and rules of referral.10 Blocking and related notions, of course, are important and familiar mechanisms in morphology (Anderson , : ;

10 Baerman () talks of a general class of rules that he terms ‘extension’. The term is used in the sense that one form is extended to another part of the paradigm, through a mechanism such as rules of referral. We stick to rules of referral here so as not to confuse this notion with path extension introduced earlier.


Table .. Distributions illustrating the problem with blocking (underspecification) or referral-based approaches to Nuer noun morphology

     

‘stone’

‘umbilical cord’

‘peace’

‘sky’

döl döl-kä döl-kä

caar caar-ä caar-ä

mal mal-ä mal-kä

puäär puäär-kä puär-ä

Source: Baerman (: ).

Kiparsky ; Aronoff ; Stump ). We saw in our analysis of the case syncretism in Koryak in §. that Pāṇinian determinism has an important place in Network Morphology in terms of the specificity of paths, which are also constrained by the ordering in (). Indeed, Network Morphology exploits the rule-based nature of inferential-realizational approaches to the full by integrating the rules into an inheritance network. Baerman argues that it is not possible to describe the Nuer patterns in terms of either blocking or rules of referral. Baerman illustrates why this is with patterns from four nouns, given in Table .. As Table . illustrates, the exponent -kä can have a distribution in which it is syncretic between the genitive singular and the locative singular (see döl ‘stone’). The exponent -ä can also be syncretic between the genitive singular and the locative singular. But in other paradigms, such as those for mal ‘peace’ and puäär ‘sky’, there is no syncretism. If we tried to analyse the paradigms for ‘peace’ and ‘sky’ in terms of underspecification, however, there would be a problem. This requires us to allow a morphosyntactically more specific form to fill one of the paradigm cells. But it is not possible to identify a narrower morphosyntactic specification for either -ä or -kä. For example, in order for -ä to block -kä’s occurrence in the appropriate part of the paradigm of mal ‘peace’, it would need to be specified genitive singular. But this would fall foul of ‑ä’s distribution in puäär where it is locative singular. Likewise an analysis using rules of referral would also be problematic, because, given the inability to identify the primary meaning of any exponent, it is unclear what is being extended into what. 
Given the problems which arise for underspecification and referrals in analysing the Nuer noun paradigm, Baerman raises the possibility that the distribution is the product of accidental homophony and that, for instance, there would be distinct affixes ‑ä, ‑ä, ‑kä, ‑kä in the genitive and locative singular of the respective paradigms, with syncretism arising from the occurrence of the two accidentally identical affixes in the same paradigm. However, as Baerman argues, this would fail to account for the fact that the suffixes do have a coherent maximal distribution, namely genitive and locative singular combined, as shown by the forms in Table .. The exponents ‑kä and ‑ä are maximally genitive and locative singular, for instance, and the exponent ‑ni, for which the same problems with underspecification and referral-based analyses arise, has a maximal distribution as plural. As Baerman notes, while accidental homophony is a real phenomenon in natural languages, one would be required to make the assumption that each of the realizations in Table . was the subject of accidental homophony. But this is not a plausible assumption, as the homophony would be so consistent as to hardly be considered accidental.


Baerman notes that Nuer presents serious problems for concepts such as Paradigm Economy and No Blur (Carstairs ; Carstairs-McCarthy ), because the number of classes drastically exceeds what would be expected given the set of suffixes (violating Paradigm Economy) and it is hard to identify a single default (contra No Blur). As it can account for large numbers of classes in terms of an inheritance network, with different levels of default, Network Morphology is able to address the challenges Nuer poses. Given the possible combinations of affixes, Baerman identifies twenty-four possible inflectional classes based on Frank’s () corpus.11 Nuer also has stem alternation patterns. Baerman presents data to show that the stem alternation is not typically predictable from the suffixation. Stem alternations include vowel length, diphthongization, vowel quality, and phonation type. They are also reversible so that, for instance, it is possible to have long vowels in the singular opposing short vowels in the plural, or long vowels in the plural opposing short vowels in the singular. What is important is the positioning of the stem alternation within the paradigm. Baerman (: ) notes the following generalizations:

(i) where a stem does not alternate for number, there will be ‑ni suffixation (with two exceptions);
(ii) zero suffixation is possible only when there is a stem alternation;
(iii) there is a weaker association between stem alternations for case and suffixation, with non-alternating stems preferring suffixation for genitive and locative ( per cent), while alternating stems prefer zero suffixation for both genitive and locative ( per cent).

Using a defaults-based approach Baerman shows that Nuer need not be seen as that aberrant. A key observation is that the default must be sensitive to morphological context. Frank () originally identified a default class of suffixation pattern (‑kä in genitive/locative singular and ‑ni throughout the plural).
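These generalizations can be read as context-sensitive defaults, where the morphological context is whether the stem alternates. The Python below is a hedged sketch: generalization (iii) is only a statistical preference in Baerman's data, but it is treated here as a categorical default for illustration, and the cell labels are assumptions.

```python
# Context-sensitive default suffixation for Nuer nouns (illustrative).
# The default depends on the morphological context: whether the stem
# alternates. Lexical classes may of course override these defaults.

def default_suffix(alternating_stem, cell):
    """cell is a (number, case) pair, e.g. ('pl', 'gen')."""
    number, case = cell
    if number == "pl":
        # (i)/(ii): non-alternating stems take -ni; zero suffixation
        # goes with stem alternation
        return "" if alternating_stem else "-ni"
    if case in ("gen", "loc"):
        # (iii), treated categorically here: non-alternating stems prefer
        # the suffix -kä, alternating stems prefer zero
        return "" if alternating_stem else "-kä"
    return ""  # the nominative is unsuffixed in the default class
```

With the stem-alternation context factored in, Frank's default pattern (-kä in the genitive/locative singular, -ni throughout the plural) falls out for non-alternating stems, which is the restructured default class the text describes.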
Frank’s default class excludes  per cent of the lexicon. However, if the lexicon is modelled so that the sensitivity to stem alternation is taken into account, only  per cent of the lexical items are excluded from the default class. This actually puts Nuer on a par with a language like Russian where the default inflectional class for nouns accounts for about  per cent of the lexicon (Brown et al. : ; Zaliznjak ). Baerman’s analysis shows that the Nuer case–number suffix system is the product of a system of rules, where much of the patterning in the affixes can be predicted by what is happening in the stems. The formal Network Morphology implementation using defaults has the virtue of allowing us to quantify how successfully the analysis accounts for the Nuer system. We turn now to a Network Morphology account of diachronic change in the Greek nominal system.

.. Diachrony

Collier () examines changes in the Greek nominal system from the classical language to modern Greek. It is reasonable to assume that the inheritance network that represents the inflectional structure of the language is not passed down from one generation to the next.

Baerman (: ) points out that there are actually twenty-five classes, because there is one aberrant occurrence of ‑kä in the plural. 11


Instead, each generation acquires its own inheritance structure on the basis of the observable patterns of inflection (Collier : ).12 Collier proposes a typology of incremental alterations to the inheritance-representation in order to capture changes over time. The typology he creates contains different mechanisms for historical change. These can be divided into changes associated with rules and those associated with nodes.

Changes related to rules
i. rule change
ii. redundancy deletion
iii. rule insertion
iv. rule relegation
v. rule deletion
vi. rule promotion

Changes related to nodes
vii. node insertion
viii. node division
ix. node merger
x. node realignment

Rule change involves essentially the same inheritance structure and nodes but with a particular rule altered to effect a change in the exponent used. Redundancy deletion is reserved for the situation where certain morphosyntactic values are lost (e.g. the loss of a case distinction). The rules associated with these morphosyntactic values are simply deleted. As Collier (: ) notes, however, redundancy deletion may not always be straightforward. If, for example, realization of the lost morphosyntactic values was the only thing that distinguished two inflectional classes, then merger of nodes may be required (see ‘node merger’). Where one of two inflectional classes that share a mother (a macro-class) innovates a new form and increases differentiation between them, this could be analysed as rule insertion, involving overriding the original default inherited by both classes from the macro-class. If there is no longer any evidence to assume that the original default rule remains a default, because it is associated only with one class, then this rule is subject to rule relegation and is associated directly with the class that still maintains it, rather than being inherited by that class.
Rule deletion is one way of modelling generalizing or analogical processes and is probably quite important in historical change. Originally node A (the daughter) inherits from node B (the mother) but overrides one of the rules from node B. When rule deletion occurs, node A no longer overrides this rule and so, all other things being equal, the class described by node A is more like the one described by node B (Collier : ). Rule promotion, as its name suggests, involves the promotion of a rule from a daughter node to the mother. Collier (: ) notes that, all other things being equal, we should expect rule deletion to be more prevalent than rule promotion, since rule deletion

12 The value of the Network Morphology approach is not to make direct claims about the nature of speakers’ cognitive structures. Rather it is to make generalizations that would appear to constitute substantive aspects of speakers’ knowledge of morphology.




Table .. Change in the ā-stem class, so that () is more like the o-stem class

nom sg acc sg gen sg dat sg nom pl acc pl gen pl dat pl

ā-stem class (1)

ā-stem class (2)

o-stem class

-ā -ān -ās -āi -ās -ās -ōn -āsi

-ā -ān -ās -āi -ai -ās -ōn -ais

-os -on -ou -ōi -oi -ous -ōn -ois

Source: Collier (: ).

corresponds to the replacement of a minor pattern with a major one, while rule promotion is typically the opposite. When inflectional classes split, this could be analysed as node insertion, where a new daughter node is created that represents a new inflectional class, with the original class remaining the mother of that class. Alternatively, node division is another analysis, where a virtual class node is created.13 Where inflectional classes merge, this can be expressed by node merger. Node realignment, on the other hand, is where changes mean that a node may switch the class from which it inherits. This could be because phonological changes have led to it naturally being reanalysed as a subtype of another class, rather than its original class.

Some of the change types outlined above play little or no role in Collier’s analysis of Greek, while others are much more prominent. To give one example, the position of the ā-stem class alters over time so that in the Attic period it is treated as inheriting from the o-stem class (Collier : ). The dative plural and nominative plural forms become more like those of the o-stem class, and this is treated as a change brought about by rule deletion. That is, the specific rules for the dative and nominative plural of the ā-stems (see (1) in Table .) are deleted, so that ā-stems inherit the relevant rules from the o-stems. The nominative plural rule has ‑Vi as the exponent and the dative plural rule has ‑Vis as the exponent (where V is the theme vowel for the class). This accounts for both the o-stem class and the innovated ā-stem class (see (2) in Table .). Collier (: ) also notes the importance of node realignment and the alteration of hierarchical relationships to the overall analysis. As Network Morphology relies on implementation and explicit formal analysis, it is a good means of formulating clear analyses of historical change.
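Collier's rule-deletion scenario lends itself to a compact computational rendering. The following is a minimal illustrative sketch in Python, not DATR and not the actual Network Morphology implementation; the `Node` class, the rule format (with `V` standing in for the theme vowel), and the selection of cells are simplifying assumptions about the Greek data in the table above.

```python
# Illustrative sketch (not DATR): a node looks a cell up locally and otherwise
# inherits the realization rule from its mother, by default inheritance.
class Node:
    def __init__(self, name, mother=None, theme='', rules=None):
        self.name, self.mother, self.theme = name, mother, theme
        self.rules = rules or {}          # cell -> exponent template

    def exponent(self, cell):
        node = self
        while node is not None:
            if cell in node.rules:
                # 'V' in a rule stands for the theme vowel of the querying class
                return '-' + node.rules[cell].replace('V', self.theme)
            node = node.mother             # inherit from the mother by default
        raise KeyError(cell)

# The Attic-period hierarchy: the ā-stem class inherits from the o-stem class
# but (at stage (1)) overrides the nominative and dative plural rules.
o_stem = Node('o-stem', theme='o',
              rules={'nom pl': 'Vi', 'dat pl': 'Vis', 'gen pl': 'ōn'})
a_stem = Node('ā-stem', mother=o_stem, theme='a',
              rules={'nom pl': 'ās', 'dat pl': 'āsi'})   # stage (1)

assert a_stem.exponent('nom pl') == '-ās'   # local override wins
assert a_stem.exponent('gen pl') == '-ōn'   # inherited default

# Rule deletion: drop the two overrides; the ā-stem class now inherits the
# -Vi / -Vis rules and realizes them with its own theme vowel: stage (2).
del a_stem.rules['nom pl']
del a_stem.rules['dat pl']
assert a_stem.exponent('nom pl') == '-ai'
assert a_stem.exponent('dat pl') == '-ais'
```

Deleting the two overrides is all it takes for the ā-stem class to fall together with the inherited ‑Vi/‑Vis pattern, mirroring the change from (1) to (2).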

13 A virtual class node is one that represents generalizations shared by more than one class, but which itself is never instantiated by an individual lexical item.


Conclusion

..................................................................................................................................

Network Morphology has default inheritance at its heart. Its use of defaults means that it is an ideal framework for dealing with different degrees of regularity, allowing us to see the default or general properties that are associated with the core part of the morphological system. It embraces computational implementation, allowing theoretical claims to be checked. This is an extremely important aspect of the theory, because small changes made to account for the pieces of theoretical interest often have unforeseen consequences. The ability to test analyses by determining how often generalizations have to be overridden is a particularly useful means of validating theories empirically, as we saw with the case study of Nuer. Computational fragments exist for languages belonging to a wide range of families.

Resources

..................................................................................................................................

A bibliography of Network Morphology works is maintained at the following address: http://www.smg.surrey.ac.uk/approaches/network-morphology/bibliography/

There is also a website associated with the book by Brown and Hippisley (), available at the following address: http://networkmorphology.as.uky.edu/

A With thanks to the European Research Council (grant ERC--AdG- MORPHOLOGY). §.. is shared with Brown (). I would like to thank Greville Corbett, the editors, and two anonymous reviewers for very helpful comments on earlier drafts.

Word Grammar
......................................................................................................................

Background

..................................................................................................................................

Word Grammar (WG) is a formalized cognitive theory of language first reported in Hudson () and developed in several publications since (Hudson , , ; Gisborne ; Eppler ). The main claim of WG is the ‘Network Postulate’ (Hudson : ), the claim that language is organized in a symbolic cognitive network. The theory of morphology presented here is, therefore, embedded in a larger theory of language structure which also makes particular claims about language and the mind, claims which are consistent with the current state of knowledge in cognitive psychology. The nature of the larger theorizing about language and about cognition places particular constraints on the approach to morphology, but it also ensures that the theory of morphology is consistent with a coherent theory of grammar which has a number of research results (Gisborne , ; Hudson , ). The leading developer of Word Grammar has been Richard Hudson, with contributions to the morphology from Chet Creider (Creider and Hudson ), to the morphology–syntax interface from And Rosta (), and other contributions from Hudson’s former students, including the author of this chapter.

Word Grammar is structured around two central premises. First, categorization is formalized in Default Inheritance hierarchies, in a version of DI which allows multiple inheritance. The primitive relation of default inheritance is the ‘Isa’ relation. (If x Isa y, then x is an instance of y: so, for example, participle Isa verb.) Therefore, Word Grammar morphology has architectural similarities to Paradigm Function Morphology (Stump  and Chapter  this volume) and Network Morphology (Brown and Hippisley ; Brown, Chapter  this volume), both of which also have Default Inheritance-based architectures.
All concepts, including words, their meanings, and their parts, are classified. The second premise is the Network Postulate mentioned above which adds associative links to the classification relationships. Associative links are those which relate concepts to each other syntagmatically, and include Grammatical Functions such as Subject and Object, as well as other relations. Each associative relationship is, in turn, analysed in terms of the primitive
relations ‘argument’ and ‘value’, and each part of the grammar—syntax, semantics, and morphology—is analysed only in terms of this small set of primitive relations, Isa, argument, and value, augmented by three further primitive relations: ‘identity’, ‘or’, and ‘quantity’ (Hudson : ). These notions are discussed and exemplified in the discussion of Figure . in §..

However, these primitives do not define the grammar. They define the formal architecture needed to write a grammar. The grammar proper is a series of interacting classified networks which represent the knowledge a speaker has of their language(s). On this account, the grammar consists of the speaker-hearer’s knowledge of the categories and relations that make up the language system. WG is, therefore, a knowledge representation theory, where the bit of knowledge that is being modelled is linguistic knowledge. Like other knowledge representation theories, it is declarative; like other cognitive theories, it is strictly intra-mental or conceptualist.

Some aspects of the theory have remained stable since it was first advanced in print in Hudson (). For example, although various specific syntactic and semantic analyses have changed, and in turn brought about revisions to the overall theory, the current syntax and semantics of WG are fundamentally the same as in . However, there have been changes in the morphology. Hudson (: ) asserts that WG rejects the distinction ‘between the levels of morphology and phonology’. More recently, WG has reinstated this distinction, with morphology understood as a separate domain of grammar, distinct from syntax, phonology, and semantics.
Neither Hudson () nor Hudson () had much to say about morphology beyond English, which has a notoriously impoverished inflectional system (although it presents a rich set of topics for research in its derivational morphology), but in other work, especially since , there are analyses of Serbo-Croatian clitics, French fused prepositions and articles (du for ‘de le’), French pronominal affixes, mixed categories such as gerunds, Slovenian nominal inflection, the morphological gap in *amn’t, and Semitic infixation. In this chapter, I follow the general pattern of chapters for this section of the handbook. The next section, §., lays out the basic structures of WG morphology; §. lays out the subparts of morphology; §. discusses morphology at the interfaces, here focusing on morphology, syntax, and the lexicon; §. looks at productivity and blocking—this section is brief, as there has been little written in the theory on these topics; §. looks at the relationship between morphology and extra-systemic linguistics, such as issues in variation and change; and §. presents an evaluation of the theory.

Basic structures

..................................................................................................................................

Word Grammar morphology is word-based rather than morpheme-based, and interpretative rather than generative. Both aspects of WG morphology are inherent to the theory’s network design, although it might be possible to devise a WG-like theory that was morpheme-based. However, in being interpretative rather than generative, WG is limited by the nature of the knowledge-representation system. It is a declarative theory with no procedures or algorithms (Pullum ) so it has none of the design features of
generative-enumerative theories in Pullum’s sense. Jackendoff and Audring (Chapter  this volume) discuss morphology in the Parallel Architecture, and adopt the same premises of mentalism and declarativeness. In terms of the typology of theoretical space spelled out in Stump () for inflectional morphology, WG is a realizational and inferential theory. The theory’s analysis of inflectional morphology belongs to the family of Word and Paradigm (WP) theories, so it is a word-based, or more accurately, lexeme-based theory of morphology. In addition, WG rejects classificatory features such as [N, V] (see Baker : –), instead relying entirely on classification with Default Inheritance.1 The alternative to a feature is a morphosyntactic category, which is really a type of word-class, so Plural is just a subclass of Noun in this theory.

Perhaps the best way of showing a WG analysis is to walk around a WG diagram. The Network Postulate gives rise to analyses such as that in Figure ., an analysis of She runs. The diagram shows the relations among (parts of) the syntactic information and morphological information that are necessary for an analysis of this very simple sentence. I have left semantic information out of the diagram, because it adds a level of complexity which does not advantage the discussion here. In the diagram, the upside-down triangles show the relationship between a superordinate category and the category that inherits from it, and the arrows represent the argument–value relations which are typically called ‘dependency relations’ in syntax, and ‘semantic roles’ in semantics. As the diagram shows, the same formal argument–value relations exist in morphology: the form {runs} is the stored morphological realization (here called the ‘Fully Inflected Form’) of a morphosyntactic cell which inherits from three different categories: the lexeme RUN (small caps denote lexemes), the category Tense, and the category third person singular.
It is also in an argument–value relation with its morphological base, the form {run}. In WG notation, morphs—the preferred term, because of the supererogatory meanings the word ‘morpheme’ is freighted with—are represented in braces, as in Figure ..

[Figure .. She runs: she (the subject of runs) Isa pronoun and has the realization {she}; runs Isa RUN, present, and 3.ps, with the base {run} and the Fully Inflected Form {runs}, the s-variant of the base.]

1 WG does accept ‘features’ in its discussion of agreement—these are called Agreement Features in Hudson ()—but these features do not classify syntactic entities. This follows from the architecture: a WG network is a classified network where each network link is an attribute. Classificatory nodes are part of the type hierarchy, whereas agreement features are attributes. I discuss agreement in §..

Figure . shows the Subject relation between the word runs and the word She. The form runs, in italics, inherits from the node which Isa RUN, Present, and Third Person; it has a realization which Isa the stored morphological realization {runs}. In this way, WG preserves strict lexicalism. Although nothing ‘enters’ the syntax, as such, in WG, because it is an entirely declarative theory, the objects that are related by dependencies are morphosyntactically complete words, whose form, lexeme, and morphosyntactic features are present in a single entity. The Subject relation shows us the syntactic relationship between this word and the other words in the sentence; and the morphological relationships Base, Fully Inflected Form, and Realization tell us about the relationship between the form runs and the cell in the paradigm.

Where morphology differs from syntax is that its categories of elements and relations are different from the categories of elements and relations in syntax; there are explicit network relation links between the different subcomponents of grammar, such as the ‘realization’ relation between abstract morphosyntactic categories and word forms, but each domain of grammar makes up a discrete sub-system within the overall network. Therefore syntax involves hierarchical taxonomies of words and lexemes, and dependencies. WG does not distinguish between lexical entries and abstractions over sets of morphological forms, calling both lexemes: the lexical entry is directly associated with its morphological realizations. Inflectional morphology involves Bases, and Variants, and Fully Inflected Forms. In this way, WG assumes an autonomous morphology.
The subparts or units of morphology are combined in their own distinct system and so WG is morphomic (Aronoff ; Hudson : )—some parts of the language system are irreducibly morphological. Moreover, inflection and derivation are ‘sharply distinguished at the level of words, but not at the level of forms’ (Blevins ). That is, what differentiates inflection and derivation is how they relate to the morphosyntax. The same form, say {‑ed}, can be inflectional with respect to Verbs, and derivational with respect to Noun>Adjective conversion, as in the example where the adjective  is derived from the noun . It is not the form that matters: WG postulates that it is just the case that the same form is recycled in these two instances. As I have said, the basic units of morphological analysis are morphs—the meaningless parts of words which realize the inflectional categories and the derivational relations. Neither inflectional nor derivational morphology relies on meaningful morphemes. The basic building blocks of syntax are words, fully fitted up with their morphosyntactic features: as we saw in the discussion of Figure ., the syntax cannot see inside words, and it is not involved in putting parts of words together. This, then, gives us the basic dimensions of a WG theory of morphology: it assumes that morphology is entirely and properly autonomous, and that it should not be analysed as part of any other dimension of linguistic structure. Nor should a morphological analysis be presented in order to meet the exigencies of a syntactic problem. As a result of this strict separationalism, it is also claimed that the same morphological rule can apply in both derivation and inflection. I explore this decision and its consequences in the next section.

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi

  



The subparts of morphology

..................................................................................................................................

In this section, I discuss derivation, inflection, and compounding—the latter very briefly, as there has not been a great deal of work. But first, I look at how morphs combine, and how the combinatorics of morphology are separate from the relationships among lexemes and the relationships among paradigmatic cells. One major issue for the theory of morphology is how the subunits of words are combined. I describe the WG approach briefly here, but return to it in §. below, where I address the theoretical concerns of Blevins (). There is a syntax of morphs in WG. Hudson (: –) presents an analysis of the Latin docuerimus (‘we will have taught’) which treats each subpart of the word as a concatenation of elements. There are two parts to the analysis:

• The creation of ‘variants’ which define the ‘base’ of each subsequent form.
• The relationships between the realizations of each element.

I shall not repeat Hudson’s account of docuerimus because it is already in print. But it is worth working through the idea of variants and how morphological formatives are combined in some detail, because they show how a WG analysis is structured. Essentially, the formatives that make up word structures have an analytic structure, and there are rules to the combinatorial system. Variants are a subtype of form. For example, {dogs} is the ‘s-variant’ of {dog}. Because WG is morphomic, s-variants are not restricted to plural forms of nouns: taking the verb DIG, {digs} is the s-variant of {dig}, just as {dogs} is of {dog}, for all that the form is the Fully Inflected Form of an altogether different morphosyntactic cell, one which is the exponent of the third person singular of verbs and not the plural of nouns.
Although neither {dogs} nor {digs} has a form that builds on it, WG morphology is concatenative, so in principle a variant is just another form which can serve as the base for a further form. To take another example, {cooker} is the ‘er-variant’ of {cook} and it has its own s-variant, {cookers}. So too, of course, does {cook}. Note that the relationships between cooker, cookers, cook, and cooks show that there is a necessary split between syntax and morphology, and also between semantics and morphology. Both forms, {cook} and {cooks}, are Fully Inflected Forms of both the Noun COOK and the Verb COOK; therefore, both the Noun and the Verb have s-variants, and the morphological rule, which I give in Figure ., is the same in both cases. It is the relationship of the morphological form to the morphosyntax that is different in these two cases. There is a similar example with the form {cooker}, which is the er-variant of {cook} even though it is not the ‘agent noun’ of the verb /. The form {cooker}, which is the realization of the Noun COOKER, also has an s-variant and so the complex form {cooker} is the base of the form {cookers}.2

2 By default, agent nouns are realized as er-variants (see the relationship between the verb TEACH and the noun TEACHER for example) but the agent noun of the Verb COOK is the Noun /. In WG, there is strict separation of phonology, morphology, syntax, and semantics which allows the same morphological rule to create new lexemes, as well as inflectional variants.




[Figure .. The analysis of s-variants: X’s Fully Inflected Form, a realization, is the s-variant of X’s base; the s-variant consists of Part1, which shares the realization of the base, and Part2, whose realization is {s}.]

The diagram in Figure ., which presents an analysis of s-variants, allows us to explore these notions in some detail. The diagram just gives the morphological rule. It does not relate the morphological rule to the lexicon or the syntax. Figure . presents a schema for the morphological analysis of both third person singular present tense verbs and plural nouns—in the diagram represented by the italicized capital X. The morphological structure of both is the same although the morphosyntax is different in each case. The diagram says that X has a Fully Inflected Form. This is the form that realizes the morphosyntactic cell that X fills. So where X stands for the category Plural Noun, the Fully Inflected Form is the form ending in ‑s, as it is for the situation where X stands for Third Person Singular Verb. The Fully Inflected Form for the relevant morphosyntactic cell is the s-variant of the morphological base of the item. Thus the relation Fully Inflected Form is a subtype of the Realization relation: the Fully Inflected Form is the morphological form which realizes the (abstract) morphosyntactic cell. Realization relations link subparts of the grammar, in this case (morpho-)syntactic information with purely morphological information. The s-variant relation, on the other hand, is a morphological relation, which relates two different types of morphological information.

So, what are the two ‘realization’ relations which point to the same dot, and what do ‘Part1’ and ‘Part2’ mean? Let us take the part relations first. They analyse the form—say, {dogs} or {digs}—as being composed out of an initial part, which shares the realization of the base, and a second part, which Isa {s}. Because this is all within morphological structure, {s} is a morph which has its own realization, /s/.
This analysis invites the questions, ‘Why does Part1 of {dogs} share the realization of its base with {dog}?’ and ‘Why is it not just the case that the Part1 of {dogs} Isa {dog}, or just is {dog}?’ The second question is easy to answer: the Part1 of {dogs} cannot just be {dog} because the Part1 of {dogs} is always followed by {s} and {dog} is not. The first question is also straightforward to answer: the Part1 of {dogs} cannot inherit from {dog} because if it did there would be infinite regress: it would also inherit the fact that {dog} has an er-variant with a Part1 and a Part2. I develop this account in the next section, where I discuss the relationship between morphology and its interface with syntax by looking at the morphology of ‑er and the morphosyntax of BIG and TEACH.
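The combinatorics just described can be given a minimal rendering in code. The sketch below is illustrative Python under my own assumptions (the names `Form`, `s_variant`, and `er_variant` are invented, and phonological realization is reduced to string concatenation); it shows only that variants are forms built concatenatively on a base, so a variant can itself feed a further variant.

```python
# Sketch of the s-variant rule: a form's s-variant is a new form whose Part1
# shares the realization of the base and whose Part2 realizes the morph {s}.
class Form:
    def __init__(self, realization):
        self.realization = realization

def s_variant(base: Form) -> Form:
    part1 = base.realization   # Part1 shares the realization of the base
    part2 = 's'                # Part2 Isa {s}, realized /s/
    return Form(part1 + part2)

def er_variant(base: Form) -> Form:
    return Form(base.realization + 'er')

# The same morphological rule serves two different morphosyntactic cells:
# the plural noun {dogs} and the third person singular verb {digs}.
assert s_variant(Form('dog')).realization == 'dogs'
assert s_variant(Form('dig')).realization == 'digs'

# Variants are forms, so they can serve as bases in turn:
# {cook} -> {cooker} (er-variant) -> {cookers} (s-variant of the new base).
assert s_variant(er_variant(Form('cook'))).realization == 'cookers'
```

The point of the sketch is the recursion in the last line: because an er-variant is itself a form, the s-variant rule applies to it without modification.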


[Figure .. The morphology of the comparative form of BIG: the comparative cell of BIG has the base {big}; its Fully Inflected Form {bigger} is the er-variant of the base, with Part1 sharing the realization of {big} and Part2 realized as {er}.]

This sketch shows that morphology in WG is autonomous, that it involves the recycling of parts, and that the basic relations of WG morphology (Base, Fully Inflected Form, Part1, Part2, and the X-variant relations) are largely all that is needed for a full account of morphological composition. WG morphology is fully declarative and compatible with the larger theory of grammar that embeds it. We can explore the relationship between morphology and the lexicon, and the relationship between derivation on the one hand and inflection on the other, by looking at two different ‑er forms: {bigger} and {teacher}. In the case of {bigger}, ‑er is an inflectional affix: {bigger} is the comparative form of the adjective BIG, and grade is inflectional. In the case of {teacher}, the affix is derivational. A teacher is a person who teaches for a living. TEACHER is the agent noun derived from the verb TEACH. The WG diagrams for {bigger} and {teacher} are entirely similar in their analyses of morphological structure, just as the morphology of {dogs} and {digs} is the same. For {bigger} and {teacher}, the differences in the diagrams lie in the differences between inflection and derivation; for {dogs} and {digs}, it is in the difference between nominal and verbal inflection. Figure . is a diagram for {bigger} and Figure . is a diagram analysing {teacher}. The main difference is that in the diagram for {bigger}, there is only one lexeme, with the form {bigger} being classified by two different Isa relations, one showing that the comparative of BIG Isa BIG, the other showing that it Isa Comparative. In the diagram for {teacher} there are two lexemes, linked by the relation ‘agent-noun’. Figure . shows that the morphology proper is the same in both cases. For both {bigger} and {teacher}, the form is the er-variant of the base of BIG and TEACH respectively.
Therefore, as explained above, the differences between inflection and derivation lie in the relationship between the morphology and the syntax. As we see from Figure ., the form {bigger} is one of the possible forms of the lexeme BIG. But in Figure . we see that the form {teacher} is not a possible form of the lexeme TEACH. Instead, it is the form of the lexeme TEACHER. Figure . makes two different things explicit. First, the derived word is a separate lexeme, which is shown by its being labelled in capital letters and classified. Second, it is in a




[Figure .. The morphology of the agent noun TEACHER: the Verb TEACH and the Noun TEACHER are linked by the ‘agent-noun’ relation; the base {teacher} of TEACHER is the er-variant of the base {teach} of TEACH, with Part1 sharing the realization of {teach} and Part2 realized as {er}.]

particular relationship with the word that it is derived from, the relationship called ‘agent-noun’. This relationship is factored out from the morphological structure because not all agent nouns have the ‑er morphology. The relationship ‘agent-noun’ is an argument–value pairing between lexemes which by default states that the agent noun inherits the {‑er} morphological structure. But this is a default which can be overridden. For example, consider the pairs – and /–/. The two lexemes are also shown to be in different word classes, which are categories that the lexemes inherit from. Each lexeme has its own relationship to its morphology, so the base of TEACH is {teach} and the base of TEACHER is {teacher}. But Figure . also captures the fact that {teacher} is the er-variant of {teach} just as {bigger} is the er-variant of {big}. In the case of {bigger}, note too that it is the Fully Inflected Form of BIG/comparative. This shows that {bigger} is in a relationship with BIG, which is right because it is the morphological realization of a subtype of BIG. This, then, gives us the broad lineaments of the differences between derivation and inflection in the theory. However, some issues remain:

• How are cells in a paradigm defined?
• How are derived items related?

I have implied an answer to the first question by using multiple Default Inheritance in my figures, but I have not explained it. This question also leads to questions of how morphosyntactic features are handled, and how agreement is analysed; I address morphosyntactic features and inflection below, and agreement in §.. The question about derivation is fairly straightforward to answer, and I take it first. The model in Figure . works as a reliable model for derivation. Derived forms are explicitly linked to the words that they are derived from by a specific relation which can be applied consistently across all of the instances of this kind of derivation.
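The status of ‘agent-noun’ as an overridable default can be sketched as follows. This is an illustrative Python rendering, not WG machinery: the `Lexeme` class, the `source` attribute standing for the agent-noun argument, and the steal/thief pair as a suppletive example are my own assumptions.

```python
# Sketch: the 'agent-noun' relation pairs lexemes. By default the derived
# lexeme's base is the er-variant of the source's base, but a stored base on
# the derived lexeme overrides the default (cf. suppletive agent nouns).
class Lexeme:
    def __init__(self, name, word_class, base=None, source=None):
        self.name, self.word_class = name, word_class
        self._base = base          # a stored base overrides the default
        self.source = source       # the agent-noun argument, if any

    @property
    def base(self):
        if self._base is not None:            # stored (overriding) base
            return self._base
        if self.source is not None:           # default: er-variant of source
            return self.source.base + 'er'
        raise ValueError('no base available')

teach = Lexeme('TEACH', 'Verb', base='teach')
teacher = Lexeme('TEACHER', 'Noun', source=teach)   # default er realization
assert teacher.base == 'teacher'

# A suppletive pair: the stored base wins over the inherited er default.
steal = Lexeme('STEAL', 'Verb', base='steal')
thief = Lexeme('THIEF', 'Noun', base='thief', source=steal)
assert thief.base == 'thief'
```

The design choice mirrors the text: the derivation relationship itself is stated between lexemes, and the {‑er} pattern is only its default morphological association.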
The derivation relationship is not tied to any particular morphological realization: some particular realization patterns will be more general than others, but the derivation relationship is not


[Figure .. The schema for {-ment}: an unnamed Verb and Noun are linked by the ‘abstract-noun’ relation; the Noun’s base is the ment-variant of the Verb’s base, with Part1 sharing the realization of the Verb’s base and Part2 realized as {ment}.]

‘in the morphology’; it is in the relationship between the lexemes, which is then associated with a particular morphological pattern. We can see a range of different derivational relationships, all of which will work in broadly the same way. For example, let us consider nouns ending in {‑ment}. Modulo the fact that {‑ment} does not feature in inflectional morphology, this affix is subject to the same general kinds of constraints that apply to any affix. Figure . presents an analysis of how {‑ment} works, which shows the broad similarities with other abstract nouns and with accretive morphology more generally. Note that in Figure ., no names are given. The diagram states that it is possible to create an abstract noun which has the base of a verb as its Part1 and {‑ment} as its Part2. This is shown to be a derivational rule by the labelled relationship between two lexemes—the two lexemes being the blobs classified Verb and Noun respectively. The key fact about Word Grammar revealed in this diagram, then, is that there are explicit relationships between lexemes, both of which carry syntactic and morphological properties, and also, because of the different kinds of Variant relation, there are relationships within the morphology as well.

We have already seen how cells in a paradigm are defined: by multiple inheritance. In Figure ., there is a node, represented by a dot, which Isa BIG and which also Isa Comparative. This node corresponds to an inflectional cell in a paradigm. This inflectional cell is associated with more complex syntax and more complex semantics than the word big, and this additional complexity is inherited from the category Comparative. It is Comparative which sets up the semantics of comparison and it is also this category that licenses the word than. In the case of a string such as he is bigger than her, the word bigger inherits its dependent than from the category Comparative.
The morphology is just the same as the morphology of {teacher}: it is the morphology of adding ‑er to a morphological formative. The presence of the phrase headed by than is just syntax and semantics. The same model works, with the appropriate modifications, for any other inflectional system. The cells in the relevant paradigm are established by cross-classification by Default Inheritance, and then the morphology realizes those cells. The morphosyntactic contrasts are part of the stored information associated with any given word, inherited at the appropriate level of generality.
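Cross-classification by Default Inheritance can be mimicked with multiple inheritance in a programming language. The sketch below is illustrative Python only: the class names and the `suffix` attribute are my assumptions, orthographic details (such as the consonant doubling in bigger) are ignored, and so a regular adjective is used.

```python
# Sketch: a paradigm cell is a node that Isa both a lexeme and a
# morphosyntactic category; Python's method resolution order stands in
# for multiple Default Inheritance.
class Comparative:
    suffix = 'er'              # comparative cells are realized as er-variants

class OLD:                     # the lexeme OLD, with base {old}
    base = 'old'

class OldComparative(OLD, Comparative):
    pass                       # the cell: Isa OLD and Isa Comparative

def fully_inflected_form(cell):
    # The Fully Inflected Form is the suffix-variant of the inherited base.
    return cell.base + cell.suffix

assert fully_inflected_form(OldComparative) == 'older'

# The same cross-classification scales to richer paradigms, e.g. a
# (simplified) genitive singular cell for Russian stol 'table':
class GenitiveSingular:
    suffix = 'a'

class STOL:
    base = 'stol'

class StolGenSg(STOL, GenitiveSingular):
    pass

assert fully_inflected_form(StolGenSg) == 'stola'
```

Nothing in the cell itself stores a form: the base comes from the lexeme and the exponent from the morphosyntactic category, which is the point of defining cells by cross-classification.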


That is, multiple inheritance defines a number of morphosyntactic contrasts, which are in turn realized by the morphological objects whose structure I described in the previous section. Therefore, the model presented in Figures . and . will work just as well for a much more complex inflectional system, such as the Russian noun paradigm in Figure ., which shows part of Table . from Gurevich (: ).

Table . presents a fragment of Russian nominal declension which raises a number of issues about how WG analyses the different dimensions of inflection. The questions are: (i) how are the cells in the paradigm defined? and (ii) how are the formatives that ‘fill’ those cells related to each other? The description needs to define each cell, and it also needs to capture the syncretism evident in the Singular Accusative and Genitive, as well as the (different) syncretism in the Plural Nominative and Accusative. It also needs to account for the morphophonological relationship between the Dative Plural and the Instrumental Plural.

We can do this by building on the very simple theory of inflectional morphology presented in Figures . and .. A diagram for the Singular and Plural Nominative and Accusative is presented in Figure ., where each cell in the paradigm is defined by multiple inheritance. We can define that paradigm at the right level of abstraction so that it applies to all of the Masculine Class 1a nouns rather than to specific lexemes. The specific lexemes will then inherit the paradigm. I limit the analysis to the first two cases because the diagram is rather untidy,

Figure .. Defining paradigm cells without features
[Network diagram: the cells c1–c4 of the Class 1a (masculine) paradigm, cross-classified by Nominative/Accusative and by Plural.]

Table .. Russian Class a (Masculine) nominal declension a (masculine)

Declension

      Gloss





stol stol-a stol-a stol-u stol-om stol-e

stol-y stol-y stol-ov stol-am stol-ami stol-ax ‘table’

with a lot of crossing lines. That is not a fault in the theory: it is because even this small fragment of nominal inflection defines a complex network. Once we have defined each of the paradigm cells, the rest of the information that is required is strictly morphological. This includes the rules of realization, and rules relating forms to each other within the paradigm. The diagram makes a claim about the difference between Plural and Singular which I discuss in §..

By default, Nouns are Singular; Plural is an additional classificatory node in the type hierarchy, so the difference between a Plural Noun and a Singular one is that Plural Nouns Isa both the Noun and the category Plural. The different labels in the diagram, c1 . . . c4, are the addresses of cells in the paradigm. For example, c1 is the cell occupied by stol, which is the Masculine, Nominative, Singular cell in the paradigm of a Class 1a noun; c3 is the address of the Masculine, Nominative, Plural cell occupied by the form stol-y. Each cell, c1 to c4, is defined by multiple inheritance. For example, c4 inherits from three different categories: Plural, Accusative, and Class 1a.

This reveals an important fact about WG: Plural and Accusative are not treated as features. They are instead analysed as (abstract) categories on the same level as lexemes and classes of lexemes. The same analysis works throughout the paradigm, with each cell being analysed as a node in the network which inherits from the relevant abstract categories. It is important to represent the class of Noun in the analysis, because the realizations of each of the cells are determined with reference to the class a given Noun belongs to. Each node has its own realization relation.
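Since paradigm cells are defined purely by multiple inheritance from abstract categories, the mechanism can be illustrated with a small sketch in Python, whose attribute lookup stands in for Default Inheritance. This is an analogy of my own, not WG notation; all class names are illustrative:

```python
# Illustrative analogy (not WG notation): paradigm cells as nodes that
# multiply inherit from abstract categories, with Python's attribute
# lookup standing in for Default Inheritance.

class Node:
    """A node in the network; properties are inherited by default."""

class Noun(Node):
    number = "singular"            # the default: Nouns are Singular

class Plural(Node):
    number = "plural"              # an additional category that overrides the default

class Nominative(Node): pass
class Accusative(Node): pass
class Class1a(Noun): pass          # a declension class of Noun

# Each cell Isa several categories at once; no feature bundles are needed.
class C1(Nominative, Class1a): pass           # the cell occupied by stol
class C4(Plural, Accusative, Class1a): pass   # the cell occupied by stol-y

assert C1.number == "singular"     # inherited from Noun by default
assert C4.number == "plural"       # Plural overrides the Noun default
```

The point of the analogy is that ‘Plural’ and ‘Accusative’ behave as ordinary nodes in the hierarchy, not as features: a cell is just a node that inherits from several of them at once.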
Nodes that have the same realization are linked to the same morphological rule; thus c3 and c4 in the diagram each have a realization relation (Fully Inflected Form) that links them to the same morphological structure—the rule which has {stol} as its base, and {stoly} as its ‘y-variant’. This is another example of how WG’s factorization of the realizational nature of morphology away from syntactic and morphosyntactic nodes allows the same form-supplying rule to be exploited in more than one case. Here, the Fully Inflected Form is the same for both c3 and c4. As Hudson (: ) shows, it is the full morphological pattern that is shared, including the relationship to the stem and the y-variant. In this way, WG does without both underspecification and rules of referral (Zwicky a; Stump a).

To finish this section, we should turn to compounding. Hudson (: –) presents a three-analysis theory of compounding. The main argument is that these three theories of compounds apply in different cases, so the analytical error is to assume that there is a single theory of compounding. We can summarize these three theories briefly here. Hudson asserts ():

[F]or example, the lexeme MATCHBOX is related to the lexemes MATCH and BOX. However, there are at least three ways in which one lexeme may be related to two others.
1. It may simply share their forms—for example, the base of MATCHBOX is {matchbox}, which consists of (instances of) {match} and {box}.
2. It may be a combination of the two lexemes themselves—so MATCHBOX is actually a syntactic combination of box and match,3 stored as a combination, just like any idiom, but each contributing just its ordinary form.

3 These forms are a way of stating that there is a special subtype of BOX which requires a head MATCH, which Hudson has called box, and that there is a special subtype of MATCH which requires the dependent BOX, which he has called match.

3. These two analyses may be combined, so that match is stored with the unique form {matchbox}.
It seems likely that each of the three analyses is correct for some kinds of compound but not for others. (Hudson : )

The main point of these three positions is that compounding could be ‘in the morphology’—that is, it can be a morphological rule that is not sensitive to the lexical entries of the items that contribute their morphological structure to the compound—or it can be ‘in the lexicon’, where a compound lexical item is stored and the morphology follows; or it could involve a hybrid structure which involves both lexical storage and a morphological rule.

Hudson argues that the first kind of compound structure is appropriate for dvandva compounds such as ALSACE-LORRAINE and for forms such as BLACKBIRD which have the phonology of a single word, as opposed to black bird; that the second is the appropriate analysis for two-word idioms which have the phonological contour of two discrete words, for collocations, and for stored combinations such as street names; and that the third analysis is appropriate for situations where the phonology and the morphology are both those of a single word. For example, German GROSSMUTTER has single word stress and does not show normal inflection on the adjectival part—if it did, the appropriate form would be grosse Mutter.

In summary, there is a WG theory of inflectional morphology and a WG theory of derivational morphology, which show how derivational and inflectional morphology are the same at the level of the strictly morphological rules that account for word structure, but which differ in terms of their association with paradigms in the case of inflectional morphology, and their association with lexemes in the case of derivational morphology. In addition to discussing the nature of inflection and derivation, I have also sketched an account of how compounding is handled in the theory.

. Interfaces

Of the several interfaces in which morphology could participate, WG has only addressed the morphology–lexicon and morphology–syntax interfaces. There has been little or no work on the morphology–phonology interface, largely because there is little WG work on phonology. However, there are theoretical decisions which affect the possible interactions among sub-modules of the grammar. For example, the morphomic nature of WG’s theorizing about morphology entails that there is no theorizing about the morphology–semantics interface, or the morphology–pragmatics interface, because it is hypothesized that there are no such interfaces. The nodes that morphology realizes (such as BIG/Comparative) are not in the morphology but in the lexico-grammar, and it is these abstract nodes which have a semantics. For these reasons, in this section I limit myself to a discussion of the morphology–lexicon and morphology–syntax interfaces, in particular the WG treatment of clitics and agreement.

We start with agreement. Traditionally, there are claimed to be two dimensions to agreement. On the one hand, there are patterns such as this dog vs. these dogs, where the determiner and the noun agree

with each other within the same phrase. For languages with a more articulated set of morphological distinctions than English, such as Russian or Latin, there is agreement between adjectives and nouns within NP, across the dimensions of grammatical function and gender as well as number, so any theory of the morphology–syntax interface has to be able to account for these data. The other dimension of agreement concerns phenomena like Subject–Verb agreement, where the Subject and the Verb are not in the same phrase. However, this distinction does not apply in a dependency theory such as Word Grammar, because phrases are not primitive, and there is a direct dependency link between Verbs and their Subjects.

I said in §. that WG does not use classificatory features, and I have shown that all morphosyntactic distinctions are shown in the categories that nodes inherit from. WG does, however, use ‘agreement features’, which are not classificatory, but which are part of how the theory handles word–word relationships—see Hudson (). I would prefer to use the term ‘agreement attributes’ for Hudson’s ‘agreement features’, because in a formal sense the arrows in a WG diagram are the attributes of an Attribute Value Matrix, and because the term ‘feature’ is inherently confusing: number is not a feature but a classification in WG, yet we have a number ‘agreement feature’. So, with apologies for cluttering up the terminology, in this chapter I talk about ‘agreement attributes’.

The argument in §. was that cells in a paradigm are defined by nodes in an Isa hierarchy, which can inherit from more than one node, including both the node which is the conceptual address for the relevant lexeme and the node for the relevant morphosyntactic category, such as Plural. At this point, we need to add a little complexity to the system.
I argued in §. that, by default, a Noun was Singular, and that a Plural Noun was in a non-default category which inherited from the Noun itself and the additional category Plural. But we have not yet defined ‘Plural’. As with all other categories, the node Plural is defined by the links that it supports. Plural is more specific than Singular, and it has a Number link, with a value: >. This is shown in Figure ., which adapts Hudson’s (: ) Figure ..4 Once we have a definition for Plural, we can see how agreement between a Noun and an Adjective will work: both elements will share their number value, as in the diagram in Figure . which shows that agreement is simply a case of the two elements in a dependency relationship sharing a value for their agreement attributes. The main claim in Figure . is that ‘Plural’ is a category, from which any count noun can inherit, which is defined by having a Number value greater than one (there being a simplification modulo languages with a Dual system). The relation Number in the diagram has exactly the same status as any other attribute, and the claim in WG is that agreement features are just attributes such as the Number attribute in Figure ., and that they exist only when the grammar needs to have recourse to them. In WG, agreement is between the Number attribute which both the Adjective and the Noun have inherited in virtue of being classified ‘Plural’. For example, the Russian for The values I have given for the number attribute in Figure .— and >—have the deficiency of being semantic. Hudson (: ) uses ‘singular’ as the number value for a default noun and ‘plural’ as the value for a plural noun, but I think that it is worse to use the same label for both the argument and the value of a particular attribute, so I chose the suboptimal labels in the diagram as a way of keeping each concept distinct. 4

Figure .. Plural as a morphosyntactic feature
[Diagram: Noun carries a Number attribute with default value 1; Plural carries a Number attribute with value >1.]

Figure .. Russian Nouns and Attributive Adjectives agree
[Diagram: an adjective and the noun it modifies via the adjunct dependency share their number value.]

‘new cars’ is nov-ye avtomobil-i (from Corbett : ); it is not the forms that agree, but the morphosyntactic cells occupied by the Noun and the Adjective. As in Corbett’s (: ) analysis, WG assumes that agreement involves an element that controls the agreement, and an element whose form is determined by agreement (the ‘target’). The WG rule for ensuring agreement is a declarative statement in the network that two items share the same value for the number feature. A version of that rule for adjectives in Russian is given in Figure . (see also Hudson : –).

On its own, the rule in Figure . is very simple: it just says that the number attribute on a noun has to be shared by its attributive modifier. The rule does not imply any directionality, and it is just part of the rule for attributive modifiers—a rule that is found in any language where attributive Adjectives agree in Number with their head Nouns. This is, then, a syntactic rule, with the morphology stated independently in the realization relationships which link the relevant Adjectives and Nouns to the exponence of the relevant Number feature according to their Declension class.

The major addition to this theory that Subject–Verb agreement requires is a way of handling those cases where the number in the morphology is mismatched with the number agreement required by the verb. For example, English has examples such as the committee are delayed and fifty pounds is a lot of money, where the number of the Noun (Singular in the case of the committee) triggers the ‘wrong’ number agreement on the Verb (Plural). How do we handle these? The solution in Hudson () is to invoke a new relation, ‘Agreement Number’, which can be mismatched with the actual number, and which accounts for the agreement properties between Subject and Verb in these cases. Thus committee has a Singular Number, but a Plural Agreement Number.
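A minimal sketch of this two-attribute analysis can be given in code. The class and function names are invented for illustration; the defaulting mechanism is an analogy, not Hudson’s formalization:

```python
# Hypothetical sketch: Number and Agreement Number as ordinary attributes.
# By default they converge; a lexeme like COMMITTEE overrides only its
# Agreement Number, producing mismatch agreement.

class Word:
    number = "singular"

    @property
    def agreement_number(self):
        return self.number          # default: Agreement Number = Number

class Dog(Word):
    pass

class Committee(Word):
    agreement_number = "plural"     # Singular Number, Plural Agreement Number

def subject_number(subject):
    """The verb's Subject Number shares its value with the subject's Agreement Number."""
    return subject.agreement_number

assert subject_number(Dog()) == "singular"       # the dog is ...
assert subject_number(Committee()) == "plural"   # the committee are ...
assert Committee.number == "singular"            # the plain Number is unchanged
```

The design point mirrors the text: agreement is stated declaratively as value sharing between two attributes, and mismatch cases are just non-default overrides of one of them.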
By default, Number and Agreement Number converge on the same value, but there are cases such as this where they do not. This new relation has to share its value with an attribute on the verb called ‘Subject Number’; Subject–Verb agreement therefore invokes two new relations, which share a value.

We can now turn to clitics, which are notorious, and which present a number of problems, obliging us to acknowledge that there is no unitary phenomenon ‘clitic’. Spencer and Luís

(a) explicitly follow Zwicky () in suggesting that the term ‘clitic’ is an umbrella term for a number of properties which may or may not converge. As Halpern (: ) puts it, clitics ‘form a heterogeneous bunch, at least superficially, and exactly what is meant by “clitic” varies from study to study’. Zwicky distinguishes between simple clitics, special clitics, and bound words. Anderson (: ) rejects the category of bound words as a discrete category.

Following Anderson’s exposition, a simple clitic is an unaccented variant ‘of a free morpheme which may be phonologically reduced and subordinated to a neighbouring word. In terms of their syntax, though, they appear in the same position as one that can be occupied by the corresponding free word’ (Anderson : ). A special clitic has the same phonological properties as a simple clitic, but it has a special distribution. For example, Romance personal pronoun clitics occur pre-verbally in a fixed word order, whereas their corresponding free forms occur within the clause in the usual way. More radically, Wackernagel’s second-position clitics occur in a clitic-specific position. The main fact is that clitics are like affixes in that they are not independently occurring forms, but unlike affixes, they appear to have a real presence in the syntax.

Hudson () presents a WG account of the French pronominal clitics analysed by Miller and Sag (), who called them affixes, and Hudson (: –) discusses various clitic systems, including: reduced auxiliaries in English (which belong to Zwicky’s simple clitics); clitic pronouns in Beja; French pronominal clitics; and Serbo-Croatian 2nd-position clitics (all special clitics). Gisborne () is an account of how the modern Romance future tense came about and how the relationship between a phrase-final auxiliary verb and the realizations of the future in (say) French should be understood.
The complexities of clitics involve the different dimensions of linguistic structure that they are bound up with. If there are corresponding free forms, then one issue is how the relationships with those free forms should be captured. Given that clitics are apparently phonological reductions of the free forms, we might ask what the dimensions of phonological reduction look like. From a morphological point of view, are clitics part of the morphological system, or do they belong somewhere else? Here, I give a brief overview of Hudson’s () analysis of French pronominal clitics (see also Bonami and Boyé ). The relevant data are given in (1)–(3), taken from Hudson (, examples –).

(1)

Il ne le lui y donnerait pas.
he not it to-him there would-give not
‘He wouldn’t give him it there.’

()

Il ne te le donnerait pas.
he not to-you it would-give not
‘He wouldn’t give you it.’

(3) Il y en mangerait.
he there from-it would-eat
‘He would eat some of it there.’

The clear examples of the clitic pronouns are il, le, and lui: a Subject, Object, and Indirect Object pronoun respectively. Rather than going into the full complexity of the system, let us focus on le and lui. There are two important facts:

1. They occur in a fixed order. It is possible to have the pattern il le lui a donné but not the pattern *il lui le a donné, so the pronominal clitics le and lui occur in a fixed order relative to each other.
2. They cannot co-occur with the argument or adjunct which they ‘replace’, so example (4), from Halpern (: ), is ungrammatical.
(4)

*Jean le voit le livre.
Jean it saw the book
‘Jean saw it the book.’

The theorist has two tasks: to explain the fixed pattern of the pronominal clitics—which makes them look like affixes, and is one of the reasons motivating Miller and Sag’s () treatment of them as such—and to explain why they are ungrammatical with NPs expressing the same grammatical function. Miller and Sag’s () solution is to say that they are affixes which reduce the valency of the lexeme, so the different combinations of host and affix are all listed in the lexical entry; this extends to the periphrastic forms, thereby allowing for clitic climbing, the phenomenon whereby the clitic is realized not on the verb form itself, but on the auxiliary verb it depends on.

The WG approach is different: it treats clitics as genuinely mixed forms which are affixal in the morphological structure and syntactic in terms of the inter-word dependency relations. That is, WG treats the clitics, even special clitics, as a species of linguistic entity in their own right: a syntactic unit realized directly by affixes. The claim is that this is a more cognitively plausible analysis for two reasons: first, because the relationship between the affix and the full NP is fully transparent; second, because clitics need not be valents of the verb (cf. Miller and Sag )—for example, y is often an adjunct, and en can be a valent of a valent, as in il en mange beaucoup (‘He eats a lot of it’).

Zwicky presents a descriptive embarras de richesse of clitic types, and says that it is ‘to be expected in “highly modular” theories, those positing a number of grammatically significant modules, components, or strata’ (Zwicky : ). Zwicky also points out that such theories will be able to present a number of different analyses of multiple-strata phenomena such as clitics.
One of his desiderata is that the theory’s predictions should restrict the set of possible analyses, and that the highly modular approaches which tolerate a range of possible analyses should embody principles which lead to one or another analysis being preferred or dispreferred.

In WG, the relevant analytical structure is the ‘Hostform’, a templatic structure in the morphology which a French verb can access when there is a clitic structure, because the Hostform is carried by the clitic itself. The Hostform can be seen as a mediating structure between the verb which inhabits it, the affixal structure which the clitics belong to in the morphology, and the morphosyntactic information that the clitics instantiate in terms of the verb’s valency structure. Hostforms are sensitive to their verb’s properties. This matters because the clitics which attach to an imperative verb in French are postverbal, not preverbal.

The point is this: the WG approach to French pronominal clitics presents them as hybrid forms—actually there in the syntax, but realized as affixes in the morphology, with the relationship between the morphology and the syntax negotiated by an additional element of structure, the Hostform, which is only invoked in the case of a clitic structure. WG, then,

argues that it is possible for there to be hybrid structures across the dimensions of grammar, but it places strong restrictions on them, insisting on relevant mediating structures, which prevent there from being arbitrary cross-associations between the different dimensions of grammar.

There is one apparent outstanding issue: how should this mismatch approach to clitics be reconciled with WG’s lexicalist stance on the syntax–morphology interface? The answer is that the syntax is not doing any morphological concatenation: it is the morphology alone, in the structure of the Hostform, which determines the ordering of the clitics. The mismatch is an artefact of morphology being its own combinatorial system (as it is in Sadock’s  theory; see ch.  for an analysis of clitics).
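As a rough illustration of this division of labour, the Hostform can be thought of as a morphological template that fixes clitic order independently of the syntax. The slot list and function below are my own simplification, not Hudson’s analysis:

```python
# Illustrative sketch: the Hostform as a template in the morphology that
# linearizes clitics, so the syntax never does morphological concatenation.
# The slot order is a simplification of the French preverbal clitic sequence.

CLITIC_SLOTS = ["il", "ne", "le", "lui", "y", "en"]

def hostform(verb, clitics):
    """Order the clitics by their template slots, then attach the verb."""
    ordered = sorted(clitics, key=CLITIC_SLOTS.index)
    return " ".join(ordered + [verb])

# The fixed order il le lui falls out of the template, whatever the input order.
assert hostform("donnerait", ["lui", "le", "il"]) == "il le lui donnerait"
```

However the dependencies are stated in the syntax, the linear order is determined only by the template, which is the point the text makes about the Hostform.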

. Other issues in morphological theory

There is no WG work on productivity as a phenomenon, although it is possible to explain how a WG theory of morphological productivity would work. The more productive a pattern, the more general it is, and so productivity is understood in terms of the Default Inheritance architecture. To import a term from Cognitive Linguistics, the more general a pattern, the more instances it ‘sanctions’. Traugott and Trousdale (: –) discuss productivity in terms of Default Inheritance. Their view boils down to the idea that the more productive a particular construction is, the more schematic (general) it is—that is to say, the higher up the type hierarchy it can be found. The same position applies, mutatis mutandis, to the application of DI to morphology in WG. The main difference between regular and irregular inflection, for example, is that irregular inflection is ‘lower’ in the hierarchy and therefore less productive, whereas regular morphology applies throughout a system, and so is the default, and in virtue of being the default is more productive.

There is a similar point to be made about blocking. The consensus view of blocking is that a more local instance blocks the application of a more general rule. This is easily stated in WG’s DI-based theory, because the relationship between specific cases and general schemata is made explicit. In the case of the form {gave} blocking {gived} (Embick and Marantz ), the reason is straightforward to state: {gave} is the exception to the default; as such, it blocks the application of the default rule by being more specific.

Word Grammar rejects the very notion of rules of referral (Zwicky a; Stump a). These rules are used to account for syncretism.
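The blocking story can be sketched as a most-specific-wins lookup; the dictionary and function names here are my own illustration, not a WG formalism:

```python
# Illustrative sketch: under Default Inheritance, a stored specific form
# pre-empts (blocks) the general default rule.

def ed_variant(base: str) -> str:
    """The default rule: the ed-variant of the base."""
    return base + "ed"

# More specific stored forms, lower in the hierarchy than the default.
EXCEPTIONS = {"give": "gave", "stand": "stood"}

def past_tense(base: str) -> str:
    # The more specific stored form wins; otherwise the default applies.
    return EXCEPTIONS.get(base, ed_variant(base))

assert past_tense("walk") == "walked"   # the default applies
assert past_tense("give") == "gave"     # {gave} blocks {gived}
```

Nothing needs to be stipulated about blocking as such: it follows from specific nodes overriding the default, exactly as in the text.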
For example, a rule of referral might state that the perfect of an English verb derives from the same verb’s passive (or vice versa), modulo the fact that this applies to classes of verbs. Rules of referral are explicitly adopted in other WP models: Paradigm Function Morphology, for example, makes extensive use of them. However, rules of referral are also problematic (see Blevins : ), and WG makes do without them. Hudson (: –) makes two objections:
• They are directional, and how are we to know which direction is the right one?
• They are psychologically implausible. ‘Suppose we treat perfects as basic and derive passives from them; this would imply that it is impossible to recognize a verb as passive without first misanalysing it as a perfect.’

WG does without rules of referral by exploiting the system of variants that I introduced in §. above. What makes the perfect and the passive have the same form is that they both have a Fully Inflected Form which is the en-variant of their stem. This rule is stated declaratively and applies globally, so there is no need for a rule of referral to apply in any circumstances.

Likewise, WG does without rule blocks. Bonami and Stump () exploit rule blocks as a way of capturing affix ordering relationships. But in WG, the entirely declarative network cannot order rules, or apply blocks of rules. Affix ordering is effected by two different mechanisms. On the one hand, the relative ordering of affixes can be achieved by a concatenation of variants. For example, the Latin {docuerimus} is the pl-variant of {docueri} (Hudson : ), which is the i-variant of {docuer}, and in this way morphological structure is accretive. On the other hand, the relative ordering of clitics, such as in the case of the French pronominal clitics discussed in §., is achieved by exploiting the device of the Hostform.

WG has no account of the inheritance of forms in words such as UNDERSTAND. The issue is that this word inherits its past tense forms from STAND, so the past tense is {understood} just as the past tense of STAND is {stood}. In some models, this is treated as an issue of headedness (Bonami and Stump ): it is hypothesized that STAND must be the head of the word UNDERSTAND because that accounts for the facts about the realization of the past tense. In WG theorizing, the problem is that the base of UNDERSTAND is {understand}, not {stand}, so the latter’s irregularity will not block the default. In §., we look at the relationship between morphology and other domains, such as sociolinguistics and language change.
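The accretive variant system can be sketched as function composition; the variant functions below are simplified illustrations of my own (the en-variant, in particular, is reduced to a toy spelling rule):

```python
# Illustrative sketch: each variant is a declarative relation from one
# formative to the next, so affix order falls out of composition rather
# than ordered rule blocks.

def i_variant(stem: str) -> str:
    return stem + "i"

def pl_variant(stem: str) -> str:
    return stem + "mus"   # toy rule for the Latin plural ending

def en_variant(stem: str) -> str:
    return stem + "en"    # toy spelling rule

# Latin {docuerimus}: the pl-variant of the i-variant of {docuer}.
assert pl_variant(i_variant("docuer")) == "docuerimus"

# Syncretism without rules of referral: perfect and passive both take the
# en-variant of the stem, so one globally stated rule covers both.
assert en_variant("tak") == "taken"
```

Because the same en-variant relation is shared by perfect and passive, no directional rule mapping one cell onto the other is ever needed.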

. Other domains

There are several active areas of research within morphology which scholars working on WG have not engaged with, and these constitute lacunae in the theorizing. There is no WG work on corpora, frequency data, or experimental methods. Although Hudson has always taken sociolinguistic information seriously, and has integrated sociolinguistics with aspects of the theory of language structure (Hudson : –), there is currently no work relating WG morphology to variation within languages or variation across languages. However, the theory can model bilingualism, and can also model dialect variation as instantiated in a single speaker. Duran Eppler () presents a WG theory of code switching between German and English which requires the speakers to have an intra-mental model of both languages. However, this work focuses on syntactic code switching rather than on facts from the morphology of either German or English.

There is, however, a WG theory of language change. Gisborne () discusses the emergence of novel morphological exponents of the future tense in WG, and is therefore largely concerned with the relationship of clitics to affixation, and the relationship of special clitics to their syntactic phrasal analogues. The critical dimensions of this work rely on WG’s network structure, its theory of inheritance, and the role of spreading activation in understanding language change in the individual. In areas away from morphology, aspects of this theory are already implemented in other work. For example, Traugott and Trousdale

() offer a model of linguistic change which looks at the role of Default Inheritance hierarchies in understanding language change. The crucial point is that the WG theory of morphology is embedded in a theory of language which is itself embedded in a theory of cognition, or at least of the role of language within cognition. From a WG perspective, a theory of change has to be accommodated within a larger theory of language, and it has to be a theory of the possible changes within a language: an account of morphological change will instantiate that theory with respect to morphological data.

. Evaluation

One strength of the theory, which falls out of its Default Inheritance architecture, is that there is no need, from a WG perspective, to adopt Pinker’s () factorization of inflectional morphology, in which irregular morphology is in a different system from regular morphology. According to WG, irregular morphology simply overrides the defaults that regular morphology supplies. In the case of partial regularities among a group of verbs, that is stated as a subclass fact in the hierarchy, applying to the specific subclass of verbs rather than to individual members only, or to the whole class of verbs.

We hope to develop the theory into other areas of currently active morphological theorizing. There is scope for typological work, and given the way in which psycholinguistic concerns underwrite many theoretical aspects of the model, collaboration with psycholinguists who are interested in storage would be very welcome, as would testing the theory against more linguistic data. We would also welcome the chance to test the theory against the models and predictions of colleagues in related and nearby fields.

As can be seen from this brief discussion, plenty of work remains to be done. One area I would like to indicate is that the approach to the composition of morphological forms needs to be confronted with other WP approaches which have different strategies for establishing how a paradigm cell is filled (Blevins ). Such a confrontation needs to be contextualized against WG’s strongly made claims of psychological reality.

Further reading

Creider, Chet & Richard Hudson. Inflectional morphology in Word Grammar. Lingua.
Hudson, Richard. English subject–verb agreement. English Language and Linguistics.
Hudson, Richard. Language networks: Towards a new Word Grammar. Oxford: Oxford University Press.
Hudson, Richard. An introduction to Word Grammar. Oxford: Oxford University Press.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi

  ......................................................................................................................

    ......................................................................................................................

 . 

M has never been a focus in Cognitive Grammar (CG), but neither has it been ignored. A wide range of morphological problems, issues, and phenomena have been addressed in general presentations of the framework and in its application to particular languages.1 The CG approach to morphology follows directly from certain fundamental principles: dynamicity; the usage-based approach; the functional basis of language; and structure residing in flexible assemblies.

Fundamentals

Linguistic structure is inherently dynamic, consisting in processing activity carried out simultaneously in different dimensions and on different time scales. Even a fixed, minimal structure (like a morpheme) is something that happens: a processing routine evoked and executed as a pre-packaged unit. This activity can be viewed at various levels: neural (e.g. activation and inhibition), psychological (the learning and manipulation of linguistic elements), and interactive (language use in context).

1 For the former, see Langacker (); examples of the latter are Tuggy (), Rubba (), Nesset (), and Nesset and Janda (). Cognitive Grammar is a specific descriptive framework within the broader movement of cognitive linguistics.

CG is a usage-based approach (Langacker ; Barlow and Kemmer ). Linguistic units are abstracted from usage events (instances of use, in all their complexity and detail) through the reinforcement of recurring commonalities. They are necessarily selective and schematic (coarse-grained) relative to the varied expressions giving rise to them. A language comprises a vast array of units with different degrees of schematicity, entrenchment, and conventionality. Once acquired, units are activated in producing or understanding target expressions in subsequent usage events. Based on overlapping features, a given facet of the target tends to activate any number of potentially relevant units, which are mutually inhibitory and compete
for the privilege of categorizing it; factors determining the winner include the extent of overlap, degree of entrenchment, and priming due to the context. The units thus selected constitute the expression's structural interpretation with respect to the language and to some extent impose their organization in top-down fashion. Each facet of the target is recognized as either a full or partial instantiation of the categorizing unit (which, accordingly, is either fully or partially immanent in it). Instantiation corresponds to conventionality (or well-formedness). Should they recur, structures that are less than fully conventional are themselves subject to entrenchment and conventionalization, so that new units arise as extensions from older ones.

Structures are characterized not only in 'bottom-up' fashion, in terms of constitutive elements, but also in 'top-down' fashion, in terms of the functions they serve (Harder ). These range from the global functions of language (symbolic expression and interaction) to the specific functions served by particular structures (e.g. a noun phrase serves the function of nominal reference). CG posits the minimum required for language to fulfill its global functions: only semantic structures (conceptions of any sort), phonological structures (including gestures), and various kinds of connections. The connection between a semantic and a phonological structure, such that either is able to activate the other, produces a symbolic structure. Based on partial overlap, the co-activation of component structures gives rise to a composite structure with emergent properties. Structures are also connected by relations of full or partial immanence: the activation of a schema thus facilitates, guides, or is at least inherent in the more elaborate processing activity of an instantiation.

A language comprises an immense array of interconnected structures, some of which are activated in the processing of a given expression. These are dynamic, flexible assemblies exhibiting both serial and hierarchical organization, in which elements are grouped into cross-cutting structures based on different functions. There are no interfaces, since there are no discrete components: lexicon, syntax, semantics, etc. consist in overlapping facets of assemblies. As in other kinds of construction grammar (Goldberg ; Langacker ), lexicon and grammar form a continuum consisting in assemblies of symbolic structures, each of which pairs a form and a meaning: its semantic and phonological poles. So instead of interfacing with semantics, these symbolic assemblies incorporate it.

Lexical and grammatical structures vary in specificity, complexity, and degree of entrenchment. Well-entrenched units include both specific expressions and partially or wholly schematic structures allowing the formation and apprehension of new expressions. Lexicon is best characterized as the unit expressions of a language, irrespective of complexity: morphemes, stems, words, fixed phrases, idioms, proverbs, even longer passages. Grammar comprises structures that are more schematic, hence immanent in both lexical items and novel expressions. Morphology and syntax are not delimited in any precise or consistent way by factors like productivity, regularity, and lexical status. If we want to distinguish them, we can do no better than their traditional characterization as pertaining to words vs. larger structures.

Morphemes

Many of us have been introduced to linguistics, and found it effective to introduce others, through the analysis of words into component morphemes. To be sure, exercises devised for this purpose are based on carefully selected data and are often somewhat artificial, as the
fundamental ideas prove inadequate when confronting the actual behavior of words in the wild. But while the classical conception of a morpheme is untenable, the insight and utility of this notion, if properly formulated, are undeniable. Its characterization in CG rests on different conceptual foundations.

The classical conception reflects an idealized model of linguistic structure based on the metaphor of building blocks (Langacker : §..):
(i) Expressions are decomposable into discrete formal elements which are either meaningful or serve a grammatical function.
(ii) The meaningful elements are invariant in form and have a single, well-delimited meaning.
(iii) The elements of an expression are exhaustive of it, do not overlap, and are unaffected by the others.
(iv) They combine in accordance with regular compositional patterns, so that the form and meaning of the whole are fully predictable.
In terms of this idealized model, morphemes are simply the smallest components: minimal elements with both a form and a meaning (or function), no further decomposition being possible.

It is widely recognized that this model is wrong in most if not all particulars. If it is to be retained as a useful descriptive construct, the morpheme notion needs to be reformulated along other lines. Morphemes are characterized in CG as conventionally established symbolic units, irreducible in the sense of not containing other symbolic units. Morphemic status is not absolute, because the factors involved (conventionality, entrenchment as units, and symbolic irreducibility) are matters of degree. Given that morphemes reside in aspects of neural processing, their graded nature is unproblematic and even expected.

CG is not a processing model, nor is it based on any particular model. As a consistent working strategy, it does however attempt to explicate linguistic phenomena in terms of minimal, relatively uncontroversial assumptions regarding cognitive activity.
If it proves linguistically adequate, an account of this sort offers a promising basis for investigation by independent empirical means.2 In this exploratory spirit, a unit is taken as residing in a complex pattern of neurological activity that unfolds through time and is replicable to the extent of being able to function consistently in broader patterns. It emerges in Hebbian fashion through the strengthening of synaptic connections. When sufficiently well entrenched, it is easily elicited, is executed automatically, and will run its course if nothing disrupts it. From the dynamic-systems perspective, units are attractors: states of locally minimum energy toward which the system gravitates.

Units are essential in coping with new experience. In language use, an array of units are activated in the apprehension of a target expression, thus providing its linguistic interpretation. Based on shared features, each unit is activated by a facet of the target (to the point of inhibiting alternatives) and structures it via its own execution. The unit thus captures or categorizes the target element, which is apprehended as an instantiation of it (Langacker b). We can speak of a target structure, and how it relates to established units, in regard to either production or comprehension.3 Target elements that in production are only vaguely formulated, or in comprehension are as yet uninterpreted, require a higher level of processing effort than those apprehended and structured by well-entrenched units. Their capture by such units is a matter of the system settling into a state of lower energy. We can describe this metaphorically (and to some extent experience it) as a release of the tension created by unassimilated elements.

In almost any task, performance is optimal when the objective is fully and easily achieved: maximal results with minimal effort. What constitutes optimality in the task of formulating or interpreting a target expression? In the simplest case, an instance of optimal formulation is having a notion to convey and immediately coming up with just the right form to express it. Analogously, an instance of optimal interpretation is immediately recognizing a form and grasping its meaning. With either sort of capture, optimality involves several basic properties: the input readily activates a particular categorizing unit; the target fully instantiates the unit, which is executed without distortion; and the unit 'covers' the target sufficiently that further details can be ignored or easily dealt with. For a complex expression, involving multiple units and target elements, full optimality depends on that of its components. This notion of optimality is clearly reflected in the classical conception of morphemes and the idealized model supporting it. Their inadequacy does not invalidate the notion, but merely indicates that full optimality is canonical rather than general or even typical.

In the CG view of morphology, the same fundamental principles account for both canonical and non-canonical structures. It is important to see just how. Convenient for this purpose are Venn-like diagrams, where a circle or ellipse represents a coherent pattern of processing activity. The emergence of a unit (an established pattern of activity) is depicted in Figure .(a).

2 This represents a complementary strategy to the one pursued in the Neural Theory of Language (Feldman ) for bridging the gap between the neural and linguistic levels.
3 Up to a certain point, the description of language structure can thus be neutral with regard to production vs. comprehension. However, it is often convenient to phrase the discussion in terms of one or the other.

[Figure: Emergence and overlap of units. Panels: (a) emergence of a unit; (b) overlap of A and B; (c) inclusion of A in B; (d) schematicity of A relative to B.]
Through the reinforcement of recurring commonalities, patterns corresponding to structures of any kind, size, or level of specificity (granularity) are progressively entrenched as units. Entrenchment is a graded affair that correlates with ease and level of activation. This is one respect in which morphemic status is a matter of degree.

Also shown in Figure . are notations for ways in which processing patterns overlap. With partial overlap, activation of either A or B will tend to elicit the other. If A is included in B, it will likewise tend to activate it; and if B should be elicited by other input, its execution subsumes the activity constituting A (which need not, however, be distinct or salient within it). A particular kind of inclusion is schematicity, where the processing activity constituting A, a coarse-grained structure, inheres in the more elaborate activity representing B, a structure of greater specificity.

Overlap is responsible for a unit, A, being evoked to categorize a target, T. Figure .(a) depicts the optimal case of full immanence, where A occurs without distortion in the apprehension of T. Commonly, however, the unit and target are non-congruent in some respect, as indicated in Figure .(b). In that case T is captured and structured, not by A as such, but by an adapted version of it, represented as A'. Though non-canonical, categorization of this sort is natural and often efficacious.

[Figure: Categorization and the emergence of variants. Panels: (a) full instantiation of target T by unit A; (b) categorization of T by an adapted variant A'; (c) recurrent targets Ti, Tj leading to the entrenchment of A' alongside A.]

Engagement with a target always has some impact on a unit, if only by reinforcing it. Recurrent engagement with targets of a certain sort has more substantial consequences. As shown in Figure .(c), an adapted version of a unit can itself undergo entrenchment, resulting in the co-existence of A and A' as overlapping units. Analogously, recurrent engagement with congruent but more specific targets leads to the emergence of a more specific unit, so that A and A' co-exist as schema and instantiation. These developments are an automatic and ubiquitous consequence of language use. So in contrast to the classical conception, morphemes typically have multiple variants, representing complex categories at both the semantic and the phonological pole (Lakoff ; Langacker : ch. ; Taylor ).

Units adapt not only to a target but also to co-occurring units. The components of thigh bone, for example, are construed a bit differently than in soup bone or thigh-length [skirt], as each element highlights certain aspects of the other's encyclopedic meaning.4 And unlike building blocks, elements combine by virtue of overlap (e.g. both thigh and bone involve the conception of a body and its parts). So instead of being separate and discrete, as in Figure .(a), the components of a complex expression are related in the manner of Figure .(b), where A' and B' are simply the versions of A and B induced by their combination.

[Figure: Adaptation as an aspect of composition. Panels: (a) components A and B as separate, discrete blocks; (b) overlapping adapted components A' and B' giving rise to composite structure C; (c) C entrenched as a unit in its own right.]

Furthermore, a complex form is not just the sum of its components. The composite whole incorporates features that are not contributed by the individual elements or that even conflict with them.
4 In another departure from the classical conception, where a lexeme's meaning is bounded and self-contained, it is viewed in CG as providing flexible access to relevant domains of general knowledge (Haiman a; Langacker : §.; cf. Wierzbicka ).

Just from thigh and bone, for example, one could not predict that thigh bone designates the bone (not the thigh), or that the composite form is thigh bone rather
than bone thigh. Nor could it even be predicted that the bone is identified as the one internal to the thigh, and not (say) a bone worn on the thigh for decoration. So as shown in Figure .(b), the composite whole (C) is a structure in its own right, not reducible to even its adapted components.5 Like any other structure, C can itself achieve the status of a unit, as in Figure .(c); thigh bone is a complex lexeme of this sort. And of course, such a unit is also subject to adaptation (e.g. thigh bone being used to designate the tattoo image of one), in which case its components have even less in common with the original units A and B.

While decomposing an expression AB into A + B is an essential analytical step, it is not the whole story, since adaptation (yielding A' and B') and emergent properties (yielding C) figure to some extent even in basically regular cases. Further complicating the picture is analyzability, which is generally neglected because it has no place in the classical conception of morphemic analysis. Whereas building blocks are all-or-none (a structure is either composed of blocks or it is not), the presence of component morphemes would seem to be a matter of degree. For example, print is more evident in printer than compute is in computer, and much more so than propel in propeller.6 It would be simplistic to say tout court either that propel is part of propeller or that it has no presence at all. Nor is propel completely unanalyzable (cf. proceed, proclaim, repel, expel).

In processing terms, a unit is part of a larger structure to the extent that it has a causal role in the activation and execution of the latter. A unit is like an event waiting to happen: it is easily elicited, and once initiated it runs its course unless disrupted by conflicting input. So as seen in Figure .(a), where shading indicates activation, activating certain facets of a unit will tend to unleash its full execution.

5 The grammatical construction evoked in their combination supplies some but not all of the added specifications.
6 One indication of this is their potential to be invoked anaphorically by do so. Hence this decline in acceptability: That {?printer / ?*computer / *propeller} does so quite efficiently.

[Figure: Activation and analyzability. Panels: (a) partial activation of a unit unleashing its full execution; (b) units A and B, via their adapted variants A' and B', giving rise to composite structure C; (c) C as a unit jointly captured by A' and B'.]

Figure .(b) represents the combination of units A and B to form a novel expression. On the basis of overlapping features, each unit is activated to cover some portion of the target. But since each occurs in the context of the other, as well as the target, the structures actually manifested are their overlapping adaptations A' and B'. These, together with other input, give rise to the full composite structure C. Because C is not a unit, and is only arrived at via the (adapted) activation of A and B, the latter have a causal role in C’s occurrence. Thus a novel combination (e.g. pig meat) is fully analyzable almost by definition. At the other extreme, a comparable but wholly unanalyzable unit (such as pork) is activated directly, in the manner of Figure .(a). Between these extremes are fixed expressions that are analyzable to some degree. A unit expression remains fully analyzable if A and B are still causally involved in its occurrence, in the manner of Figure .(b), that is, if A' and B' are still apprehended as adapted realizations of them. But if C is well entrenched, its components are as well, so that A' and B' are no longer just context-induced adaptations of A and B but have some cognitive status in their own right. As units, C and its components are themselves events waiting to happen with the potential to be activated directly.7 C is still highly analyzable when, as shown in Figure .(c), A' and B' still have a causal role: being elicited by the input, they contribute to the activation of the whole (i.e. they jointly capture C). But since C is also a unit, this intermediate step is no longer strictly necessary and over time will tend to disappear, the components being activated less frequently and to a lesser extent. As their activation diminishes, the processing that constitutes A' and B' may still occur as part of C, but as an implicit result of C’s activation instead of having a causal role in it. 
Figure .(a) sketches the typical outcome: a gradual loss of analyzability, whereby A' and B'—and a fortiori, the association with A and B—become less and less accessible within the whole. Among the factors abetting this development are the more efficient, ‘streamlined’ processing that comes with rehearsal, as well as adjustments to C that obscure or obviate the role of the components (e.g. the devaluation of awesome to mean ‘OK’). An expression with zero analyzability is a morpheme. But because so many expressions merely approximate this ideal, morphemic status is best regarded as a matter of degree. These notions allow the ready description of other problematic phenomena. Importantly, it is not required that the components of a complex unit be active or accessible to the same (a)

C A'

C >

B'

(b)

>

(c)

C A'

C' B'

A'

B'

A''

(d)

C A'

C' B''

B'

>

C B'

 .. Degrees and loss of analyzability

7

The direction of causal influence may then be reversed: instead of A' and B' being apprehended as instances of A and B, they tend to activate A and B secondarily due to overlap.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi

   



degree. Their role is equally evident in words like eatable, breakable, and washable, which approximate full analyzability, as in Figure .(c). But in durable, pliable, and visible, the stem is less evident than the suffix, as suggested in Figure .(b). To some extent the stem still contributes to the expression's form and meaning, being reinforced by such lexemes as duration, pliers, and vision. Lacking such support, words like edible, malleable, and risible are less analyzable overall, as in Figure .(c), to the point where positing a stem at all is questionable.

The structure in Figure .(d), where only one component is recognized within a larger whole, is unproblematic in this approach. In this way CG resolves the well-known problem of residues, for example the part left over when day is segmented out of Monday, Tuesday, Wednesday, etc., or when the in of inept is analyzed as the negative prefix. Such expressions are non-canonical, but they commonly do arise (through the vicissitudes of usage and language change) and are not unstable. They approach optimality in that a single unit is able to cover a semantically complex target. While they do so with less than full efficiency, their single component has the compensating virtue of reinforcing a crucial facet of their meaning. But to the extent that this is apprehended, it introduces a measure of non-optimality in the form of the uninterpreted residue.

Also contributing to this tension is a construction that the expression could instantiate were it fully analyzable. Due to their overlap, inept tends to activate the schema instantiated by negative adjectives like inexact, incapable, ineligible, impossible, etc. The attractive force of this well-established unit encourages the emergence of a morpheme, ept, that resolves the tension by making the expression analyzable.
Originally confined to inept, it may eventually be used independently.8

The configuration in Figure .(d) arises not only through loss of analyzability but also by reinforcement of form–meaning associations that happen to occur in a substantial number of larger units. This is the case with phonaesthemes, for example the fl of words like flit, flicker, fly, flutter, flip, fling, flux, flurry, flimsy, etc., which cluster around a notion of rapid, insubstantial movement. Though commonly ignored because they do not fit the classical conception, phonaesthemes arise in the same basic manner as other morphemes: through the reinforcement of recurring commonalities. Despite their lesser degree of entrenchment, salience, and semantic coherence, they enhance the meaning of the containing expressions and even facilitate new coinages. To be sure, not every instance of fl is assimilated to this pattern (e.g. flannel is simply flannel), but the same is true of every morpheme (e.g. the ant of plant is not identified with the insect name).

8 I have in fact used it in a jocular vein.
9 To the extent that the final t is identified as the past tense morpheme, its phonological pole is instead like Figure .(d).

[Figure: Analyzability at both poles or only one. Panels: (a) a canonical expression analyzable at both the semantic and the phonological pole; (b) went: semantic components GO and PAST, but a phonologically unanalyzable composite; (c) understand: phonological components under and stand, but a semantically unanalyzable composite.]

A final departure from the classical conception is expressions that are analyzable at only one pole. For instance, went is semantically analyzable but morphologically opaque, while understand is just the opposite. Using upper- and lowercase letters for semantic and phonological structures, a canonical expression, analyzable at both poles, is depicted in Figure .(a): at each pole the components are invoked and adapted to form the composite whole (C or c). By contrast, went and understand are as shown in Figures .(b) and .(c). Went comprises the semantic components GO and PAST, but phonologically only the composite whole (c) has a symbolizing function.9 Conversely, understand invokes the

phonological poles of under and stand (cf. stood and understood), but their semantic poles make no apparent contribution to its meaning (C).

This opacity reflects the general phenomenon known as blocking (or preemption). In processing terms, it is a consequence of structures that compete to fulfill a certain function being mutually inhibitory. When one is highly entrenched relative to the others, it is consistently able to suppress them and win the competition for activation. The composite form went is sufficiently well entrenched as the past tense of go that it preempts the otherwise expected goed. Likewise, the composite meaning of understand blocks the emergence of any meaning based on UNDER and STAND.
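In processing terms, blocking is simply a competition decided by entrenchment. The toy sketch below is my own illustration, not a CG formalism: the `past_tense` interface and the entrenchment values are invented. A stored composite form and the output of the general pattern both enter the competition, and the most entrenched candidate suppresses the rest.

```python
# Illustrative sketch (not from the chapter): blocking/preemption as a
# competition among mutually inhibitory candidates. Weights are invented.

def past_tense(verb, lexicon, default_rule):
    """Candidates compete to express 'past of verb'; the most
    entrenched one wins and suppresses the others."""
    candidates = []
    if verb in lexicon:                  # a stored composite form (e.g. went)
        form, entrenchment = lexicon[verb]
        candidates.append((entrenchment, form))
    # the general pattern always proposes a regular form (e.g. goed)
    candidates.append((default_rule["entrenchment"],
                       default_rule["apply"](verb)))
    # mutual inhibition: only the strongest candidate is activated
    return max(candidates)[1]

regular = {"entrenchment": 0.5, "apply": lambda v: v + "ed"}
lexicon = {"go": ("went", 0.9)}   # 'went' is highly entrenched

print(past_tense("go", lexicon, regular))    # 'went' preempts 'goed'
print(past_tense("play", lexicon, regular))  # no stored form: 'played'
```

Nothing here rules out goed categorically; it is proposed and then outcompeted, which mirrors the chapter's point that blocking is suppression rather than absence.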

Morphological structure

Characterizations in terms of processing activity are not an alternative to describing language structure but are aimed at elucidating its basic nature. Thus Figure .(a) is meant to explicate the actual import of saying that an expression AB is analyzable into components A and B. It is, however, both standard and more convenient to think in terms of structures rather than patterns of activity, so representations like Figure . are commonly used in CG. Their utility for analytical and descriptive purposes comes at the cost of ignoring adaptations, overlap, and emergent properties of the whole: limitations that must always be kept in mind.

In CG, grammatical analysis depends on semantic and phonological characterizations of the elements involved. An initial step is thus to spell out, in as much detail as possible (or at

[Figure: Composition viewed in terms of structures. Semantic components A and B compose into AB, phonological components a and b integrate into ab, with vertical links of symbolization between the poles. Legend: c = composition; i = integration; s = symbolization.]

[Figure: Composition in accordance with a constructional schema. Panels: (a) the compound thígh bòne, with correspondence lines linking thígh to the schematic body part evoked by bóne; (b) the constructional schema it instantiates, specified over words (w) ordered along the temporal axis (t) and forming a word-like composite (w') with initial primary stress.]

least as necessary for a given purpose), the internal structure of component elements as well as how they relate to each other and to the composite whole. Figure .(a) shows the minimum needed for a grammatical description of the compound thigh bone. At the semantic pole, heavy lines represent an expression's profile, that is, its conceptual referent. Thigh profiles a subpart of a leg, while bone designates the rigid inner portion of a body part specified only schematically ( . . . ). Dotted lines indicate conceptual overlap by showing the correspondence of particular substructures. Here the schematic body part evoked by bone is identified as a thigh. The composite expression inherits its profile from bone: the referent of thigh bone is the bone (not the thigh). At the phonological pole, thigh and bone are words (w) specified as appearing in that sequence along the temporal axis (t). They combine to form a higher-level word-like structure (w') with a single primary stress, which falls on the initial element.

Figure .(b) sketches the constructional schema instantiated by thigh bone and countless other expressions on the same pattern. Emerging via the reinforcement of their commonality, the schema is immanent in such expressions and accessible for the apprehension of new ones. It specifies certain features not predictable from the components: the nature of their overlap, inheritance of the profile from the second element, and primary stress on the first element.

In the relation between component and composite structures, thigh bone comes reasonably close to the ideal case of full analyzability and full optimality: thigh and bone occur independently; they are clearly recognized in this expression; they are manifested with little distortion; and emergent properties are essentially limited to those specified by the schema. When they occur alone, thigh and bone are full words with primary stress and a certain canonical duration.
Their adaptation as part of the compound, resulting from temporal compression into a word-sized structure with only one primary stress, leaves both elements clearly discernible within the whole. And despite the imposition of a single profile, their meanings are as well.

The hallmark of compounds, the distinctive property of their prototype, is the juxtaposition of roughly co-equal elements that normally occur as independent words. More usual is for the components of complex words to exhibit the asymmetry reflected in the labels stem and affix. These can be characterized in terms of fundamental notions of CG (Tuggy ). Key factors are autonomy/dependence and the abstraction of units from usage events.
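The compound schema described above can be caricatured computationally. The sketch below is my own illustration, not CG machinery: the class `SymbolicUnit` and its fields are invented, and it deliberately omits the correspondences, adaptation, and emergent properties that the chapter stresses. It captures just the two specifications the schema contributes: the composite inherits its profile from the second element, and primary stress falls on the first.

```python
# Hypothetical sketch of the noun-noun compound schema behind thigh bone:
# profile inherited from the second (head) element, initial primary stress.

from dataclasses import dataclass

@dataclass
class SymbolicUnit:
    form: str        # phonological pole (simplified to a string)
    profile: str     # conceptual referent at the semantic pole

def compound(first: SymbolicUnit, second: SymbolicUnit) -> SymbolicUnit:
    """Compound schema: the composite profiles what the second element
    profiles; a single primary stress falls on the first element."""
    composite_form = first.form.upper() + " " + second.form  # caps mark stress
    return SymbolicUnit(form=composite_form, profile=second.profile)

thigh = SymbolicUnit(form="thigh", profile="thigh")
bone = SymbolicUnit(form="bone", profile="bone")

tb = compound(thigh, bone)
print(tb.form)     # stress on the first element
print(tb.profile)  # the referent is the bone, not the thigh
```

What the schema cannot supply, and this sketch does not pretend to, is the emergent specification that the bone is the one internal to the thigh: that is exactly the kind of property the chapter attributes to the composite whole rather than to the pattern.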


One structure, D, is dependent on a more autonomous structure, A, to the extent that D requires A for its manifestation; hence D makes schematic reference to A as part of its own characterization. For example, the conception of a relationship requires the conception of its participants, and thus invokes them at least schematically. Also, while vowels can be realized autonomously, consonants require their support to be fully manifested.10 At issue here is the A/D-alignment of symbolic elements in complex expressions, where a stem is autonomous and an affix dependent. Quickly can thus be analyzed as (quick)A plus (( . . . ) ly)D, the latter making schematic reference ( . . . ) to a stem. Their integration yields a higher-level autonomous structure: ((quick)A ly)A'.

Well-known factors influence an element's conventionalization as a stem or an affix. Certain kinds of notions, such as physical objects and events, are normally coded by stems, whereas affixal meanings are usually more abstract or schematic. And in general, stems tend to be of greater semantic and phonological heft than affixes. Some elements are too impoverished semantically (e.g. a case marker), or too dependent phonologically (e.g. a suprasegmental morpheme), to stand alone as a stem. These factors are less than decisive, however, for over a considerable range of cases the same meaning or the same segment sequence can function in either capacity.11

Ultimately, an element's status as stem or affix is shaped by usage, reflecting the variety of contexts it appears in. As units emerge by the reinforcement of recurring commonalities, they incorporate any consistently occurring facets of the usage events giving rise to them; these include not only their intrinsic content, but also recurrent features of the context.
If an element regularly occurs in a particular kind of structure, a schematized representation of that structure is thus incorporated as part of its overall characterization (Langacker , b). Because it only occurs in words like quickly, sadly, possibly, etc., the morpheme ‑ly makes schematic reference to an adjectival stem and the construction effecting their integration: (( . . . )ADJ ly)ADV. These are actually part of the morpheme in the sense that ‑ly is only activated in the context of this structural frame.12 By contrast, forms like thigh, bone, and quick are well entrenched as independent words. Their occurrence in larger structural frames may well achieve the status of units which they tend to activate, but none is so well entrenched that it is necessarily activated and precludes their independent use. The difference between a compound and a stem + affix combination is thus as sketched in Figure ., where large boxes represent autonomous structures that can occur independently. In the compound thigh bone both components are autonomous, as is the composite whole. But in bones the plural morpheme is dependent, evoking a schematic stem which bone instantiates. Whereas bone is self-contained, ‑s is manifested only in the context of the full expression.

10. These are matters of degree, often with dependence in both directions, but there is usually an overall asymmetry (Langacker : §., : §..). Note that this characterization of autonomy is non-distributional: it is not required that an autonomous element actually occur alone. For example, in a language where the minimal syllable is CV, a vowel does not stand alone but is still a vowel and autonomous relative to a consonant.
11. E.g. English want is comparable semantically to the derivational suffix -viču of Luiseño (a Uto-Aztecan language), which certainly has the phonological heft to be a stem.
12. This is equivalent to describing -ly as a constructional schema one of whose component structures is (relatively) specific.

[Figure .. Compounding vs. affixation: diagram contrasting (a) the compound thigh bone, in which both components (THIGH, BONE) and the composite whole are autonomous, with (b) the plural bones, in which the dependent morpheme ‑s evokes a schematic stem ( . . . PL) that bone instantiates.]

The flip side of D requiring the support of A for its manifestation is that D is to some extent—and sometimes exclusively—manifested through its effect on A. As it must in order to be autonomous and play its supporting role, a stem incorporates substantial semantic and phonological content. This may be augmented by an affix, but the added content is usually quite limited. Phonologically, for example, bone consists in a complex syllable which the plural augments with just a consonant. At the semantic pole, bone designates a very specific type of object, and while plural ‑s adds to this the conception of a multiplex mass, the entities comprising this mass are wholly unspecified. Now the augmentation of content is itself a kind of adjustment to the stem. However, not every adjustment involves additional content. A stem may, for instance, be affected by an internal shift in prominence, like the placement of primary stress or the choice of profile. At the extreme, a morphological element adds no content at all to the stem, consisting solely in such adjustments. This is possible at either pole or at both. An instance of the latter is the derivational element relating verbs like expórt, protést, and conflíct to the corresponding nouns: éxport, prótest, cónflict. Semantically it shifts the profile from an event to an associated thing. At the phonological pole, primary stress is shifted to the first syllable. Cases like these are awkward using any representation based on the building-block metaphor, including the format in Figures .–.. For example, what should be identified as the plural morpheme at the phonological pole of men? It exemplifies a process morpheme, consisting not in the vowel e per se, but rather in the modification of a to e (ablaut). It might thus be shown as in Figure .(a), depicting the modification as an abstract building block (Langacker : ). 
A more perspicuous notation, one that arguably reflects the dynamic nature of these structures, takes the autonomous element (A) as a starting point, an initial structure whose canonical implementation is adapted or overridden by dependent operations (–––>) producing the actual composite form (A'): (A) –––> (A'). The notations and examples in Figure .(b)–(e) illustrate the spectrum of possibilities. A compound comprises two autonomous structures, and while they adapt to one another, their main contribution is substantive. Plural ‑s is dependent and less substantial, but it resembles a building block in that it adds a segment to the stem. In cases like men, the dependent element has no phonological substance of its own, consisting instead in the discrepancy between the stem and the composite whole.13 Finally, the plural of sheep

13. There are of course mixed cases, e.g. children.

[Figure .. Morphological autonomy and dependence: (a) the process morpheme as an abstract building block, man/men with …a… modified to …e…; (b) compounding, ((A1) (A2))A', e.g. ((thigh)A1 (bone)A2)A'; (c) suffixation, (bone)A –––> (bones)A' = ((bone)A s)A'; (d) processual, (man)A –––> (men)A'; (e) degenerate, (sheep)A –––> (sheep)A' = ((sheep)A)A'.]

represents the degenerate case where the operation relating the stem to the composite whole consists in doing nothing at all. From a dynamic perspective we can speak of a plural morpheme in all three cases; they simply vary in the extent to which its phonological realization is independently observable. Starting from an autonomous symbolic element (a stem), any number of dependent elements can successively apply, each deriving a higher order autonomous structure: ((((A) D)A D)A D)A . . . Their modification of a stem can be of any nature (prefixal, suffixal, suprasegmental, processual, degenerate). Any portion of a complex word can undergo entrenchment, resulting in symbolic units that vary in analyzability and the extent to which their form and meaning are compositional. The initial stem (A) is a root, but since it implies non-analyzability, root status is a matter of degree; depending on the threshold adopted, the root of propellers can be identified as propeller, propel, or just pel. Though typical of morphological structure, this layered organization is not the only possibility. Compounds are a prevalent alternative. Another well-known exception is the Semitic pattern based on the interdigitation of consonantal stems with vocalic expressions of derivation and inflection. This is unproblematic in CG, as the abstraction of units always involves schematization and nothing confines it to a single locus. For example, Aramaic palax ‘worker’ consists of p . . . l . . . x . . . ‘work’ plus the agentive morpheme . . . a . . . a . . . , the former representing the reinforced commonality of palax, plaxa (infinitival), palxa (third feminine singular jussive), etc., and the latter abstracted from forms like palax, dagal ‘liar’, and daras ‘student’ (Rubba ; Langacker : ch. ). 
At the phonological pole they are mutually dependent and mesh to form the composite expression.14 Such cases underscore the point that morphological elements are not limited to segment sequences, nor is their integration just a matter of juxtaposition (as with building blocks). It is important in this regard not to confuse two dimensions of phonological organization, referred to in CG as unipolar and bipolar. Unipolar structures are those established on purely phonological grounds: the organization of segments into syllables, syllables into words, words into phonological phrases, etc. By contrast, bipolar elements are delimited by their symbolizing function, constituting the phonological poles of symbolic structures. While unipolar and bipolar structures often coincide (making for processing efficiency), they have different rationales and thus are frequently cross-cutting. In unipolar terms, for

14. The stem is characterized as such on the basis of semantic heft and conceptual autonomy.

instance, chipmunks decomposes into the syllables chip and munks, whereas in bipolar terms it comprises the morphemes chipmunk and ‑s. Being more abstract in nature, bipolar elements may not be directly discernible in the phonological substance of unipolar structure (e.g. the symbolization of plurality is not evident in men alone). And when it does have segmental content, a dependent morpheme is not simply juxtaposed to a stem as a separate phonological element. Instead they adapt to one another, and do so in accordance with the constraints of unipolar structure. In purely phonological terms, the result of morphologically integrating bone and ‑s is not an expression comprising separate chunks, but a single syllable incorporating ‑s as the final consonant in the coda. If the stem should be polysyllabic, a suffixal consonant is naturally incorporated in the coda of the final syllable; in chipmunks, ‑s combines morphologically with the stem as a whole, even though phonologically it is part of munks in particular. The same holds for the pluralization of a compound, for example scarecrows. There might appear to be a mismatch between meaning and grammar: semantically pluralization applies to scarecrow as a whole, but morphologically it applies to crow. But this is really just a matter of unipolar and bipolar phonological structure being different in nature.
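The spectrum of plural realizations surveyed above (suffixal bones, processual men, degenerate sheep) lends itself to a small computational sketch. The function names below are our own illustrative inventions, not CG notation; the point is only that dependent elements behave like operations applied to an autonomous stem (A) to yield the composite structure (A'):

```python
# Sketch: dependent morphological elements modeled as operations applied
# to an autonomous stem (A) to yield the composite form (A').
# All names here are illustrative, not part of CG's formal apparatus.

def suffix_s(stem: str) -> str:
    """Suffixal realization: adds segmental content (bone -> bones)."""
    return stem + "s"

def ablaut_a_to_e(stem: str) -> str:
    """Processual realization: modifies the stem vowel (man -> men)."""
    return stem.replace("a", "e", 1)

def zero(stem: str) -> str:
    """Degenerate realization: the operation does nothing (sheep -> sheep)."""
    return stem

# Each stem conventionally activates one of the operations.
PLURAL_OPERATIONS = {"bone": suffix_s, "man": ablaut_a_to_e, "sheep": zero}

def pluralize(stem: str) -> str:
    # The operation relates the stem (A) to the composite whole (A').
    return PLURAL_OPERATIONS[stem](stem)

print(pluralize("bone"))   # bones
print(pluralize("man"))    # men
print(pluralize("sheep"))  # sheep
```

On this rendering, a "process morpheme" and a "zero morpheme" differ only in what the operation does to the stem, not in kind, which is the point of the dynamic notation.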

. Unification

CG aims at a unified account of the various aspects of language structure. Instead of positing separate components (phonology, morphology, syntax, lexicon, semantics), thereby raising the spurious question of how they interface, it recognizes only semantic structures, phonological structures, and their association as symbolic structures—the minimum needed for language to serve its symbolizing function. Moreover, semantic, phonological, and symbolic structures are described in analogous fashion in terms of the same basic factors, including specificity, entrenchment, overlap, prominence, categorization, adaptation, and autonomy/dependence. Instead of being separate and fundamentally different, lexicon, morphology, and syntax form a continuum fully describable as assemblies of symbolic structures. As such they are not distinct from semantics and phonology, which are better seen as inhering in them and constituting their two poles. Patterns of semantic composition, for example, are simply the semantic poles of constructional schemas (like Figure .(b)). Language itself offers no clear-cut basis for apportioning symbolic structures among a morphological, a syntactic, and a lexical component. If we characterize morphology and syntax as pertaining to words vs. larger expressions, the distinction does not consistently correlate with other factors, such as productivity or regularity. Within a word, there is likewise no clear-cut boundary between derivation and inflection (Langacker : –). And if the lexicon comprises the fixed expressions of a language (conventional units), it includes both derived and inflected words as well as larger structures.15

15. The factors involved are all matters of degree. In terms of size, for instance, compounds are intermediate between words and larger expressions. In terms of specificity, lexemes form a gradation with grammatical patterns (Langacker : , : §..; Goldberg : ).

[Figure .. Unified approach to phonological patterns: (a) coda cluster schemas [st] (east), [ft] (craft), [ps] (lapse), [sk] (risk), [bd] (ebbed), [vd] (raved), [gz] (lags), [vz] (hives), with the higher-level schemas [CoCo], [CvCv], and [CαCα]; (b) the imagined English plural of eve, [iv] realized as [if] before [s], instantiating the templates [VC] and [VCoCo]; (c) the plural variants [z] (beads [bidz]) and [s] (beats [bits]), both instantiating the archiphoneme [S]; (d) the stem variants [lif] and [liv] of leaf; (e) the schema for the leaf/leaves pattern (…f in the singular, …vz in the plural).]

A matter requiring further discussion is the non-distinctness of phonology from lexicon and grammar. In the usage-based perspective of CG, their non-distinctness follows from the symbolic nature of lexical and grammatical structures, the abstraction of units from usage events, and the immanence of schemas in their instantiations. These offer a unified account of phenomena that are often treated separately. Among the purely phonological units of a language are schemas representing specific segments (phonemes), segment types (natural classes), and permitted combinations (phonotactic patterns) that vary in size and level of specificity. For succinct illustration, let us limit attention to obstruent clusters appearing as codas in English syllables. Figure .(a) gives a few of the units representing recurring combinations: [st], [bd], etc.16 Also shown are schemas for patterns at higher levels of abstraction, where Co is a voiceless obstruent, Cv a voiced one, and CαCα indicates that the consonants agree in voicing. In CG, these phonotactic rules are not distinct from expressions but inhere in them, arising as their reinforced commonality. [st] is thus immanent in east, best, twist, etc., as are the higher-level patterns CoCo and CαCα. Other phonological patterns are inherent in symbolic assemblies rather than single elements. Consider a rule that enforces the patterns in Figure .(a) by devoicing an obstruent to agree with one that follows: Cv –––> Co / __ Co. For sake of comparison, let us

16. Codas being dependent, these actually make schematic reference to the syllabic nucleus, e.g. [Vst].

ascribe this rule to an imagined version of English where it applies with full consistency and the plural is always marked by the suffix [s]. Hence the respective plurals of reef [rif ] and eve [iv] are [rifs] and [ifs]. Figure .(b) shows the formation of the latter. [iv] is the usual form of eve—the one that appears except in voiceless contexts. When it does appear in such a context, one case being the plural, it is realized as [if] through adaptation to the following obstruent. A dashed arrow represents this discrepancy between the canonical variant [iv] and its actual manifestation. Also indicated in .(b) are various schemas instantiated by this assembly. Immanent in [iv] and [ifs] are the syllabic templates [VC] and [VCoCo]. The assembly as a whole instantiates the schema at the left, representing a general pattern whereby a voiced obstruent is manifested as its voiceless counterpart when it finds itself in a voiceless context. It is learned through usage from countless expressions displaying this discrepancy, and once established as a unit it can figure in the apprehension of new expressions. By hypothesis, it is fully general in regard to obstruents (which are characterized schematically) and well enough entrenched to be activated for the capture of any CvCo combination. And though induced by pluralization, it makes no essential reference to any particular morphological context. This schematized pattern of adaptation therefore constitutes what is usually described as a general, productive phonological rule. The difference between a purely phonological rule and a morphophonemic rule is that the latter is partially characterized in bipolar terms. Like the imagined English of Figure .(b), real English exhibits a pattern of obstruent devoicing ensuring conformity to the phonotactic pattern CαCα for syllable codas. 
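Under the stated assumptions about this imagined English (the plural is always [s], and devoicing applies with full consistency), the rule Cv –––> Co / __ Co can be sketched in a few lines. The segment pairs and function names below are illustrative choices of ours, not claims about the framework:

```python
# Toy sketch of the imagined English: a voiced obstruent devoices before
# a voiceless obstruent (Cv -> Co / __ Co). The devoicing pairs are
# illustrative assumptions, not a full segment inventory.

DEVOICE = {"v": "f", "z": "s", "b": "p", "d": "t", "g": "k"}
VOICELESS = set(DEVOICE.values())

def attach(stem: str, suffix: str) -> str:
    """Integrate a dependent suffix, adapting the stem-final obstruent
    to the following voiceless context."""
    if suffix and suffix[0] in VOICELESS and stem[-1] in DEVOICE:
        stem = stem[:-1] + DEVOICE[stem[-1]]
    return stem + suffix

print(attach("iv", "s"))    # ifs  (eve -> [ifs])
print(attach("rif", "s"))   # rifs (reef -> [rifs])
```

The discrepancy between the canonical variant [iv] and its manifestation [if] corresponds to the dashed arrow in the figure: the rule is simply a schematized pattern of adaptation.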
But instead of being productive and purely phonological, it inheres in the relationship between established variants of particular morphological elements, notably the regular plural, possessive, and tense inflections. It suffices to consider the basic variants of the plural, exemplified in Figure .(c): normally [z] (as in beads and toes), it is instead realized as [s] after Co (beats). The dashed arrow indicates that [z] is prototypical, [s] being an extension induced by the voiceless context. With such well-established forms, however, the extent to which this asymmetry still obtains is uncertain and unimportant; as a well-entrenched unit, plural [s] is directly accessible— [z] need not have a causal role in its activation. These variants are of course susceptible to schematization. Both instantiate the abstract segment [S] (an archiphoneme, if you like), that is, an alveolar fricative neutral in regard to voicing. Also conforming to the phonotactic pattern is the plural noun leaves. But here, as shown in Figure .(d), the stem has variant forms: normally [lif], it appears as [liv] only in this construction. The morphological context is quite specific; [liv] is limited to the plural, [lif ] occurring in both the possessive (the leaf ’s color) and in the present tense if we use the lexeme as a verb (It leafs in the spring). A further morphological limitation is that only certain nouns participate in this pattern, among them wives, thieves, knives, shelves, loaves, lives, and elves (cf. reefs, fifes, oafs, staffs, whiffs, beliefs). It is nonetheless a pattern, a minor regularity of English described by the schema in Figure .(e).17 It represents the reinforced commonality of the alternations leaf/leaves, wife/wives, etc. Minor patterns of this sort raise a number of theoretical issues (Lakoff ). 
If the schema in Figure .(e) is indeed abstracted as a unit, why is it not employed in

17. It is claimed in CG that basic grammatical categories, such as noun and verb, are susceptible to conceptual characterization (Langacker : ch. , : ch.). A noun is said to profile a thing (abstractly defined).

apprehending new expressions? How do we know that the plural of smurf is smurfs, and not *smurves? The answer lies in the competition of relevant units for the privilege of categorizing a target element. Key factors include the degree of overlap with the target and the relative entrenchment (ease of activation) of competing units. Cases like the English plural, with a productive major pattern and an array of non-productive minor ones, are thus a matter of the former being so deeply entrenched (relative to the others) that in dealing with novel expressions it virtually always wins the competition.18 It functions as a strong attractor, and by capturing a target it blocks the alternatives. Why, then, do we say leaves (reflecting a minor pattern) instead of *leafs (in accordance with the major pattern)? The answer, of course, is that leaves is itself a well-entrenched unit, hence readily elicited in constructing or understanding expressions. Moreover, it is highly specific compared to the schemas representing general patterns, and its finer-grained detail offers many more points of overlap with a target. So given that the target being formulated is the plural of leaf, its extensive overlap with the unit leaves ensures that the latter will be activated to express this notion, thereby blocking the otherwise expected leafs.19 In the usage-based approach, schemas are abstracted from occurring expressions, and expressions learned as units coexist with any schemas they instantiate. Schemas representing productive patterns are accessible for the categorization of novel expressions (e.g. smurfs), but are also immanent in instantiations which have been entrenched as units (e.g. toes, birds, peanuts). For non-productive patterns, only the latter obtains—the instantiating expressions must themselves be units, or else the major pattern would prevail.20 There is no need to identify the participating lexemes with diacritics, or as members of a special morphological class. 
Instead, the proper distribution is given directly by the array of unit instantiations: the fact that leaf, wife, thief, etc. pluralize by the pattern in Figure .(e) is simply a matter of leaves, wives, thieves, etc. being well-established units. These plurals form a category just by virtue of the schema being immanent in them. Expressions like leaves are thus retrieved from memory, whereas novel forms such as smurfs are assembled in accordance with a schema (the CG equivalent of a rule). However, these alternate modes of processing do not imply separate processing systems, as proposed by Pinker and Prince (), for they can both be implemented in a model of the sort presented here, involving overlapping patterns of activation, degrees of entrenchment, and competition for activation in the categorization of target expressions.21 A single-system account is therefore adopted both for the sake of unification and also because the distinction between productive and non-productive morphological patterns is anything but absolute (Plunkett and Juola ; Simonsen ). In no small measure the unification achieved in CG resides in the pervasive role of categorization, characterized as the activation of one structure—generally a schematic

18. Its entrenchment is due to type frequency, i.e. the proportion of eligible forms it applies to (Bybee and Hopper ; Langacker c).
19. Indeed, if leafs occurs it is interpreted as a distorted realization of leaves, hence ill-formed (non-conventional).
20. Hence participation in minor patterns tends to correlate with frequency. Infrequent forms are more susceptible to regularization.
21. The difference was sketched in Figure .: diagram (c) represents the activation of a well-entrenched unit, and (b) the formation of a novel expression (one of the activated units—not shown—being a constructional schema specifying the integration of components).

unit—in the apprehension of another in which it is fully or partially immanent.22 Categorization can be based on either a single factor (e.g. the class of obstruents in a language) or multiple factors (e.g. the class of voiceless obstruents). The properties in question can be either intrinsic or extrinsic to the target. For example, leaf, wife, thief, reef, oaf, belief, etc. can all be categorized intrinsically as nouns that end in [f ]. By contrast, the class comprising leaf, wife, thief, etc. (but excluding reef, oaf, belief, . . . ) is based on the extrinsic property of participating in the plural construction in Figure .(e). The example also shows that the basis for categorization may not be observable in a single expression, but only over multiple expressions. The distinctive property of leaf, wife, thief, etc. is only evident from a comparison of the singular and plural forms. These factors allow the emergence of categories based solely on parallel participation in multiple constructions, such as gender or conjugation classes.23 Masculine nouns in Spanish, for example, occur with the definite article el (el techo ‘the roof ’), the indefinite article un, adjectives ending in o (un techo rojo ‘a red roof ’), and so on (Langacker : §., : §..). Structural frames representing these constructions are part of the characterization of individual nouns, and collectively they define a category based on multiple grammatical behaviors in the same way that the schema in Figure .(e) does so with respect to a single behavior. And as with individual schemas, a complex frame can be productive, sanctioning new expressions that result in additional category members. A morphological paradigm, such as one comprising all the inflected forms of a verb stem, is a complex frame with a substantial degree of systematicity: the component schemas represent multiple dimensions (e.g. 
person, number, tense), each with multiple values, ideally allowing any combination (Langacker : §..). Each form reflects the stem’s occurrence in a structural frame, comprising one or more component schemas, that specifies a particular value for each dimension. It can either be learned as a unit or assembled following the frame’s specifications. As in general, paradigmatic forms exhibit varying degrees of regularity and analyzability. It may be that each value is consistently symbolized by a distinct element (e.g. Spanish perr-o-s ‘dog + masculine + plural’); the frame then consists in multiple component schemas representing productive patterns of morphological composition. At the opposite extreme, the values are coded syncretistically as the complex meaning of a single morphological element (e.g. Spanish am‑é ‘love + first:singular:past:indicative’). It often happens, of course, that finer distinctions are made in certain regions of a paradigm than in others; for example, person and number distinctions consistently made in the present tense might be neutralized in the past. This is a matter of the defining frames being more or less specific: a set of frames conflating present tense with particular values for person and number vs. a single past-tense frame schematic in regard to person and number. An additional factor is the overriding of paradigmatic regularities by lexical idiosyncrasies, as in the case of English be. Owing to specificity and entrenchment, a word like is completely blocks the regular form (*bes).

22. A schema is simply the reinforced commonality of instances. By this definition, there is no essential difference between the schema and the exemplar models of categorization (Langacker , ).
23. To varying extents these correlate with intrinsic semantic or phonological properties, which often define a prototype. The point, though, is that even classes of arbitrary membership can in principle be accommodated.
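The competition between retrieved units and the major pattern discussed in this section can be caricatured in a few lines. This is a deliberately simplified sketch of our own: entrenchment is reduced to listedness, and overlap with the target to exact lookup.

```python
# Sketch of a single-system account: well-entrenched units are retrieved
# directly and block the major pattern; novel targets fall through to the
# productive schema. The listed lexicon is illustrative, not exhaustive.

ENTRENCHED_PLURALS = {"leaf": "leaves", "wife": "wives", "thief": "thieves",
                      "man": "men", "sheep": "sheep"}

def plural(noun: str) -> str:
    # A stored unit overlaps the target more closely than any schema,
    # so it wins the competition and blocks e.g. *leafs.
    if noun in ENTRENCHED_PLURALS:
        return ENTRENCHED_PLURALS[noun]
    # Otherwise the deeply entrenched major pattern captures the target.
    return noun + "s"

print(plural("leaf"))   # leaves (retrieved unit blocks *leafs)
print(plural("smurf"))  # smurfs (novel form assembled by the schema)
```

The lookup-then-default structure is one processing system, not two: both routes are the same mechanism of competition for activation, differing only in which unit wins.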


. Conclusion

Space has permitted only brief presentations of certain claims and notions of CG which have been described, exemplified, and justified more fully in other venues. In the case of morphology, their application is plainly in need of further elucidation, illustration, and substantiation. This will have to be grounded in extensive, detailed descriptions of the morphology of diverse languages. It will further require a more comprehensive characterization of phonology in CG terms than has thus far been available (e.g. Rubba ; Taylor ; Tuggy ; Nathan ; Langacker , ). The discussion has nevertheless indicated how the framework approaches a broad range of morphological phenomena. None seem inherently problematic, and certain classic problems (e.g. residues, interfaces) are either resolved or never arise. Fundamental principles of CG allow the seamless integration of morphology in a unified account of language structure.

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi

  ......................................................................................................................

  ......................................................................................................................

    

. Introduction

Construction Morphology (CxM) is a theory of morphology developed by Geert Booij at the beginning of this century (Booij a, b, , , a, b, c, a, b, a, b, c, a, b, c, d, a, b, c, , , , a, b, ). Its programmatic name, established since Booij (a), places it in the general framework of Construction Grammar (CxG) (cf., among others, Fillmore, Kay, and O’Connor ; Goldberg , ; Hoffmann and Trousdale ),1 where it caters to the domain of morphology. To date, most of the work concerns word formation, although recent publications (e.g. Booij ) show that the framework can be readily extended to inflection, yielding a Word-and-Paradigm type of model (Blevins, Ackerman, and Malouf, Chapter  this volume). The close ties between CxM and Construction Grammar are particularly useful for modelling phenomena that straddle the boundary between syntax and morphology, and more generally between grammar and the lexicon, as we will see in this chapter. The model rests on three central notions: the construction, the schema, and a lexical network sometimes referred to as the constructicon. Constructions in CxM (and CxG) are conventionalized pairings of form and meaning. Constructions may vary in complexity and specificity. In CxM the minimal construction is the word; hence, CxM is a word-based model of morphology.2 However, lexical

1. As is well known, CxG is a family of theories rather than a single theory (see Östman and Fried ; Hoffmann and Trousdale ), which is why people often speak of construction grammars. As we will see, CxM shares some mechanisms with some versions of the theory, but not with others.
2. Goldberg (: ) presents ‘morphemes’ as the smallest construction in her lexicon–syntax continuum. Later publications (Goldberg ) are more in line with CxM and regard the word as the minimal construction.


constructions larger than words, like idioms and other multi-word expressions (MWEs), are also part of the picture. Constructions can be partly or wholly abstract, which means that they contain variables. Such templates or schemas motivate the structure of existing words and enable the coinage of new ones. Hence, they function like rules in other theories of grammar (see §.. for differences between schemas and rules). Constructions are hierarchically organized, such that fully specified constructions, that is, words and multi-word units, are daughters of more schematic constructions, that is, schemas. The mental repository of all constructions in a language is the constructicon, which constitutes all lexical and grammatical knowledge of a given language (Goldberg ; Hilpert ). CxM assumes the same general architecture for morphological and for phrasal constructions, resulting in a flexible model particularly suited for in-between phenomena, both synchronically and diachronically. At the same time, CxM endorses the view that there are items, properties and regularities specific to the level of the word, which justifies morphology as a separate domain in linguistic description and theory.

. Basic tenets

The basic architecture of what we might call the ‘construction grammar of words’ follows from a number of theoretical assumptions that put it in natural kinship to some theories of grammar and in natural opposition to others. These assumptions can be characterized by the terms sign-based, word-based, and usage-based. Together, these notions provide the foundations for a well-defined theory of morphological items and the patterns of their interaction.

.. Sign-based

CxM is based on the notion that constructions are signs, that is, conventionalized pairings of form and meaning. Form and meaning, which are connected by symbolic correspondence links, are themselves complex bundles of properties: the form level can be divided into a phonological (PHON) and a morphosyntactic tier, the latter containing morphological (MORPH) and syntactic (SYN) features, whereas the meaning level is assumed to include semantic (SEM), pragmatic (PRAG), and discourse-functional (DISC) information. Therefore, morphological items (like all other constructions) are represented as independent but connected layers of structure, as illustrated in Figure .. This general understanding of the nature of linguistic representations bears testimony to the model’s affiliation with the Parallel Architecture (Jackendoff , , a, b, ; Culicover and Jackendoff ; Jackendoff and Audring, Chapter  this volume), although the Parallel Architecture puts the three layers PHON, MORPH/SYN, and SEM on an equal footing and assigns no special prominence to the symbolic dimension, that is, the SEM–PHON and SEM–MORPH/SYN interface. Assuming the architecture in Figure ., which is widely adopted in the CxG literature, a morphological construction has a number of properties. Let us take the Italian word faro

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi

 



form:
  Phonological information (PHON)
  Morphological information (MORPH)
  Syntactic information (SYN)

meaning:
  Semantic information (SEM)
  Pragmatic information (PRAG)
  Discourse-functional information (DISC)

Figure .. The internal structure of a construction
Source: Adapted from Croft (: ).

‘lighthouse’ as an example. The PHON tier of this word includes the phonemic representation /ˈfaro/, its status as a phonological word (ω) and its syllabic structure (σσ). Morphosyntactic features are encoded within MORPH and SYN: in the case of faro, the syntactic category N will be specified, together with the information that the noun is countable, masculine, and a member of the first inflectional class for Italian nouns (i.e. ‑o/‑i nouns, according to Thornton’s () classification).

The meaning part of (morphological) constructions is also composite, since it may contain semantic information (SEM), pragmatic information (PRAG), and discourse-functional information (DISC). SEM has to do with meaning in the narrow sense, that is, lexical and grammatical meaning associated with words and word formation schemas. In our case, SEM will specify a denotation () and possibly other more specific information (e.g. the ontological class of faro, i.e. ). PRAG has to do with pragmatic aspects that may be associated with certain words, including connotation (vs. denotation) and speech act level properties. Diminutives (e.g. Italian lettera ‘letter’ > letterina lit. letter-DIM ‘little letter’) are a case in point. According to Dressler and Merlini Barbaresi (b: ), diminutives can have a morphopragmatic feature [non serious] that is used ‘for lowering one’s responsibility towards the speech act being performed, or . . . one’s commitment to its illocutionary force’. In CxM, such features can be encoded in a straightforward fashion, together with genre or register information (if present). For instance, Baroni, Guevara, and Zamparelli () show that some NN compounds in Italian (e.g. raccolta differenziata rifiuti ‘selective waste collection’) are typical of a particular text type called ‘headlinese’ (headlines or telegraphic language in general).3

3 Cf. also Ruppenhofer and Michaelis (2010) on expressions like Contains alcohol (with no subject) as instances of labelese.




    

In lexical representations, all these pieces of information are held together by correspondence links, as we will see in §...
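For illustration only, the layered sign structure just described can be mimicked as a small data structure. This is our own sketch, not part of the CxM formalism; the class and field names are assumptions, and the feature values for faro paraphrase the description above.

```python
from dataclasses import dataclass, field

@dataclass
class Construction:
    """A construction as a pairing of form layers and meaning layers."""
    phon: dict                 # phonological tier (PHON)
    morph: dict                # morphological features (MORPH)
    syn: dict                  # syntactic features (SYN)
    sem: dict                  # semantic information (SEM)
    prag: dict = field(default_factory=dict)  # pragmatic information (PRAG)
    disc: dict = field(default_factory=dict)  # discourse-functional info (DISC)

# The Italian noun faro 'lighthouse' as a fully specified construction
faro = Construction(
    phon={"segments": "/ˈfaro/", "pword": True, "syllables": 2},
    morph={"countable": True, "gender": "masculine", "infl_class": "o/i"},
    syn={"category": "N"},
    sem={"denotation": "LIGHTHOUSE"},
)
print(faro.syn["category"])  # → N
```

Each layer stays independent but is held together in one object, mirroring the idea that correspondence links bind the tiers of a single sign.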

.. Word-based

A major point of divergence among morphological theories lies in the representation of complex lexical items and their parts. Approaches can be characterized as primarily morpheme-based or word-based. Morpheme-based approaches see complex words as concatenations of morphemes, whereby each morpheme has a lexical entry of its own (a theory of this kind is defended in Lieber ; another is Distributed Morphology, cf. Halle and Marantz ; Siddiqi, Chapter  this volume). By contrast, word-based approaches take the word, not the morpheme, as the smallest lexical entry. This position is assumed by Word-and-Paradigm theories of morphology, for example Matthews (); Anderson (); Blevins, Ackerman, and Malouf (Chapter  this volume); Stump (Chapter  this volume). CxM, too, is a word-based theory. However, it does recognize morphemes as secondarily derived units of analysis—abstractively rather than constructively in the sense of Blevins (). Complex words such as the English agentive noun baker are represented as items stored in their entirety and featuring the same types of properties as simplex words, but with internal morphosyntactic structure, as sketched in Figure .. The CxM literature generally uses a linearized and simplified shorthand notation (merging PHON/MORPH/SYN into one level of form), as illustrated in () (adapted from Booij a: ):4

form:
  phon: ωi ; σσ ; /beɪkər/i
  morph/syn: [[V]αj suff]Nβi
meaning:
  sem: [PERSON who BAKEj]i
  prag/disc: -

Figure .. The internal structure of baker

4 The notation is not always consistent in the CxM literature. In this chapter, we will use: (i) capital letter subscripts such as A, N, V, etc. for lexical categories (A=adjective; N=noun; V=verb); (ii) lower case subscripts i, j, and k for indices; (iii) Greek letter subscripts α, β, and γ for

()
< [[bake]Vαj er]Nβi ↔ [PERSON who BAKEj]i >

Under a CxM understanding, the noun baker means ‘someone who bakes’ not because it is built from a morpheme meaning ‘bake’ and a morpheme meaning ‘PERSON who Vs’, but because there is a formal relation between verbs like bake, paint, or sell on the one hand and nouns like baker, painter, or seller on the other, correlating systematically with a semantic difference. These paradigmatic relations assign internal structure to baker, painter, and seller, which can be captured in a semi-specified construction or schema, as in () (adapted from Booij a: ):

()

< [[x]Vαj er]Nβi ↔ [PERSON who PREDj]i >

The schema is semi-specified because it consists of a specific suffix on the one hand and an unspecified variable x on the other. The variable is typed for the lexical category V and, potentially, for other formal features (symbolized as ‘α’) that delimit the set of verbs admitted in this construction; such typing serves to constrain the construction (see §..). Replacing the variable by appropriate lexical material yields a novel agentive noun.

As () and () illustrate, affixes in CxM are conceptualized as parts of constructions. The index binding the form of an affixed word to its meaning (here ‘i’) is placed on the outer boundaries of each level. This captures the assumption that the semantic contribution of affixes is ‘only accessible through the meaning of the morphological construction of which they form a part’ (Booij a: ). Thus, affixes are not stored on their own and do not have an independent meaning outside the structure they occur in. This is part of CxM being a word-based theory.

Schemas such as () above express a generalization that is traditionally captured by Word Formation Rules (WFRs; cf. Aronoff ). Like WFRs, constructional schemas ‘express the generative power of the grammar’ (Booij : ). However, schemas and WFRs are not totally equivalent. One difference is that WFRs are procedural and imply productivity by default, whereas constructional schemas are primarily declarative: they are static generalizations over a set of fully specified items (see §...). This prevents the system from overgenerating non-existing words and word-forms. It is also of relevance from a diachronic perspective, since unproductive word formation patterns may sometimes be reactivated in later stages of the language. Another important difference is that rules are input-oriented, since the output is derived from the input by means of (a number of) operations. Schemas, instead, are output-oriented, which has a variety of advantages, for example when dealing with non-concatenative phenomena such as prosodic, templatic, or subtractive morphology. Finally, rules are usually assumed to be part of ‘the grammar’, while listed words belong in ‘the lexicon’. No such distinction is made in CxM: simple

formal features; (iv) non-subscript letters x, y, and Y for variables, whereby lower case letters (x, y) stand for unspecified phonological material and capital letters (Y) for an unspecified lexical category. Specified phonological structure within constructions is rendered informally in italics (dog, ‑er) or in phonemic transcription (/dɔg/, /ər/), whereas semantic denotations and semantic operators (such as negation) are in capital letters (cf. DOG and NOT respectively). The arrow (↔) stands for the symbolic association between the form part and the meaning part. Angle brackets delimit the construction.




    

words, complex words, and morphological schemas are structured according to the same principles, and—crucially—are assumed to be stored in the same repository, the constructicon, which comprises both (what is traditionally referred to as) the grammar and the lexicon. A lexical entry, as Jackendoff puts it, ‘is more word-like to the extent that it is fully specified, and more rule-like to the extent that it contains variables’ (: ). The same holds for syntactic constructions in CxG. Hence, CxM subscribes to the general notion of a lexicon-grammar continuum (Goldberg : ) with lexical items of varying complexity and specificity, ranging from simple words, complex words, MWEs, and lexicalized phrases to morphological and phrasal schemas.

It follows that, in CxM, the concepts of ‘word’ and ‘lexical item’ (or ‘lexical unit’ or ‘listeme’) do not coincide. Lexical items, as normally intended, basically coincide with constructions. All words are lexical items (i.e. constructions) but not all lexical items are morphological words. The concept of ‘word’ in CxM is defined mainly in terms of cohesion: ‘ . . . cohesiveness is the defining criterion for canonical wordhood, whereas other properties such as being a listeme (a conventional expression) are clearly not to be seen as defining properties for wordhood’ (Booij b: ).

.. Usage-based

The assumption that complex items are stored and their relations are established in the form of schemas readily translates into an account of how morphology is acquired. CxM follows usage-based, or exemplar-based, models of language (Bybee , ; Tomasello ) by assuming that morphology (and grammar in general) is acquired bottom-up from complex words and phrases encountered in the input and retained in lexical memory. This view of language acquisition allows for inter-speaker variation, since there may be individual differences in the storage of items, the recognition of structure, and the construction of (sub)schemas. It also implies that speakers are sensitive to usage and frequency in building their grammar. This leads to a view of the lexicon as a richly structured environment, equipped with various links between lexical items that give rise to hierarchies such as that shown in Figure ..5 This portion of the hierarchical lexicon captures the fact that words like unsteady, unhappy, and unsuitable are connected by virtue of being linked to the un-A

< [x[y]Yj]Yi ↔ [SEM [SEMj]]i >
             |
< [un[y]Aj]Ai ↔ [NOT SEMj]i >
             |
< [un[steady]A]A ↔ [NOT STEADY] >   < [un[happy]A]A ↔ [NOT HAPPY] >   < [un[suitable]A]A ↔ [NOT SUITABLE] >

Figure .. Hierarchy for prefixation in English

5 The schema is adapted from Booij (: ). The notation is simplified for the sake of illustration.


 



construction, which, in turn, is an instance of an even more schematic construction that represents category-neutral prefixation in English. In constructionist terms, we would say that the higher-level schema is instantiated by a semi-specified construction (or subschema) with a prefix un- and an adjectival variable slot. This subschema, in turn, is instantiated by existing un- adjectives (i.e. fully lexically specified constructions). The same subschema may also be instantiated by a newly formed un- adjective that is not yet part of the mental lexicon.6

Although the hierarchy primarily defines vertical relations, items that are found at the same level of the hierarchy and are daughters of the same mother construction are perceived as paradigmatically related, much like co-hyponyms in lexical semantics. At each level of the hierarchy, more specific information is introduced (marked in bold in Figure .) that ‘fills’ the variables contained in the upper-level schema.

Hierarchies such as that in Figure . basically represent the lexicon-grammar continuum envisaged by constructionist approaches, since they illustrate how abstract patterns and specific words are part of the very same architecture: they are simply found at different levels in the hierarchy. More abstract schemas have a double role: on the one hand, they dominate fully specified complex words, thus accounting for their shared properties; on the other hand, they specify (or predict) how new complex words can be formed according to that schema. Items that are related vertically share many properties. In CxM, this fact is captured by the assumption that information from higher nodes is inherited by the lower nodes, by means of so-called Instantiation Inheritance Links (Goldberg ), which is why we refer to structures like Figure . as inheritance hierarchies (cf. §§.. and ..).
Throughout the history of linguistics, inheritance has often been invoked as a way to reduce redundancy in the lexicon (see e.g. Deo ; Flickinger ; Riehemann ; Sag, Wasow, and Bender ; cf. Booij b for a discussion). Since Jackendoff (), the controversy has raged between two theoretical models: the ‘impoverished-entry model’ and the ‘full-entry model’. According to the first, a lexical entry only contains the information that is idiosyncratic and not derivable from a dominating node. All other information can be inherited and is only represented at the source, the highest level in the hierarchy. This guarantees a maximally parsimonious architecture of the mental lexicon.

However, there is a growing consensus in linguistic theory that human memory is vast and may not be geared towards parsimony or redundancy reduction. For example, there is ample psycholinguistic evidence supporting the view that complex morphological items, including regular formations, are stored in the lexicon (see Blom, Chapter  this volume; Gagné and Spalding, Chapter  this volume; Archibald and Libben, Chapter  this volume; Schiller and Verdonschot, Chapter  this volume). In line with such findings, models have been proposed that assume all lexical entries to be fully specified, even though they share properties with schemas or words higher up in the hierarchical lexicon. CxM is

6 In this respect, we need to distinguish the terms construct and construction (cf. Goldberg ). Constructs are fully specified expressions used in actual discourse; constructions are signs stored in our constructicon (which may be abstract or partially/fully specified). Typically, abstract constructions in syntax are directly instantiated by constructs (e.g. the Ditransitive Construction is instantiated by sentences like Pat offered Mary a tea). In morphology, abstract constructions are usually instantiated by fully specified constructions (i.e. listed complex words). However, they may also generate constructs: this occurs whenever a new form is creatively coined within actual discourse (nonce formations).




    

one such model.7 This matches an exemplar-based approach in which children learn specific words first and build up schemas later by generalizing over them, without throwing away the information on individual complex words they have already stored in their lexical memory (Jackendoff ; Booij b). Thus, CxM assumes that the lexicon contains both schemas capturing generalizations and the individual words that gave rise to them. In this way, CxM avoids the rule/list fallacy (Langacker : ), ‘the unwarranted assumption that linguistic constructs are either generated by rule or listed, and that being listed excludes a linguistic construct from being linked to a rule at the same time’ (Booij a: ).

.. Summing up: the basic architecture

To summarize, CxM is a sign-based theory of morphology whose building blocks are constructions. Constructions are form–meaning pairings that vary in complexity and schematicity. The various layers of information encoded in the construction are ‘parallel’ in that each has independent entities and regularities. The various layers are conventionally related to one another. Moreover, CxM is word-based: affixes are not assumed to be standalone lexical entries (e.g. ‑er), but are bound to schemas in which the base the affix attaches to takes the form of a variable (e.g. [V ‑er]N). Schemas are the generative engine in word formation and inflection, whereas fully specified constructions tell us which words are actually instantiated and present in our mental lexicon. Both words and schemas are pieces of linguistic knowledge stored in the constructicon and organized in hierarchies.

CxM assumes a high degree of listing. One function of listing is to ‘[specify] the lexical conventions of a language’ (Booij ); thus, the lexicon needs to say that walker is an existing word of English, while the parallel formation *stander is not. In a sense, this may be interpreted as a difference between morphological constructions and syntactic constructions: in morphology, especially in word formation, constraints on productivity require knowledge of which possible complex words actually exist. In general, CxM assumes, in line with cognitive CxG, that every item is listed that displays idiosyncratic behaviour not fully predictable from its parts, or that occurs with sufficient frequency to cause its entrenchment in memory (Goldberg : ). This includes storage of fully regular formations. In certain respects, the constructicon resembles the ‘list of basic irregularities’ that Bloomfield (: ) imagined, since all constructions are Saussurean signs.
However, there are many crucial differences between the constructicon and the Bloomfieldian lexicon. First, the constructionist lexicon is not a ‘list’, but a richly structured network that captures (sub)generalizations at various levels. By virtue of these generalizations, it states a number of regularities alongside the irregularities. Moreover, its units are not necessarily ‘basic’, since constructions are not limited to simplex words, but include complex units, both word-sized and larger, be they fully specified or in the form of abstract schemas (or both). Indeed, there are constructions that are fully specified but are not words (e.g. sentence-level MWEs like proverbs and sayings), and there are word-sized constructions that are not fully specified, that is, abstract schemas for inflection and word formation.

7 This is in line with cognitive CxG (Goldberg ), whereas other constructionist models adopt an impoverished-entry view (e.g. Kay and Fillmore ; cf. also Sag, Wasow, and Bender ).

 



. Theoretical notions

.................................................................................................................................. The basic architecture is enriched by a number of theoretical notions that conceptualize the relations in the constructionist network, the ways to build new constructions, and the properties of morphologically complex words.

.. Default inheritance

The relationship between a schema and its instantiations can be modelled in a hierarchical lexicon by means of default inheritance. As we have seen in Figure ., fully specified constructions (in this case, complex words) are daughters of a more general schema and inherit the properties of that schema. They are also linked to their base if this base is a free word. In fully regular processes, all relevant information passes on to the specified complex word (as we have seen, CxM is a full-entry model, so information is redundantly encoded at all levels of specification). In some cases, however, individual complex words may have properties not predicted by the properties of the schema they instantiate. Default inheritance is a mechanism that allows for this possibility, since under this condition the properties of higher-level constructions may be overridden whenever a more specific (possibly contradictory) property is encountered in a lower-level construction. This produces constructions with idiosyncrasies.8

Consider, for example, Dutch deverbal adjectives in ‑baar ‘‑able’ (cf. Booij a: ; Booij : –). Normally, this word formation process requires transitive verbs as bases (e.g. eetbaar ‘edible’, drinkbaar ‘drinkable’) (cf. (), which contains the form part of the construction only). However, intransitive verbs may occasionally be inserted into the ‑baar schema, such as werk ‘to work’ (werkbaar ‘feasible, practicable’), thus overriding the requirement of the higher-level schema. Despite this exceptional fact, the adjective werkbaar can still be hosted under the ‑baar schema to which it belongs.

()

[[x]Vαj baar]Aβi where V = transitive

The notion of default inheritance has the advantage of enabling us to specify the regular properties of a set of complex words, while at the same time allowing for exceptional properties without requiring a complex architecture of classes of exceptions, as would be necessary in a theory of inheritance without default override. In addition, by making the properties of abstract constructions defeasible, default inheritance allows for subschemas in which a particular property of the mother schema is systematically overridden for a certain class of words (but see Booij b for the limitations on defeasibility and override). Such subgeneralizations are rampant in morphology.
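Default inheritance can be mimicked with ordinary attribute lookup: a daughter node inherits its schema's properties unless it stores an overriding value of its own. The sketch below is our own illustration (class, method, and feature names are assumptions), not a CxM formalism.

```python
class Node:
    """A node in the hierarchical lexicon; unspecified properties are inherited."""
    def __init__(self, parent=None, **props):
        self.parent = parent
        self.props = props  # locally specified (possibly overriding) properties

    def get(self, key):
        # Walk up the hierarchy until the property is found (default inheritance)
        if key in self.props:
            return self.props[key]
        if self.parent is not None:
            return self.parent.get(key)
        raise KeyError(key)

# The Dutch -baar schema normally requires a transitive verbal base
baar_schema = Node(base_cat="V", base_valency="transitive", cat="A")
# eetbaar 'edible' inherits everything from the schema
eetbaar = Node(parent=baar_schema)
# werkbaar 'feasible' overrides the valency requirement locally
werkbaar = Node(parent=baar_schema, base_valency="intransitive")

print(eetbaar.get("base_valency"))   # → transitive (inherited)
print(werkbaar.get("base_valency"))  # → intransitive (overridden)
print(werkbaar.get("cat"))           # → A (still inherited)
```

The override is purely local: werkbaar remains a daughter of the ‑baar schema and keeps inheriting every property it does not itself respecify.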

8 For recent discussions of default inheritance see Gisborne and Hippisley ().




    

.. Connectivity and its functions

Instantiation Inheritance Links (II) connect schematic constructions to progressively more specified constructions and, as such, are an indispensable tool for CxM. However, they are not the only links available. Following Goldberg (: –) we may also find: (i) Subpart Links (IS), which are posited when ‘one construction is a proper subpart of another construction and exists independently’; (ii) Polysemy Links (IP), which ‘capture the nature of the semantic relations between a particular sense of a construction and any extensions from this sense’; and (iii) Metaphorical Extension Links (IM), which are posited when ‘two constructions are found to be related by a metaphorical mapping’.

The main function of lexical connectivity is motivation, the reduction of arbitrariness of form and meaning in the daughter words (Booij b). While inflectional and derivational schemas motivate the structure of complex lexical items, including the mapping of function to form, sense extension schemas motivate words semantically. Obviously, being a motivated linguistic sign is a gradient property which correlates with the degree to which the properties of the schema are preserved in the daughters; hence, motivation is often partial (Booij and Audring ; Audring, Booij, and Jackendoff ).

.. Unification

Unification is a combinatory mechanism that merges a construction with another construction. It can be used: (i) for filling open slots (i.e. variables) with lexically specified material; or (ii) for merging one schema into another.

The first use guarantees the possibility of forming linguistic expressions from (semi‑)specified schemas. Thus, as Figure . illustrates, a schema with a variable (here ‘x’) is unified with a novel word (e.g. zero-task ‘to deliberately do nothing’, taken from the website www.wordspy.com) whose properties match those of the variable (Vα), in order to form a complex word (zero-tasker). Once institutionalized, zero-tasker becomes a daughter of the V‑er schema, to which it is connected via an Instantiation Inheritance Link.

< [[x]Vαj er]Ni ↔ [PERSON who PREDj]i >
< [zero-task]Vαj ↔ [DO_NOTHING]j >
             |
< [zero-tasker]Ni ↔ [PERSON who DO_NOTHING]i >

Figure .. Unifying a schema with a word

The second use—merging schemas—is crucial to account for those morphological processes that apparently apply to possible but non-existing input words, as originally proposed by Booij (a: ) for English complex adjectives of the un-V-able type (e.g. unbeatable). A subset of parasynthetic verbs in Italian of the de-V-izzare type is another case in point (Masini and Iacobini ): a verb like derattizzare ‘to get rid of rats’ is problematic because it seems to presuppose a phase of the derivation in which the reversative prefix de- attaches to the well-formed but non-existing verb rattizzare ‘to provide with rats’. In order to account for these formations, we unify the two schemas for de- prefixation and ‑izzare suffixation, as illustrated in Figure ., giving rise to another

[de [x]V]V   +   [[y]N/A izzare]V
             |
  [de [[y]N/A izzare]V]V
             |
      [derattizzare]V

Figure .. Unifying a schema with another schema

semi-specified construction. The latter can then be instantiated by expressions like derattizzare without having to postulate the existence of rattizzare at any stage of the hierarchy. The same kind of analysis can be used to account for other types of processes involving well-formed but non-existing words. Crucially, merged constructions can be productive in their own right, independent of the productivity of the individual constructions they are composed of. This is another argument in favour of schemas instead of rules, since a complex rule cannot be more productive than its component rules.
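Schema merging can be sketched as composing one pattern into the variable slot of another, yielding a new semi-specified pattern that is instantiated directly. The sketch below (form side only, with function names of our own choosing) shows why the intermediate verb never has to exist.

```python
# Each schema maps a base form to a derived form (form side only, simplified)
def izzare(base: str) -> str:
    """[[y]N/A izzare]V — suffixation, e.g. ratt- -> rattizzare"""
    return base + "izzare"

def de(verb: str) -> str:
    """[de [x]V]V — reversative de- prefixation"""
    return "de" + verb

def merge(outer, inner):
    """Unify two schemas into one: the inner schema fills the outer's variable."""
    return lambda base: outer(inner(base))

# The merged schema [de [[y]N/A izzare]V]V applies in a single step, so the
# verb rattizzare never needs to be listed as a lexical item of its own.
de_izzare = merge(de, izzare)
print(de_izzare("ratt"))  # → derattizzare
```

The merged function is a construction in its own right and, as the text notes, may be productive independently of the productivity of its two component schemas.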

.. Non-compositionality and headedness

Two reasons for positing a particular construction are conventionalization and non-compositionality; therefore idiosyncrasy is an expected property of linguistic expressions, at least if we think of compositionality as the process by which the overall meaning of an expression is computed from combining the meanings of its constituting parts. In CxM, the piece of meaning that cannot be computed from the parts is often to be identified as the semantic contribution of the construction itself.9 For example, consider full reduplication in Afrikaans, which is often used to express a general meaning of ‘increase’ (Botha : –, see example (a)). This meaning cannot be derived from the constituents of the construction. Rather, it belongs to the construction itself, as emerges from (b) (cf. Booij a: –), where INCREASE is not co-indexed with any constituting part. In other words, INCREASE is a constructional property.

a. Bakke-bakke veldblomme versier die tafels.
   bowls-bowls wild_flowers decorate the tables
   ‘The tables are decorated with wild flowers by the bowlful.’

b. < [[x]Yαi [x]Yαi]Yβj ↔ [INCREASE SEMi]j >

Another instance of non-compositionality that can be reinterpreted as a holistic, constructional property is exocentricity in compounding (cf. Booij a: –). Exocentric compounds (see e.g. Bauer ) lack an internal head, whose role would be to assign the lexical category, relevant morphological features (e.g. gender), and a certain denotational meaning to the output. Yet, such compounds often display a systematic structure. A famous example

9 In this sense, one could say that constructions are compositional after all, at least when the meaning contribution of the construction as a whole is clearly identifiable: the latter plus the meaning of the parts predicts the meaning of the whole expression (cf. also Michaelis ; Booij and Masini ).




    

is Italian VN compounds (cf. also §...), whose properties can be captured by the schema in (a), with two examples in (b): ()

a. < [[x]Vαk [y]Nβi]Nγj ↔ [{AGENT|INSTRUMENT} that PREDk [SEMi]]j >

b. acchiappa-fantasmi catch-ghosts ‘ghostbuster’
   tosta-pane toast-bread ‘toaster’

Again, we see how the construction as a whole imposes properties (here the AGENT|INSTRUMENT meaning) beyond the properties of the parts. Exocentricity leads us directly to a hotly debated issue in morphology: headedness. Exocentric compounds lack a head by definition, and this is dealt with quite straightforwardly in CxM, as we have just seen. But how is the head encoded in endocentric constructions? Consider the schemas in ().

()

a. < [un [x]Aαj]Aαi ↔ [NOT SEMj]i >   unfair, unreal, . . .

b. < [[x]Aαi ness]Nβj ↔ [STATE of being SEMi]j >   sadness, craziness, . . .

Normally, for words like these, we would say that the head is on the right: it coincides with the base in (a), with the nominalizing suffix in (b). In the CxM notation, however, the head is not overtly marked.10 We can identify the rightmost constituent as the ‘head’ by looking at the features and indices within the construction: in (a) the category A and the feature bundle α are associated with both the input adjective and the output form, and the only denotational meaning we have (SEM) is co-indexed, again, with the input adjective (‘j’). In (b), however, input and output features do not match (Aα vs. Nβ), and the meaning contains a denotation (STATE) that is not co-indexed with any part of the input. The suffix has no index and no features. Therefore, headedness is represented as a constructional property, as in exocentric compounds: the information usually associated with the head (lexical category, morphosyntactic features, semantic properties) is recovered from the suffixation construction as a whole.

CxM proves useful in modelling cases where, in other accounts, we would have to posit a double or ‘split’ head. For instance, the Italian coordinate compound bar-pasticceria (lit. ‘bar-pastry_shop’) is semantically double-headed, since it identifies a place that is both a bar and a pastry shop. However, the features of the output N match bar (Nα), not pasticceria (Nβ) (cf. ()), since bar-pasticceria is masculine, like bar, not feminine like pasticceria. Thus, the compound is formally left-headed. Both properties are specified within the construction, without having to advert to the notion of a head, let alone a split head.

()

< [[bar]Nαk [pasticceria]Nβi]Nαj ↔ [OBJECT that is both SEMk and SEMi]j >

Summing up, the ‘head’ in CxM is not a structural notion, that is, there is no constituent marked as such. Rather, the relevant information is carried by the construction itself,

10 What follows is based on Fábregas and Masini (). For further discussion on headedness in CxM, see Arcodia ().


 



in the form of matches and mismatches between input and output features, captured by co-indexation. This tells us two things. First, no primacy is given to either endocentricity or exocentricity. Both are possible, and, in fact, attested (and productive) in the languages of the world. Second, CxM is able to deal with form–meaning mismatches in a straightforward fashion, by virtue of its architecture; we will come back to this issue in §..
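The idea that ‘headedness’ is read off from matches between input and output features can be sketched as a simple check over shared formal features. This is our own illustration (the helper and feature names are assumptions); it covers only the formal side, leaving the semantic co-indexation aside.

```python
def headlike_constituents(parts, whole):
    """Return the parts whose formal features match the whole construction.

    A part counts as formally 'head-like' when its category and gender
    (where specified) are identical to those of the output."""
    return [p["form"] for p in parts
            if all(p.get(f) == whole.get(f) for f in ("cat", "gender"))]

# bar-pasticceria: formally left-headed, masculine like bar
bar = {"form": "bar", "cat": "N", "gender": "m"}
pasticceria = {"form": "pasticceria", "cat": "N", "gender": "f"}
compound = {"form": "bar-pasticceria", "cat": "N", "gender": "m"}
print(headlike_constituents([bar, pasticceria], compound))  # → ['bar']

# sadness: the adjectival base does not match the nominal output,
# so the nominal features must come from the construction itself
sad = {"form": "sad", "cat": "A"}
sadness = {"form": "sadness", "cat": "N"}
print(headlike_constituents([sad], sadness))  # → []
```

When no constituent matches, as in the sadness case, the output's category and features are exactly the information that CxM attributes to the construction as a whole.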

.. Constraints

We may envisage two types of constraints in CxM: (i) constraints that operate within (partially) schematic constructions by telling us how (i.e. by which classes of items) they can be instantiated, and (ii) particular shape conditions that the instantiating material is subject to. Both types can be illustrated with the following examples from Modern Greek (cf. Ralli ). In (a) we have an AN phrasal noun (see §..); in (b) we see its corresponding (suffixed) relational adjective:

a. [Ai- Nk-]NP psixr-os polem-os ‘cold war’ trit-os kosm-os ‘third world’

> > >

b. [Ai-o-Nk--]A psixr-o-polem-ik-os ‘cold-war-like’ trit-o-kosm-ik-os ‘third-world-like’

Both (a) and (b) have a variable that accepts adjectives. This is a constraint of the first type, a    . Such constraints are reminiscent of restrictions on base selection within Word Formation Rules: affixes occur with bases belonging to a specific word class (or subclass). In addition, Ralli (: ) states that ‘in order for a [A N] construction to become a derived item, the adjectival member must be a stem. This stem accepts only one inflectional suffix which “closes” the structure. . . . in most cases, a compound marker ‑o appears between the adjectival constituent and the noun constituent.’ The very same constraint— named ‘Bare Stem Constraint’ by Ralli ()—is also found in Russian, as we will see in §... This is a constraint of the second type; we may call these kinds of constraints  . Let us take another example: the Italian suffix ‑mente, which forms manner adverbs from adjectives (e.g. lento ‘slow’ > lentamente ‘slowly’). First, this construction is subject to a constructional requirement, since the base adjective appears in its feminine form (lenta-mente, and not *lento-mente). Second, despite its great productivity, ‑mente is far from being unrestricted (Scalise ; Ricca ). It places various constraints on the base, both morphological (e.g. it cannot attach to diminutives: freddo ‘cold’ > freddino ‘coldish’ > *freddinamente lit. cold-ish-ly) and semantic (e.g. it cannot attach to adjectives denoting physical properties: brutto ‘ugly’ > *bruttamente ‘uglily’). These (and possibly other) restrictions can be encoded within the ‑mente schema (cf. ()), thus delimiting its instantiation possibilities and restraining overgeneration. ()

< [[x]Aαi mente]Advβj ↔ [in a MANNER SEMi]j >
where: Aα = ¬ DIMINUTIVE
       SEMi = ¬ PHYSICAL_PROPERTY
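The two kinds of restrictions on ‑mente can be illustrated with a small sketch. This is not part of the CxM formalism: the dictionary-based lexical entries and feature labels below are our own invention for expository purposes.

```python
# Illustrative sketch of the -mente schema as a checked template.
# Lexical entries and feature labels are hypothetical.
def mente_schema(adjective):
    """Instantiate Italian A + -mente -> manner adverb, enforcing constraints."""
    # Constructional requirement: the base appears in its feminine form.
    base = adjective["fem"]
    # Restrictions on the base (the morphological and semantic constraints above):
    if "diminutive" in adjective["features"]:
        raise ValueError(f"*{base}mente (no diminutive bases)")
    if "physical-property" in adjective["features"]:
        raise ValueError(f"*{base}mente (no physical-property adjectives)")
    return {"form": base + "mente", "meaning": f"in a {adjective['sem']} manner"}

lento = {"fem": "lenta", "features": set(), "sem": "SLOW"}
freddino = {"fem": "freddina", "features": {"diminutive"}, "sem": "COLDISH"}
print(mente_schema(lento)["form"])  # lentamente
```

A well-formed base yields the adverb, while a diminutive base is rejected, mirroring the schema's delimitation of its instantiation possibilities.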

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi




Schemas are able to express restrictions at all levels (phonological, morphological, semantic, etc.) simultaneously, since they contain all information at the same time rather than calling it up one level after the other in a sequential fashion (cf. Booij and Audring ). Moreover, independently of specific restrictions, morphological schemas are already inherently constrained by the fact that structure is associated with a given function, that is, a given meaning, which drives as well as restrains the creation of new words. This separates CxM from theories such as Distributed Morphology, where form is built independently of meaning, and meaning is associated with a given form later. Finally, as we saw in §.., the properties of a construction can be overridden by a more specific construction through default inheritance. This contributes to a creative use of language: speakers may (more or less intentionally) 'stretch the rules' in order to produce a given effect or because they simply do not feel the constraints as mandatory. One mechanism involving defeasible constraints and properties is coercion, that is, the process by which a particular form is 'forced' into an unusual interpretation by virtue of its occurring in a particular construction (e.g. Three beers, please!, where three beers is interpreted as 'three glasses/bottles of beer', by virtue of using a mass noun like beer in a plural morphological construction, and hence in a 'countable' context; cf. Hilpert : ). As Michaelis (: ) puts it: 'If a lexical item is semantically incompatible with its morphosyntactic context, the meaning of the lexical item conforms to the meaning of the structure in which it is embedded.' The fact that constructions themselves have a meaning, and that the properties of constructions may be defeasible, is crucial to explain such cases.11
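Default inheritance of this kind can be sketched as a lookup that starts at the most specific construction and falls back to its mother schema. The class and the property names are our own illustration, not a claim about how CxM implements the hierarchical lexicon.

```python
# Minimal sketch of default inheritance between constructions: properties are
# looked up on the most specific construction first, falling back to the mother.
# Construction names and property values are invented for illustration.
class Construction:
    def __init__(self, name, mother=None, **props):
        self.name, self.mother, self.props = name, mother, props

    def get(self, prop):
        node = self
        while node is not None:
            if prop in node.props:
                return node.props[prop]   # most specific value wins (override)
            node = node.mother            # otherwise inherit from the mother
        return None

nn = Construction("NN-compound", head="right", meaning="SEM-i R SEM-k")
hoofd = Construction("hoofd-N", mother=nn, meaning="MAIN SEM-i")
print(hoofd.get("head"))     # inherited from the mother: right
print(hoofd.get("meaning"))  # overridden by the subschema: MAIN SEM-i
```

The subschema keeps the inherited headedness but overrides the inherited meaning, which is exactly the defeasibility that coercion and affixoid subschemas rely on.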

. Morphological phenomena

In this section, we briefly illustrate how CxM models word formation and inflectional processes, highlighting the ways in which a constructionist approach is useful to account for linguistic data, including problematic cases. Both word formation and inflection are seen as involving schemas, whereby a schema is a 'declarative statement' (Booij : ) that characterizes a set of words formed according to a certain pattern. At the same time, the schema is the recipe for new forms of this type. In this way, schemas are a 'type of morphological knowledge [that] can be used both in language perception and in language production' (Booij : ). This means that the terms 'input' for the base of a derived or inflected word and 'output' for the complex word itself are not meant in the literal procedural sense here.

.. Word formation ... Derivation Derived words created by means of affixation are expressed as instantiations of derivational (sub)schemas. Schemas stipulate requirements and constraints on the input, as well as the properties of the output. We have already seen a number of derivational schemas and 11

For further discussion on CxM and coercion see Audring and Booij ().

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi

 



subschemas in this chapter, for example V-er, A-ness, and un-A affixation in English; A-mente suffixation in Italian; V-baar suffixation in Dutch. All these derivational processes are fairly productive. Hence, the hierarchy we expect them to develop is similar to the one proposed for English un-A adjectives (cf. Figure .): an ‘open hierarchy’ ready to host new daughter constructions under the mother schema. However, as already mentioned in §.. (with respect to the difference between constructional schemas and WFRs), schemas may also be posited when they serve as a ‘static generalization’ over a fixed set of items. In this case, we still have a hierarchy, but a ‘closed’ one, since the schema is not expected to be instantiated by more complex words. Take for instance deverbal nouns with the suffix ‑ita in Italian, whose schema can be represented as follows: () < [[x]Vαi ita]Nβj ↔ [{EVENT|RESULT} of PREDi]j > where: α = second inflectional class (-ere verbs) β = feminine gender, second inflectional class (-a/-e nouns)12 The schema states that the lexical material it takes as input needs to be a verb with the properties α. Its output is a noun with the properties β (both α and β features are encoded under ). Semantically, the verbal input contributes its lexical semantics in the form of the function PREDi, while the schema as a whole provides the complex word with an EVENT|RESULT meaning in relation to PREDi. A full notation would include, under , the stress pattern of the output, since ‑ita nouns are stressed on the first syllable, irrespective of the stress pattern of the input.13 The V-ita schema is instantiated by some ten words in Italian (according to Gaeta : )—among which: crescita ‘growth’, nascita ‘birth’, perdita ‘loss’, vendita ‘sale’, and vincita ‘(a) win’—but the pattern is unproductive and does not normally produce any new nouns. 
Thus, the schema simply expresses a generalization about existing words, but does not serve as a template for new formations (cf. also §..). Apart from canonical (i.e. base+affix) cases of derivation, CxM posits schemas and subschemas for other types of phenomena, such as conversion, subtractive morphology, and paradigmatic word formation. These are unproblematic in CxM because schemas are output-oriented and the resulting structure does not need to show isomorphism between form and meaning. Noun-to-verb conversion in English (e.g. Google > to google) can be represented as the schema in (), where there is no affix:

()

< [[x]Nαi]Vβj ↔ [EVENT related to SEMi]j >

Hypocoristics are a typical example of subtractive morphology. According to Booij (a: ), ‘there is a morphological construction schema for proper names in which the semantic representation is enriched with a semantic or pragmatic property’ and the truncation is ‘modelled as the mapping of the phonological form of the input name onto

12 Cf. Thornton ().
13 The ability to specify the stress pattern of the output illustrates an advantage of output-oriented schemas over input-oriented rules, mentioned in §...
a specific prosodic template', which in English and German, for instance, may correspond to 'a trochaic foot ending in [i]':

() a. [(x)σ(y-i)σ]Nj
   b. Camille > Cammie [English; from Lappe : ]
   c. Gabriele > Gabi [German; from Downing : ]

Modelling the relationship between the full name and the nickname requires an additional tool that expresses paradigmatic relations between constructions. This tool is useful for a great number of phenomena, among which are truncation and other instances of subtractive morphology. What is represented in () only captures a part of the schema for hypocoristics: the meaning and the phonology of the shortened name are bound to the meaning and the phonology of the full name, so we need to introduce that, too. This is achieved by linking the two constructions via a paradigmatic relation symbolized by '≈', as illustrated in (), where 'xyz' stands for a sequence of phonological material and EVAL is a semantic operator for evaluation, which encompasses endearment.

()

< [xyz]Ni ↔ [SEMi] > ≈ < [(x)σ(y-i)σ]Nj ↔ [EVAL [SEMi]]j >   where N = proper name

Patterns like () are called -  by Booij and Masini (), following Nesset’s () terminological proposal. Second-order schemas can be employed to account for a number of semantics-morphosyntax mismatches (see also §..). One example is affix substitution, exemplified in (a). The point here is that ‑ist words are semantically linked to ‑ism words, not to their simplex root: a socialist is someone who adheres to socialism, not someone who is social. Aronoff ’s () solution for these cases was positing a ‘truncation rule’ (e.g. [[alpin-ism]-ist]), whereas CxM exploits paradigmatic relations, as shown in (b). () a. alpinism alpinist socialism socialist b. < [x-ism]Ni ↔ [SEMi] >  < [x-ist]Nj ↔ [PERSON involved in SEMi]j > According to Booij (: ), the main advantage of the representation in (b) is that it has no inherent direction: we do not have to pick an underlying form from which the other is derived, the coining of new forms may go in both directions. We will get a clearer view of the power and usefulness of second-order schemas when we discuss inflection (§..).

... Compounding
Compounding can be expressed in terms of schemas, too, as we have already seen in §.. (examples () and ()), where we addressed the issue of exocentricity and headedness. Let us now take, for further exemplification, the general schema for right-headed compounding (adapted from Booij a: ), where right-headedness is represented by the fact that the rightmost constituent shares its category with the whole compound ('Y') and that the whole compound is a kind of SEMi (co-indexed with the constituent on the right):

() < [[x]Xαk [y]Yβi]Yβj ↔ [SEMi with relation R to SEMk]j >

< [[x]Xαk [y]Yβi]Yβj ↔ [SEMi with relation R to SEMk]j >
  a. < [[x]Ak [y]Ai]Aj ↔ [SEMi with relation RATT to SEMk]j > | < [dark blue]A ↔ [DARK_BLUE] >
  b. < [[x]Xk [y]Ni]Nj ↔ [SEMi with relation R to SEMk]j >
     < [[x]Nk [y]Ni]Nj ↔ [SEMi with relation RSUB to SEMk]j > | < [bookshop]N ↔ [BOOKSHOP] >
     < [[x]Ak [y]Ni]Nj ↔ [SEMi with relation RATT to SEMk]j > | < [blackboard]N ↔ [BLACKBOARD] >
  c. < [[x]Nk [y]Vi]Vj ↔ [SEMi with relation RSUB to SEMk]j > | < [headhunt]V ↔ [HEADHUNT] >

FIGURE .. Hierarchy for compounding in English (partial)

This schema is extremely general and may be instantiated by a number of subschemas depending on the options allowed in a given language. For example, English displays (among others) the subschemas shown in Figure .. The presence of these subschemas allows for the specification of properties that only apply to a subset of forms, such as input/output categories, or the type of semantic relation R14 (if relevant). Productivity is also an issue: in English, nominal compounding (subschema 'b'), which is in turn instantiated by a number of subschemas, is very productive, whereas verbal compounding (subschema 'c') is not (Lieber b). Nonetheless, we still have some instances of verbal compounds, such as to babysit or to headhunt, which are said to be backformations (Booij : ). On the one hand, we must state that this little portion of English compounding is not productive (the hierarchy developed by this subschema is 'closed'; cf. §...), unlike the rest, but at the same time we should encode the regular properties of these expressions, for example the fact that they are right-headed and conform to the general schema in (). Along the same lines, the presence of subschema 'b' does not imply that all XN compounds are necessarily right-headed in English: it only means that we have an abstract construction that can productively form new XN right-headed compounds with a given semantics. Instances that do not comply with this generalization (e.g. pickpocket) will be handled by default inheritance, that is, by overriding the general properties in specific instances. The same tools can account for reduplication, as we saw in §.. (example ()). In particular, subschemas capture specific reduplication patterns that depend on the nature of the reduplicated element, as in Afrikaans (Botha : –). For instance, when a (plural) noun is reduplicated, we obtain the meaning 'high number of N', while a reduplicated A means 'very A'. The general reduplication schema and both subschemas are given in Figure . (cf. Booij a: ).

< [[x]Yαi [x]Yαi]Yβj ↔ [INCREASE SEMi]j >
  < [[x]N(=pl)i [x]N(=pl)i]Nj ↔ [HIGH NUMBER OF SEMi]j > | < [bottels bottels]N ↔ [HIGH NUMBER OF BOTTLES] >
  < [[x]Aαi [x]Aαi]Aβj ↔ [VERY SEMi]j > | < [dik dik]A ↔ [VERY THICK] >

FIGURE .. Hierarchy for reduplication in Afrikaans (partial)

14 Abbreviations: ATT = attributive; SUB = subordinate; COOR = coordinate (cf. Scalise and Bisetto ).
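The category-sensitive reduplication meanings just described can be sketched as a function whose semantic output depends on the category of the reduplicated element. The gloss parameter and return format are our own expository choices.

```python
# Sketch of the Afrikaans reduplication subschemas: the same doubling operation
# yields different meanings depending on the category of the base.
def reduplicate(word, category, sem):
    """Double the base; N -> 'high number of N', A -> 'very A'."""
    form = f"{word} {word}"
    label = "HIGH NUMBER OF" if category == "N" else "VERY"
    return form, f"{label} {sem}"

print(reduplicate("bottels", "N", "BOTTLES"))  # ('bottels bottels', 'HIGH NUMBER OF BOTTLES')
print(reduplicate("dik", "A", "THICK"))        # ('dik dik', 'VERY THICK')
```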

.. Multi-word expressions
Along with derivation and compounding, CxM accounts for other phenomena that are not strictly morphological, but straddle the boundary between morphology and syntax. Indeed, CxM originated from work on such phenomena, in particular separable complex verbs in Dutch (cf. ()), which, in the early days of the theory, were treated as 'constructional idioms' (Booij a, following Jackendoff : ) and instances of 'periphrastic word formation' (Booij b).

() < [[door]Pαi [x]Vβj]V'k ↔ [CONTINUE SEMj]k >   e.g. dóoreten 'to go on eating'

The interesting property of these constructions is that they cannot be considered 'words' in the proper sense, since they are separable under certain conditions (thus breaching Lexical Integrity), but at the same time they have the same 'naming' function as complex lexemes (i.e. they lexicalize complex concepts) and happen to have a unitary, conventionalized semantics. The existence of such items (with phrase-like form and lexeme-like function) would be regarded as a special case or even an anomaly in a strictly modular approach to the grammar, but is easily accounted for in a constructionist framework, where lexical items larger than morphological words can be analysed as constructions in which a given form is matched with a given meaning.15

Verb–particle constructions are just one type of so-called multi-word expressions16 (MWEs), and more specifically of phrasal lexemes (Booij a, a; Masini , ), which may belong to different lexical categories. In the nominal domain, we have a variety of phrasal nouns/names across languages, such as those exemplified below from Russian (a) (Masini and Benigni ) and Italian (b) (Masini ).

() a. železnaja doroga 'railway' (lit. ironADJ road) [Russian]
    b. casa di cura 'nursing home' (lit. house of treatment) [Italian]

Although they look like noun phrases, these items display some degree of internal cohesion and fixedness (which may vary across types) that keeps them apart from free phrases. For instance, they resist interruption (e.g. casa di cura rinomata 'renowned nursing home' vs. *casa rinomata di cura lit. home renowned for treatment) and are paradigmatically fixed: internal constituents cannot be substituted with a (near‑)synonym (e.g. *železnyj put' lit. ironADJ way). Crucially, expressions such as those in () are not just the result of diachronic lexicalization, but are formed productively in Russian and Italian: in CxM, this can be captured by positing an abstract schema that encodes the shared properties of these expressions, as sketchily illustrated in (). Note that the function part of these schemas is very similar to that of compounds (§...).

() a. < [[x]Aαk [y]Nβi]N'βj ↔ [SEMi with property SEMk]j > [Russian AN]
    b. < [[x]Nαk [[z]P [y]Nβi]]N'αj ↔ [SEMk with relation R to SEMi]j > [Italian NPN]

15 Verb–particle constructions have been analysed as constructions in various other languages, including English (e.g. look up, take off; cf. Cappelle ) and Italian (e.g. andare avanti 'to go on', fare fuori 'to kill, eliminate', lit. make out; cf. Masini , ; Iacobini and Masini ).
16 MWE is an umbrella term encompassing a large set of objects, such as idioms, collocations, complex nominals, complex predicates, etc. (cf., e.g., Baldwin and Kim ; Hüning and Schlücker ).

Since phrasal lexemes are constructions, exactly like words and word formation schemas, we expect them to interact with the latter. And indeed, they do. For instance, phrasal lexemes may 'feed' word formation by acting as a base for the creation of complex (morphological) words.17 Verb–particle constructions are a case in point, as illustrated by examples such as English break-in-able (from to break in, cf. Miller : ) and Dutch aan-vall-er 'attacker' (from aan-vallen 'to attack', lit. at-fall, cf. Booij : ). Russian phrasal nouns may also undergo word formation (Masini and Benigni : ). For instance, železnaja doroga 'railway' (cf. (a)) can be turned into the relational adjective železn-o-dorož-nyj 'related to railways' (lit. ironADJ_STEM-o-roadSTEM-INFL) (cf. also example ()). Semantically, železnaja doroga is the base of železnodorožnyj. Formally, a number of changes occur (as a consequence of the Bare Stem Constraint proposed by Ralli  for Modern Greek, cf. §..), namely: the adjective železnaja turns into a stem and a linking vowel ‑o- shows up to connect this stem to the subsequent noun. Another example of word–MWE interaction—and of form–meaning mismatch—is a classical issue in morphological theory, namely bracketing paradoxes: see Spencer's () famous example generative grammar > generative grammarian, where ‑ian formally attaches to grammar but has scope over the whole phrase generative grammar. In Italian things are even more complicated, since the affix attaches to the left (head) constituent, thus splitting the phrase in two: flauto barocco 'baroque flute' > flautista barocco 'baroque flutist'. Booij and Masini () suggest handling these cases with second-order schemas (§...), which establish a paradigmatic relationship between the two phrasal noun constructions. For the Italian example, the following (simplified) notation may be proposed (cf. Masini and Iacobini ):

()

17 Another way in which phrasal lexemes and morphological words interact is competition in the expression of complex meanings: apparently, stored phrasal lexemes can block word formation and vice versa. For a discussion, see Booij (a); Hüning and Schlücker (); Masini (forthcoming).

< [[x]Nk [y]Ai]N'j ↔ [SEMk with property SEMi]j > ≈ < [[[x]Nk ista]Nw [y]Ai]Nz ↔ [SEMw that has to do with SEMj]z >

Although the affix formally attaches to [x]Nk, the cross-reference between the two schemas obtained via indices—which allows the semantics of the second schema to refer to the meaning of the first schema (SEMj)—guarantees that the correct semantics is computed out of the phrasal nouns, despite the form–meaning mismatch.
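The mismatch between the formal attachment site and the semantic scope can be sketched as follows. The function, the orthographic treatment of the final vowel, and the meaning paraphrase are our own illustrative assumptions, not the authors' analysis.

```python
# Sketch of the Italian bracketing paradox: -ista attaches formally to the head
# noun, but the meaning refers to the whole phrasal noun.
def agent_noun(phrasal_noun):
    """Map an Italian N-A phrasal noun onto its -ista agent counterpart."""
    noun, adj = phrasal_noun.split()
    derived = noun[:-1] + "ista"   # formal attachment: flauto -> flautista
    meaning = f"person who has to do with the {noun} {adj}"  # scope: whole phrase
    return f"{derived} {adj}", meaning

form, sem = agent_noun("flauto barocco")
print(form)  # flautista barocco
```

The form is built on the head noun alone, while the meaning string refers to both constituents, mirroring the cross-reference via SEMj in the second-order schema.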

.. Inflection
For the treatment of inflection, CxM proposes a fruitful combination of a constructionist approach and the Word-and-Paradigm (WP) model advocated by Blevins () and Ackerman, Blevins, and Malouf () (cf. Blevins, Ackerman, and Malouf, Chapter  this volume). From the perspective of CxM, 'the morphosyntactic properties of each word form in a paradigm are best considered as constructional properties, as properties of the word form as a whole' (Booij : ). Consequently, inflectional affixes are represented as parts of words rather than in isolation. The construction in (a) illustrates this with the Italian word gatto 'cat', which is related to the more abstract schema in (b). For the sake of clarity we notate phonology and morphosyntax separately.

()

a. < [gatt-o]ωi ↔ [N]i ↔ SING[CATi] >
b. < [x-o]ωi ↔ [N]i ↔ SING[SEMi] >

In addition, CxM assumes listed word forms, which accommodates the WP notion of principal parts, that is, fully inflected forms that identify a word as a member of a particular paradigm. The necessity for such forms is most apparent in languages that have inflectional classes, as class membership often cannot be computed from other properties such as stem shape or semantics. As a consequence, each word needs to be listed with sufficient forms to identify its allomorphy pattern. Blevins () assumes that a combination of principal parts and stored exemplary paradigms allows for all other inflectional forms of a word to be computed. This is a desirable outcome, in particular for languages with rich inflectional systems where a full listing of all forms is not realistic. In CxM, the role of the exemplary paradigms is played by schemas, and paradigms are construed as sets of relations between them. For example, the so-called first inflectional class in Italian (‑o/‑i nouns, cf. §..) can be represented as in () (from Booij a: ).

()

< [x-o]ωi ↔ [N]i ↔ SING[SEMi] > ≈ < [x-i]ωi ↔ [N]i ↔ PL[SEMi] >

As in previous examples, the symbol '≈' indicates the relatedness between two schemas: one for singular nouns ending in /o/ (e.g. gatt-o 'cat-SG') and one for plural nouns ending in /i/ (e.g. gatt-i 'cat-PL'). The co-indexation captures the fact that the nouns on both sides of the second-order schema are the same. Such relations are not necessarily binary: all members of a paradigm are seen as linked in this fashion. Again, the notation is true to the model's word-basedness: inflected forms are represented not primarily as concatenations of stems and affixes, but as a paradigmatic, replacive change between one word form and the other. This also enables the model to express non-concatenative relations such as the German umlaut plural () or indeed forms involving both concatenation and umlaut ().

()

< [Vater]ωi ↔ [N]i ↔ SING[FATHERi] > ≈ < [Väter]ωi ↔ [N]i ↔ PL[FATHERi] >

() < [Buch]ωi ↔ [N]i ↔ SING[BOOKi] > ≈ < [Büch-er]ωi ↔ [N]i ↔ PL[BOOKi] >

The general setup has the advantage of accommodating the fact that certain forms have certain functions only in contrast to other forms and their functions. For example, the Italian suffix ‑e is a plural suffix in (a) but a singular suffix in (b). This means that form–function pairings are relational: ‑e is plural when ‑a is singular (strad-a 'street-SG', strad-e 'street-PL'), and singular when ‑i is plural (student-e 'student-SG', student-i 'student-PL'). Simpler systems can allow for simpler statements in which a form can be functionally identified in isolation (e.g. '‑s is plural'), but the theory has to anticipate more complex relations.

()

a. < [x-a]ωi ↔ [N]i ↔ SING[SEMi] > ≈ < [x-e]ωi ↔ [N]i ↔ PL[SEMi] >
b. < [x-e]ωi ↔ [N]i ↔ SING[SEMi] > ≈ < [x-i]ωi ↔ [N]i ↔ PL[SEMi] >

An individual lexeme is linked to its paradigm by means of stored inflected forms. A fully functional system involves storing sufficient or sufficiently informative instances to situate each lexeme unambiguously in the grid of paradigmatic relations.18 As Blevins () points out, an advantage of such an architecture is that it does not require diacritic markers indicating inflectional class; the relational architecture takes care of the affiliations. Since the stored forms in a paradigm are constructions and constructions can be morphological as well as phrasal, the model requires no special machinery to accommodate periphrasis, that is, paradigm cells filled by a combination of words rather than a single inflected form (cf. also Sadler and Spencer ; Ackerman and Stump ; Chumakina and Corbett ). Also, the central role of listedness provides a natural theoretical space for suppletion, which is nothing but a listed form in a paradigm cell. The only major difference between a suppletive and a regular form is that the regular form can be stored while the suppletive form has to be stored. Moreover, a higher degree of suppletiveness implies a lower degree of motivation (see §..).

18 In reality, speakers may have occasional gaps in their lexical inventory, resulting in uncertainty about what form to use, in which case they may resort to a common or default pattern; this is what drives the regularization of infrequent irregular words.
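The idea that a stored principal part plus a set of replacive paradigmatic patterns suffices to compute the remaining forms can be sketched as follows. The class labels, lexicon, and mappings are toy illustrations, including a non-concatenative umlaut mapping alongside suffix replacement.

```python
# Hedged sketch: inflectional classes as replacive singular-to-plural mappings;
# a listed singular (principal part) plus class membership yields the plural.
# Data and class names are illustrative only.
CLASSES = {
    "o/i":    lambda sg: sg[:-1] + "i",              # gatto -> gatti
    "a/e":    lambda sg: sg[:-1] + "e",              # strada -> strade
    "umlaut": lambda sg: sg.replace("a", "ä", 1),    # Vater -> Väter (non-concatenative)
}
# Class membership is listed, not computed from stem shape or semantics:
LEXICON = {"gatto": "o/i", "strada": "a/e", "Vater": "umlaut"}

def plural(word):
    """Compute the plural cell from a stored singular via its paradigm class."""
    return CLASSES[LEXICON[word]](word)

print(plural("gatto"))  # gatti
print(plural("Vater"))  # Väter
```

Note that the mappings operate on whole word forms rather than on stem-plus-affix concatenations, in line with the word-based, replacive character of the second-order schemas.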

. Selected issues

.. Language change
CxG is well suited to model language change, as testified by recent contributions such as Bergs and Diewald (), Hilpert (), Traugott and Trousdale (), and Barðdal et al. (), and CxM is consonant with it. Typical grammaticalization or constructionalization issues have also been addressed within CxM, such as the rise of affixes from compound constituents (see e.g. the emergence of the English suffixes ‑hood, ‑dom, and ‑ship; cf. Trips ). However, the development into a full-fledged affix is just one (extreme) possibility; other (intermediate) developments can occur, leading to affixoids. For instance, the form hoofd in Modern Dutch is used with both the meaning 'head' and the meaning 'main' (cf. Booij and Hüning ). The latter use is bound to the occurrence of hoofd as a first constituent of NN compounds (as in hoofdpersoon 'main character'). This situation can be described in CxM by stating that Dutch developed a subschema for NN compounds which presents two properties that override the abstract schema for NN compounding, as illustrated in (): the first constituent is a lexically specified affixoid (hoofd), and the meaning contains the element 'main'. Note that the formation of compounds with hoofd in its literal meaning (e.g. hoofdpijn 'headache') is not ruled out: such a compound directly instantiates the general schema for NN compounds.

()
< [[x]Nk [y]Ni]Nj ↔ [SEMi with relation R to SEMk]j >
  < [hoofd-pijn]N ↔ [HEAD-ACHE] >
  < [[hoofd]Nk [y]Ni]Nj ↔ [MAIN SEMi]j >
    < [hoofd-ingang]N ↔ [MAIN ENTRANCE] >
    < [hoofd-persoon]N ↔ [MAIN CHARACTER] >

Hence, whenever a systematic subpattern with an unpredictable, new meaning emerges, CxM can make use of subschemas which state the relevant generalizations and specify the properties that override those on higher levels in the hierarchical lexicon. Such subschemas thus contribute to the (re‑)motivation of complex words.

.. Productivity
A general problem that haunts many theories is productivity, and CxM is no exception. In CxM, new words are created by instantiating a (semi-)abstract construction via unification (§..) with one (or more) words. Obviously, not all (semi-)abstract constructions have the same degree of productivity, that is, not all of them are equally instantiated or likely to be instantiated (depending on a number of factors, among which are constraints on variables, base availability, communicative needs, competing schemas, etc.). An advantage of the machinery of CxM (i.e. hierarchies and default inheritance) is that it can have related schemas endowed with different degrees of productivity. In particular, it allows an account of so-called morphological niches (Aronoff a). Let us take the well-known case of the rival affixes ‑ness and ‑ity in English: Aronoff and colleagues (cf. Aronoff and Anshen ; Anshen and Aronoff ) showed that ‑ness is more productive than ‑ity overall, but that ‑ity is preferred with certain types of bases, such as those ending with the suffixes ‑al (duality), ‑i/able (feasibility), and ‑ic (telicity), which therefore constitute a productive niche. The productivity of ‑ity is therefore enhanced with this subset of bases: this might justify different subschemas of the ‑ity schema which have a stricter constraint on possible bases but a higher productivity index. Note that the mere existence of a (semi-)abstract schema does not automatically imply that the schema is productive: schemas may simply serve as static generalizations over sets of items, thus governing 'closed' hierarchies. Remember the case of the Italian suffix ‑ita (cf. §..., example ()), which is responsible for a small set of stored deverbal nouns (with coherent properties), but does not create new words. In this case, the reason why the schema is not productive may be that the base verb belongs to the second inflectional class (‑ere verbs), which is smaller and not productive (hence, we see a mix of constraints and of base availability). Finally, another important point is that, in CxM, idiomaticity and productivity do not exclude each other, but go hand in hand in a number of constructions.
In other words, productivity does not necessarily imply compositionality, which is the key concept behind the notion of ‘constructional idioms’ (cf. Jackendoff ; Booij a). For instance, in Dutch NN/NA compounds, the left (modifier) constituent may develop a more abstract meaning of ‘intensification’. See (a) for examples with the noun reuze (from reus ‘giant’ plus linking element ‑e) followed by an adjective. ()

a. reuze+A: reuze-aardig ‘very kind’, reuze-leuk ‘very nice’, etc. b. < [[reuze]Nk [y]Ai]Aj ↔ [VERY SEMi]j >

The overall semantics is not strictly compositional, but the construction is productive (not unlike the case of hoofd in ()). Hence, we may posit a subschema like (b), where the first constituent is idiomatic and lexically specified (reuze) and the second is a variable.
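The niche effect described above for the rival affixes ‑ness and ‑ity can be sketched as a toy selection heuristic: a subschema with a stricter base constraint wins in its niche, with ‑ness as the more productive default. This is our own simplification (with rough orthographic adjustments), not a claim about the authors' model.

```python
# Toy sketch of rival affixes and productive niches: -ity subschemas apply to
# bases ending in -al, -ble, -ic; -ness is the elsewhere (default) schema.
def nominalize(adj):
    """Pick a nominalizing suffix for an English adjective (illustrative only)."""
    if adj.endswith("ble"):
        return adj[:-2] + "ility"   # feasible -> feasibility (niche subschema)
    if adj.endswith(("al", "ic")):
        return adj + "ity"          # dual -> duality, telic -> telicity
    return adj + "ness"             # default: the more productive -ness schema

print(nominalize("dual"))      # duality
print(nominalize("feasible"))  # feasibility
print(nominalize("kind"))      # kindness
```

The stricter the base constraint, the narrower but more reliable the subschema, which is the intuition behind treating niches as subschemas with higher local productivity.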

. Evaluation

In our view, CxM has a number of strong points. First of all, it is related to a wider framework, CxG, that is committed to exploring all aspects of language (including pragmatics, discourse factors, etc.) and to fruitfully merging 'internal' formal and semantic analyses with more 'external' factors like usage, processing, acquisition, language change, language evolution, and cognition in general, in accordance with what Jackendoff (b) calls the 'graceful integration' criterion.


Second, it allows us to account for both regular and irregular (or subregular) linguistic facts, thus embracing a ‘wide’ view of grammar that includes what Fillmore, Kay, and O’Connor (: ) and Pawley and Syder (: ) called ‘mini-grammars’. A cline of regularity/idiosyncrasy of linguistic structures is implied by this view, replacing the lexicon vs. grammar dichotomy. Third, whereas ‘in-between’ phenomena (MWEs, periphrases, affixoids, etc.) are not easily dealt with in strictly modular approaches to the grammar, they are straightforwardly accounted for in CxM. In fact, such cases are expected to occur, since no sequentiality is implied between the ‘modules of the grammar’ (which are embedded into the constructions in the form of simultaneously available levels of structure) and compositionality is not a requirement. Therefore, ‘syntax’ or ‘morphology’ as levels of analysis are rather an epiphenomenon of tendencies emerging from a much more complex and integrated system. This may seem to be in contradiction to the very existence of a Construction Morphology. However, when CxM claims that there are items, properties, and regularities specific to the level of the word (cf. §.), it is stating the epistemological validity of morphology as a domain of linguistic investigation with relative autonomy (albeit with fuzzy boundaries), not its ontological existence as a separate component of the grammar sequentially related to other components. In other words, what CxM claims is that ‘there are specifically morphological generalizations . . . that cannot be reduced to either syntax or phonology’ (Booij a: ). The difference between morphology and syntax lies in: (i) the kind of constructions they typically create (in terms of both form and meaning), including the kind of input units they use (e.g. 
stems or linking vowels are building blocks of morphological constructions but not of phrasal constructions) and (ii) the kind of hierarchies and relations they are typically found in. Another strong point of the theory is that it is inclusive in a number of senses. It accommodates both concatenative and non-concatenative (e.g. prosodic, templatic, subtractive) morphology, including paradigmatic word formation. In addition, it makes a point of handling both word formation and inflection (making an explicit connection to an abstractive Word-and-Paradigm kind of model), although, in general, it is undeniable that (so far) the theory has been better developed for word formation. CxM also has some weak points, which, in our opinion, require attention and additional work. Most, perhaps all, are due to the fact that the model is still fairly young. A first issue is the formalism: in particular, the representations of both meaning and phonological features need to be refined, and the features and primitives to be used within each layer of the constructions should be stated in more explicit detail (an attempt has been made in the context of Relational Morphology, see Jackendoff and Audring, Chapter  this volume, and Jackendoff and Audring forthcoming). The meaning level is especially in need of more precise tools of representation (cf. Booij and Masini ): which formalism should be used for semantic, pragmatic, and discourse-functional information? The field of semantics offers a number of alternatives that seem to be compatible with CxM, starting with Frame Semantics (which is the leading option in CxG) and Conceptual Semantics (Jackendoff a), but other solutions may be worth exploring. Some aspects that are traditionally part of morphology as a research domain, such as agreement, have not been tackled (yet) within CxM. The treatment of inflection in general needs to be refined, together with the role and status of paradigms in the theory.
Also, the competition between constructions should be explored more thoroughly: since

morphology and syntax are taken as expressions of the same system, a certain degree of competition is expected to occur between morphological constructions and phrasal constructions (including MWEs). Another problematic aspect, though not only for this theory, is productivity. On the one hand, we need to specify how unification is constrained, thus preventing overgeneration. On the other hand, we need to specify how productivity is encoded within constructions (is it a by-product of formal and semantic constraints or do we need special principles?). The treatment of default override is also in need of more research, since it raises the question of how to make a principled distinction between absolute and defeasible properties, and how to model the override of properties with precision. Finally, a necessary next step would be to focus on intralinguistic variation—especially given the fact that CxM is an exemplar-based theory, which, as such, does not necessarily suggest a unified outcome of language acquisition—and cross-linguistic variation, which is currently given prominence in CxG too. What CxM can tell us about the extent to which natural languages may (or may not) vary in the formation, classification, and use of words is definitely a question that requires an answer. Along these lines, a desideratum for CxM would be to inquire into the nature of lexical categories, whose role—and even existence— is a matter of debate in theoretical linguistics: some models (e.g. Distributed Morphology) do not make use of lexical categories at all, and even some branches of CxG, such as Croft’s () Radical Construction Grammar, argue for the untenability of a crosslinguistically valid concept of ‘verb’, ‘noun’, etc.19 In principle, CxM argues for the usefulness of lexical categories in linguistic representation, at least at the language-specific level, but more research is needed to develop a full theory of lexical categories compatible with constructionist tenets.

Further reading

The basic ideas of CxM are contained in Booij’s (a) monograph and various outline articles (Booij , , , a). An online bibliography is maintained by Geert Booij, Jenny Audring, and Francesca Masini at: http://www.lilec.it/cxm.

A We are grateful to Geert Booij and Ray Jackendoff for inspiration and advice.

19 See Haspelmath () for a similar stance, although he does not deny the existence of language-specific word classes.


  ......................................................................................................................

      ......................................................................................................................

    

Introduction

.................................................................................................................................. The Parallel Architecture (PA) (Jackendoff , , b) is a framework for understanding the organization of language and its place in the larger ecology of the human mind. It has met with considerable success in accounting for semantics, syntax, and their interaction (Jackendoff , ; Culicover and Jackendoff ). The present chapter sketches a morphological theory called Relational Morphology, based on the premises of the PA and drawing on the closely related approach of Construction Morphology (Booij a; Masini and Audring, Chapter  this volume). A fuller exposition appears in Jackendoff and Audring (forthcoming). A primary commitment of the PA is its thoroughgoing mentalism. The goal is to encode a speaker’s knowledge of language in a fashion that not only accounts for linguistic structure, but that also bears meaningfully on psycholinguistic concerns such as the structure of memory, the processes of language comprehension and production, and the character of language acquisition. The theory aspires to be accountable to all the facts, and not to be limited by a competence/performance or core/periphery distinction. Within morphology, this means that the theory must (a) encompass patterns of inflection, derivation, and compounding, from fully productive to incidental; (b) interact naturally with theories of syntax, semantics, and phonology; and (c) pay attention to issues in lexical processing and language acquisition. PA interprets the term ‘knowledge of language’ very literally—very psycholinguistically. ‘Knowledge’ implies something stored in memory. From this perspective, the fundamental questions of linguistic theory can be formulated as:

• What linguistic units does one store in memory, and in what form?
• How are these units combined online to form novel utterances?
• How are these units acquired?

Modern linguistics in the broad generative tradition has for the most part stressed the second of these questions, von Humboldt’s often-cited ‘infinite use of finite means’ ( []: ). In particular, many approaches to morphological theory have been couched in terms of how to build up morphologically complex words from smaller pieces. Nevertheless, it is hardly news that many morphologically complex words are semantically and/or phonologically idiosyncratic, making it necessary to store them (or at least parts of them) in some form or another. For instance, since the musical performance reading of recital cannot be derived from the meaning of the verb recite, some version of it must be stored. Nevertheless, the relationship between recite, recital, and the affix ‑al is significant, and experimental results (e.g. Schreuder, Burani, and Baayen ; Diependaele, Sandra, and Grainger ) demonstrate that such relationships play a role in language processing. We conclude that morphological theory has to concern itself not just with the active generation of forms, but also with codifying the static relations among words and their constituents in memory. This is why we call this approach Relational Morphology (RM).

The place of words and morphology in the Parallel Architecture

.................................................................................................................................. A basic tenet of the Parallel Architecture is that linguistic structure is determined by three independent systems of representation—phonology, syntax, and semantics—and by the linkages among them. This is not a new idea: similar conceptions appear in Stratificational Grammar (Lamb ), Lexical-Functional Grammar (Bresnan a, a), Autolexical Syntax (Sadock ), Role and Reference Grammar (Van Valin and LaPolla ), and others. The upshot is an architecture like Figure .. A well-formed sentence has well-formed structures in each of the three domains, plus well-formed links among the structures.

[Figure .: The Parallel Architecture. Phonological structures, syntactic structures, and semantic structures, linked pairwise by interfaces.]

How do words fit into Figure .? Within the Parallel Architecture, a word can be thought of as a small-scale interface rule, linking pieces of semantic, syntactic, and phonological structure, as in Figure ..

[Figure .: A word in the Parallel Architecture. Phonology: /kæt/; Syntax: n; Semantics: [CAT].]

Figure . is a stereotypical lexical item: a word with structure in all three components, together making up a Saussurean sign. However, the lexicon also contains items that lack one or more of the levels. For instance, hello, gosh, and oops arguably have no syntactic category: they can serve as full utterances and they combine only paratactically.

[Figure .: Morphology in the Parallel Architecture. A 2 × 3 matrix: phrasal phonology, phrasal syntax, phrasal semantics (top row); word phonology, morphosyntax, word semantics (bottom row).]

Other
words, such as epenthetic it, complementizer that, and do-support do, are meaningless and serve only as grammatical ‘glue’ (and hence are not Saussurean signs). Idioms such as rock the boat (‘make trouble’) are another sort of nonstereotypical lexical item: they have internal syntactic structure but noncompositional semantics (and hence within this idiom, rock and boat are not signs).1

Where is morphology in Figure .? Traditional grammar treats morphology as a component of language distinct from phonology, syntax, and semantics. The Parallel Architecture suggests a different alignment. Instead of thinking of language as divided into four domains—phonology, morphology, syntax, and semantics—one should think of the architecture as a 2 × 3 matrix of components, as in Figure .. Here, the grammar of words runs in parallel with the grammar of phrases, each involving phonological, syntactic, and semantic levels, with interfaces running between them. (Such an arrangement is prefigured in Bach ; Ackema and Neeleman ; van der Hulst .)

To understand these components and their relations, first consider morphosyntax—the internal syntactic structure of words. Derivational morphosyntax encodes the internal structure of complex words and the effects of word formation on syntactic category. Inflectional morphosyntax stipulates a language’s dimensions of inflection, creating an n-dimensional array of syntactic categories and their associated inflectional features, each with a range of values, for example gender, number, and case for nouns; tense, person, and number for verbs. In a syntactic structure, morphosyntax interfaces with phrasal syntax at the level of maximal X0s, which constitute the largest morphological entities and the smallest syntactic ones. This level includes inflectional features, which are visible to both morphology and phrasal syntax.
Aside from X0 categories, morphosyntax and phrasal syntax do not share categories: affixes are found in morphosyntax but not phrasal syntax, while phrasal syntax has categories like NP, VP, and S, which with certain exceptions2 play no role in morphosyntax.

1 Here PA diverges from the most popular versions of Construction Grammar (e.g. Goldberg , ; Croft ; Boas and Sag ), in which every construction must be a sign, linking form (phonology and syntax) and function (semantics). In contrast, PA countenances meaning-free lexical items as well as meaningful ones. See Jackendoff  for discussion.
2 Exceptions include, for instance, the underlined parts of compounds such as smoked pork shoulder boiled dinner, health and welfare fund, and ‘I have a dream’ speech. Selkirk () has examples like will-o’-the-wisp and ne’er-do-well; Di Sciullo and Williams () have French trompe l’oeil, boit-sans-soif, and English matter-of-fact-ly and stick-to-it-ive-ness. These can be treated as word-internal phrasal

Turning to the other components of Figure .: word phonology concerns the phonological shape of words, including such matters as phonotactics, word stress, and vowel harmony. Phrasal phonology concerns phenomena such as phrasal stress and intonation contours. The two intersect at the level of the phonological word. However, the phonological phenomena of greatest interest to morphological theory are the interface principles that link word phonology to morphosyntactic structure. Morphosyntactic constituents stereotypically map one-to-one to phonological constituents, such that each piece of morphosyntax has a corresponding string of sounds. This matches a conventional item-and-arrangement conception of morphology. All the difficulties with IA morphology—and all the fun and danger—lie in noncanonical mappings between morphosyntactic features and phonology, in which the phonological form of inflected or derived words cannot be split cleanly into identifiable morphemes.

On the semantic side of Figure ., similar considerations obtain. ‘Word semantics’ specifies the possible semantic forms that words can take—the range of ‘lexical conceptual structures’, in the sense of Jackendoff ().3 What might be called ‘morphosemantics’ is the interface mapping between morphosyntactic structure and word meanings. Morphological patterns can be used to express a heterogeneous collection of semantic functions such as causation, intention, time, aspect, evidentiality, and social formality, but also on occasion semantic factors farther afield (see Talmy ; Bauer ; and for compounding, Jackendoff a). The morphosyntax–semantics interface is also responsible for the effects of morphological combination on argument structure. For example, event or process nominals such as abandonment preserve the argument structure of the corresponding verb, while agentive nominals like baker and result nominals like inscription denote one of the semantic arguments of the corresponding verb.
Finally, word semantics canonically enters into phrasal semantics through the principle of compositionality. But there are also many noncanonical effects such as coercion, through which the meaning of a sentence can be more than the simple combination of its words, so-called ‘enriched composition’ (Jackendoff : –; Audring and Booij ). Overall, then, Relational Morphology sees the scope of morphology proper as encompassing morphosyntax plus its interfaces to phrasal syntax, word phonology, and word semantics. Of course, in order to understand an interface, one must understand both ends of what the interface connects. Hence the other components of the grammar cannot be neglected.
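The idea that a word is a small-scale interface rule, a linking of (possibly absent) phonological, syntactic, and semantic structures, can be glossed in code. The following Python sketch is our own simplified illustration, not part of the PA formalism; the class name, string representations, and example items are invented for exposition.

```python
# Illustrative sketch only: a lexical item as a triple of phonology, syntax,
# and semantics, any of which may be absent (cf. hello, do-support do).
from dataclasses import dataclass
from typing import Optional

@dataclass
class LexicalItem:
    phonology: Optional[str]   # e.g. "/kæt/"
    syntax: Optional[str]      # e.g. "N"; None for hello, oops
    semantics: Optional[str]   # e.g. "CAT"; None for do-support do

    def is_sign(self) -> bool:
        """A Saussurean sign links pronounceable form with meaning."""
        return self.phonology is not None and self.semantics is not None

cat = LexicalItem("/kæt/", "N", "CAT")             # stereotypical word
hello = LexicalItem("/hɛloʊ/", None, "GREETING")   # no syntactic category
do_support = LexicalItem("/du/", "V", None)        # meaningless grammatical glue

assert cat.is_sign() and hello.is_sign()
assert not do_support.is_sign()
```

The point of the sketch is simply that items lacking a level (hello, do-support do) fit the same data structure as full signs like cat, mirroring the text's claim that the lexicon is not restricted to stereotypical form–meaning pairings.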

constituents, e.g. [Adv [NP matter of fact] [aff ly]], violating the canonical relation of morphosyntax and phrasal syntax. See Meibauer () for some discussion.

3 Lieber (Chapter  this volume) characterizes lexical meaning as divided into a ‘semantic skeleton’, the part of meaning relevant to syntax, contrasting with what she calls the ‘semantic body’, “a less formal part that contains those encyclopedic elements of meaning that are of no syntactic relevance” (§..). One of the major arguments of Jackendoff (; see also Jackendoff , §§., .) is that such a division is impossible. Rather, within the PA, lexical conceptual structure is to be understood as the entire meaning of words, whatever that proves to be. Only certain aspects of this are accessed by the syntax–semantics interface, for instance argument structure and animacy. These are the parts ‘relevant to syntax’, but they are otherwise thoroughly integrated into the system of meaning, not a separate level or a separate kind of structure.

Patterns in the lexicon

..................................................................................................................................

.. Schemas vs. rules

An important tenet of PA and RM, shared with other theories such as LFG (Bresnan a), HPSG (Pollard and Sag ), Tree Adjoining Grammar (Joshi ), and Construction Grammar (including Construction Morphology), is that the grammar is stated entirely in terms of declarative patterns—which we call schemas—rather than in terms of procedural rules that apply in serial order to convert an ‘input’ into an ‘output’. All regularities in inflection and word-formation come to be stated in this fashion. For instance, the words in (a) and (b) are licensed by the schemas in (c) and (d).

()
a. Semantics: [PLURAL (CAT)]
   Syntax: [Npl N, ]
   Phonology: /kæts/

b. Semantics: [MAKE/BECOME (HARD)]
   Syntax: [V A aff]
   Phonology: /hardən/

c. Semantics: [PLURAL (X)]
   Syntax: [Npl N, ]
   Phonology: / . . . s/

d. Semantics: [MAKE/BECOME (X)]
   Syntax: [V A aff]
   Phonology: / . . . ən/
Schemas (c) and (d) have the same format as words, with the exception that they contain variables. Semantic variables are notated with capital letters, phonological variables are indicated by dots, and morphosyntactic variables are shown with their category label. Thanks to the shared format, schemas—like words—can be taken to be listed in the lexicon. Thus, as in Construction Grammar and Construction Morphology, both words and rules are stored pieces of linguistic structure, with complete continuity between them. A lexical item is more wordlike to the extent that its content is completely specific, such as the entry for cats in (a); it is more rule-like to the degree that it contains variables, like the schema for the plural in (c). Schemas and procedural rules are sometimes thought to be notational variants. However, differences emerge as soon as one looks a little more deeply. The first difference, of course, is that a theory based on procedural rules has two independent constructs—a lexicon and rules—whereas the constructional theory states words and schemas in a common format. As will be seen shortly, this is not just a difference in elegance. A second difference concerns how stored lexical items are combined to produce novel complex words and utterances. In procedural theories, the rules build structures by

applying in a determinate sequence, either from the top down or from the bottom up. In constructional theories, pieces of structure stored in the lexicon are ‘clipped together’ by the process of unification, such that variables in schemas come to be instantiated. There is no inherent order of derivation: structures can be assembled from the bottom up, from the top down, or from left to right. This free order of assembly in constraint-based and constructional theories lends itself to being directly implemented in contemporary opportunistic theories of language processing (Jackendoff , ; Sag ). A third difference concerns the matter of storage. The usual interpretation of generative rules is that they are the source of combinatoriality in language: they produce novel composite forms, with complete generality. Yet there is abundant evidence that many composite items in language cannot be constructed by the application of general rules. Examples include:

• Words with only partial compositional semantics. For example, football has something to do with feet and balls, but its full semantics is highly idiosyncratic (and different on different continents) and has to be learned and stored.
• Words with irregular phonological relations to their bases. One has to learn and somehow store the fact that the past tense of sing is sang, but the past tense of cling is clung.
• Words with predictable meanings, but whose existence has to be registered in the lexicon. For instance, deadjectival verbs in ‑en, e.g. widen, brighten, redden, are semantically uniform: ‘make or become (more) A’. But not every predicted form exists, e.g. *louden, *stouten, *safen.4 So the existing forms have to be distinguished from the nonexistent ones.
• Words with an identifiable affix but a non-word base (so-called bound roots or cranberry morphs), such as commotion (*commote) and impetuous (*impet).
These are far from rare; for instance, hundreds of English adjectives in ‑ous are of this character. They cannot be built by rule from smaller parts, since the base is not an independently existing part. In addition, experimental results show that many composite forms, even regular ones, are stored (see, e.g., Giraudo and Grainger ; Andrews, Miller, and Rayner ; Libben ; Baayen, Wurm, and Aycock ; Kuperman et al.  for compounds; and Baayen, Dijkstra, and Schreuder ; Sereno and Jongman ; Baayen et al. ; Sandra and Fayol ; Baayen  for inflection). Consequently, a large number of complex lexical items must, for one reason or another, be stored in the lexicon. Traditional procedural rules are unprepared for such a situation. On one hand, they overgenerate: any rule that produces widen will also produce *louden, and any rule that produces sang will also produce *clang. On the other hand, they undergenerate: they cannot predict the idiosyncratic semantics of football, and they have no base from which to derive impetuous. Moreover, in a procedural theory of grammar, relegating an item or pattern to the lexicon often means that it is no longer interesting. As Spencer (a: ) puts it: ‘Much of the derivational morphology discussed in the literature is . . . of the occasional, accidental kind, and therefore of

comparatively little interest to grammar writers (though it may be of interest to lexicographers, historians of language, psycholinguists, language teachers, and others).’ Section .. will show how a declarative theory based on schemas can offer a more inclusive model.

4 This is true even taking into account the phonological constraints on the base: it must be monosyllabic and end with an obstruent.
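The over- and undergeneration problem can be made concrete with a toy model. The Python fragment below is our own gloss, not the chapter's formalism: it stores words and a schema in one lexicon and shows that a nonproductive schema can relate stored items without licensing unstored ones. All names and the crude orthographic shape-check are invented for illustration.

```python
# Illustrative sketch: words and schemas in one lexicon. The -en schema
# *relates* stored pairs but does not *generate*: *louden is simply absent,
# so procedural-style overgeneration does not arise.

stored_words = {"wide", "widen", "hard", "harden", "loud", "football"}

def en_schema_relates(adjective: str, verb: str) -> bool:
    """Relational function: does the schema link these two stored items?"""
    both_stored = adjective in stored_words and verb in stored_words
    # crude orthographic check standing in for the co-indexed base + /ən/
    shape_ok = verb.startswith(adjective[:3]) and verb.endswith("en")
    return both_stored and shape_ok

def licensed(verb: str) -> bool:
    """Nonproductive pattern: a form is licensed only if it is stored."""
    return verb in stored_words

assert en_schema_relates("hard", "harden")           # schema links a stored pair
assert licensed("widen") and not licensed("louden")  # no overgeneration
```

Idiosyncratic stored items such as football sit in the same set as transparent ones, so undergeneration of noncompositional meanings is not an issue either: storage, not derivation, carries them.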

.. Lexical redundancy rules

A widely accepted approach to the difficulties enumerated above is the Lexicalist Hypothesis (Chomsky , ): regular, predictable patterns belong to syntax, while more idiosyncratic patterns fall under the purview of ‘lexical redundancy rules’ or simply ‘lexical rules’, which apply before words are inserted into syntactic structures. This distinction became a foundational assumption of HPSG, LFG (where it was called Lexical Integrity), and many morphological theories (e.g. Aronoff ; Stump ; Anderson ; Spencer a). Depending on the theory, lexical rules can be viewed either as another layer of generative procedures, or else as establishing relations among stored items. The latter was the approach of Jackendoff (), which proposed an ancestral version of the schemas in, for instance, Bochner (), Booij (a), and the present framework. In RM, there is no occasion to posit separate lexical rules: all rules are ‘lexical’ in that they are pieces of linguistic structure containing variables, stored in the lexicon. Let us examine the relation between the adjective hard (a), the verb harden (b) (=(b)), and the schema (c) (=(d)) that captures the general relation that they instantiate. These are now enriched with a co-indexation notation which allows us to specify which parts of each entry correspond to each other and to parts of other entries.

()
a. Semantics: HARD1
   Syntax: A1
   Phonology: /hard/1

b. Semantics: [MAKE/BECOME (HARD1)]2
   Syntax: [V A1 aff3]2
   Phonology: /hard1 ən3/2

c. Semantics: [MAKE/BECOME (Xx)]y
   Syntax: [V Ax aff3]y
   Phonology: / . . . x ən3/y

One should think of the co-indices not as self-standing ‘lexical indices’, but rather as marking the end of association lines: every subscript has to be paired with an identical subscript at the other end. In (), subscript 1 connects the three components of hard. But it also connects the same three components inside the structure of harden—and it connects the corresponding parts of the two words. Subscript 2 connects the three components of the whole word harden. Subscript 3 connects the affix’s phonology with its morphosyntax. Turning to the schema in (c), this says that English has verbs that (a) are based on an adjective with the meaning ‘X’; (b) have the affix /ən/ tacked onto the end of the base; and (c) express the notion ‘make or become (more) X’. The variable co-indices x and y allow this schema to be related to any word that has a corresponding pattern of co-indexation (e.g. harden and widen). On the other hand, the affix /ən/ itself is constant throughout all instances, so in schema (c) as well as in harden (b) it is encoded with a constant numerical co-index 3. Schema (c) is a generalization over existing words. As such, it expresses a relation among lexical items with corresponding structure. But it is not committed to productivity and it does not ‘generate’ its instances. Hence, it functions as a lexical redundancy rule.
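The co-indexation notation can be rendered schematically in code. The following Python sketch is a deliberately simplified invention of ours (the dict keys, index encoding, and helper function are not the chapter's notation); it shows how shared indices mark the association lines that tie the base hard into every level of harden.

```python
# Our simplified rendering of co-indexation: each entry is a dict of levels,
# with shared numeric indices standing in for association lines.

hard = {"sem": ("HARD", 1), "syn": ("A", 1), "phon": ("/hard/", 1)}

harden = {
    "sem": ("MAKE/BECOME(HARD)", 2),   # index 2: the whole word
    "syn": ("[V A-1 aff-3]", 2),       # base shares index 1 with 'hard'
    "phon": ("/hard-1 ən-3/", 2),      # affix /ən/ carries constant index 3
}

def shares_base(word: dict, base: dict) -> bool:
    """Do the word's syntax and phonology both mention the base's index,
    i.e. is the base linked into the word at every level?"""
    idx = f"-{base['syn'][1]}"
    return idx in word["syn"][0] and idx in word["phon"][0]

assert shares_base(harden, hard)   # harden contains hard at every level
```

A schema in this encoding would simply replace the constant parts (HARD, /hard/) with variables while keeping the constant affix index, which is what makes the relational reading, linking without generating, natural.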

An important feature of this treatment is that schemas like (c) do not require any additional components in the theory: they are formally identical to productive schemas. Consider the English regular plural schema in (), adding the relevant co-indexation to (c) above. ()

Semantics: [PLURAL (Xv)]z
Syntax: [N Nv, 6]z
Phonology: / . . . v s6/z

The schemas in (c) and () are formally exactly parallel: they are pieces of phonological, syntactic, and semantic structure containing a variable on each level. The only major difference between them is that () is productive while (c) is not. The next question, then, is how the grammar makes this distinction.

.. Productivity

An easy solution is to mark each schema with a feature for whether it is productive or not. This is the approach taken in Booij (a) (see also Masini and Audring, Chapter  this volume), and it echoes Lakoff’s () distinction between major (i.e. productive) rules and minor (i.e. nonproductive) rules. A more fine-grained alternative is to mark productivity not on a schema as a whole, but on its variable, expressing the degree to which the variable is open to new lexical material (gradient productivity is assumed by, among others, Aronoff , Baayen and Lieber , and Baayen ; see also Hüning, Chapter  this volume and Bauer  for discussion). Marking productivity on the variable presents an interesting possibility: a schema could conceivably contain one productive (or open) variable and one nonproductive (or closed) variable. And indeed such schemas exist. For instance, English has four different patterns for naming geographical features, shown in (a–d). The italicized words name the type of geographical feature.

()
a. Arrowhead Lake, Loon Mountain, Wissahickon Creek, Laurel Hill, Sugar Island
b. Mount Everest, Lake Michigan, Cape Cod
c. the Indian Ocean, the Black Sea, the Hudson River, the White Mountains, the San Andreas Fault
d. the Bay of Fundy, the Gulf of St. Lawrence, the Cape of Good Hope

The choice of name is completely productive: if we wish to name a mountain for Morris Halle, we have no hesitation in calling it Morris Mountain or Mount Morris. On the other hand, the variable for the type of geographical feature is not productive. One has to learn which words go in which patterns; for instance *Mountain Morris (pattern b) and *the Mountain Morris (pattern c) are impossible. Hence the schemas for these patterns have one variable of each type, as in (). Here the productive variable—the actual name—is notated with a double underline, and the nonproductive variable—the type of geographical feature—has a single underline; the items the and of in (c,d) are constants.

 ()

     a. b. c. d.

NN NN the N N the N of N

[e.g. Loon Lake] [e.g. Mount Washington] [e.g. the Lehigh River] [e.g. the Gulf of Mexico]

Such a situation cannot be encoded if the distinction between productive and nonproductive is marked on the schema as a whole. The patterns in ()–() offer another kind of evidence for the PA’s treatment of morphology and against a strict division between lexicon and grammar, as in the Lexicalist Hypothesis. On one hand, (a,b) look like compounds. For example, they can be preceded by adjectival modifiers: beautiful Arrowhead Lake, forbidding Mount Everest. On the other hand, (c,d) extend into the phrasal domain, because they have a determiner that can be followed by a modifying adjective: the majestic Hudson River, the dangerous Bay of Fundy. Pattern (d), moreover, has an of-phrase, characteristic of phrasal NP structure. The PA does not force us to form the first two ‘in the lexicon’ and the other two ‘in the grammar’, or to derive any of them from the others. Rather, all four of the schemas in () are in the lexicon: (a,b) are morphosyntactic; (c), in which the two nouns still form a compound, is a mixed morphosyntactic and phrasal schema; and (d) is purely phrasal.
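The contrast between an open (productive) and a closed (nonproductive) variable can be mimicked in a toy checker. The Python sketch below is entirely our own illustration, with invented data: the name slot accepts any word, while the feature-type slot is restricted to a listed set, so Morris Mountain and Mount Morris are licensed but *Mountain Morris is not.

```python
# Illustrative sketch of per-variable productivity (data invented):
# the {name} variable is open; the {feature} variable is closed, drawing
# only on the words listed for each pattern.

PATTERNS = [
    ("{name} {feature}", {"Lake", "Mountain", "Creek", "Hill", "Island"}),
    ("{feature} {name}", {"Mount", "Lake", "Cape"}),
    ("the {name} {feature}", {"Ocean", "Sea", "River", "Mountains"}),
    ("the {feature} of {name}", {"Bay", "Gulf", "Cape"}),
]

def licensed(expr: str) -> bool:
    """Try every pattern: any word may fill the open name slot, but the
    feature slot must come from that pattern's closed set."""
    for template, features in PATTERNS:
        for feat in features:
            for name in expr.replace("the ", "").split():
                if template.format(name=name, feature=feat) == expr:
                    return True
    return False

assert licensed("Morris Mountain")       # open name slot: fine
assert licensed("Mount Morris")
assert not licensed("Mountain Morris")   # 'Mountain' not in pattern (b)'s set
```

Marking productivity on the schema as a whole could not express this: each of the four patterns mixes one open and one closed variable.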

.. Two functions of schemas

The architecture laid out so far has three major advantages. First, having schemas and words in the same ‘place’ results in a network in which unproductive schemas can be linked to their instances without having to generate them. This allows for the relation to be partial, leaving room for idiosyncrasies such as the noncompositional semantics of recital, the irregular phonological relation of destroy and destruction, and the absence of a root as in impetuous. This solves the problems of undergeneration and overgeneration that we have found with traditional rules (§..). Second, productive schemas are also in the same ‘place’ as their instances, so they, too, can be related to stored words. For instance, compounding is productive, and speakers encounter new instances all the time without notice. But at the same time there are thousands of conventionalized compounds like football with idiosyncratic meanings, which are linked to the compound schema. Similarly, the productive schema for plurals links to lexically listed regular plurals such as trousers and glasses (‘spectacles’). Even the morphologically regular cats has to be listed (though without its semantics) as part of the idiom raining cats and dogs, while still falling under the plural schema. Furthermore, experimental evidence suggests that highly frequent regular plurals are likely to be stored rather than generated online (Baayen ; Nooteboom, Weerman, and Wijnen ),5 so that cats is more likely to be stored than, say, coelacanths. In each of these cases, the stored items fall under the very same schemas as items that are actively composed. The relational architecture provides a natural space for such phenomena.

5 Psycholinguists see evidence for even more prolific storage. De Vaan, Schreuder, and Baayen () offer evidence that regular complex forms may already leave a trace in long-term memory after just a single exposure.

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi




A third advantage of the present architecture is that the distinction between productive and nonproductive schemas is not so stark. The major difference lies in the openness of the variable and, consequently, in the functions schemas can have. Schemas with an open variable have a generative function, used in creating new expressions—the function that has been most prominent in much linguistic theorizing. But in addition, all schemas— productive or unproductive—have a relational function, in which they link to existing lexical items stored in the lexicon and capture generalizations among them. This leads to a notable change of perspective: one can think of productive schemas as just like nonproductive ones, except that in addition, they allow online instantiation of their variables. In other words, productive schemas are ‘ordinary’ schemas that have ‘gone viral’.6 Such a view is possible only in a constructional framework like Construction Grammar/Morphology or the PA/RM. And this turns on its head the notion that the study of language should focus primarily on productive phenomena. Rather, the focus should be equally if not more on the relationships among items in the lexicon, that is, on the texture of linguistic knowledge.

.. Are nonproductive schemas necessary?

We take it as given that productive schemas are necessary in order to account for the construction of novel morphological forms. However, an important question for morphological theory is whether there actually are such things as nonproductive schemas. We have just pointed out some interesting consequences of assuming that they are necessary. However, this assumption is not universal. For instance, Pinker and colleagues (e.g. Pinker and Prince ; Pinker ) claim that there are rules (in our terms, schemas) for productive morphological patterns such as the English regular past tense, but that nonproductive patterns such as English ablaut past tenses are only a matter of association and analogy.

There certainly are cases of lexical relations that do not fall under a larger pattern and which might therefore just be matters of association. For instance, the pairs in () appear to be unique in their phonological relations to each other; there is no more general rule or schema that relates them.

()

a. bomb/bombard
b. hate/hatred
c. laugh/laughter
d. humble/humility
e. bequeath/bequest
f. Glasgow/Glaswegian

A more questionable case is horror/horrify/horrid/horrible/horrific/horrendous. The fact that these six words form a family might warrant a nonproductive schema for horr-, but the small size of the family might speak against it. (Marchand : – shares our doubt.)

6 Could there also be schemas that have only the generative function and lack the relational function? No, because one can always store newly generated or encountered instances of a schema, which then fall under the schema’s relational function.


On the other hand, schemas seem intuitively far more attractive for nonproductive patterns like [V A ‑en], with about fifty instances, and [N V ‑ion], with hundreds or thousands. Ultimately, we think, the question of whether there are schemas for particular cases is probably to be settled by psycholinguistics, not by morphological theory. We might even find that individuals differ in how analytically they treat the language, and in how eager they are to form schemas (unconsciously, of course).7

One reason for positing nonproductive schemas comes from language acquisition. An essential part of this process is constructing (or discovering) the productive rules of the language, on the basis of primary linguistic input. Roughly, the learner’s procedure must involve observing some collection of words with similar phonological and semantic structure, and formulating a hypothesis about the general pattern they instantiate (Culicover and Nowak ; Tomasello ). What is the form of such a hypothesis? Within the PA/RM and other constructionist theories, a hypothesis can be stated in the form of a tentative schema; its constants reflect the similarities among the words, and its variables reflect the differences among them. However, learners have no way of knowing in advance whether a pattern they observe—and hence the hypothesized schema—will be fully productive or not. So they will inevitably create a lot of schemas that fail the criteria for productivity (whatever these criteria may be—see, for instance, Baayen ; Yang ; O’Donnell ).

What happens to failed schemas? If the brain does not countenance nonproductive schemas, these hypotheses have to be expunged. But we see little reason for the brain to throw out information about linguistic patterns if it can be useful. (For more discussion, see §. and Jackendoff and Audring forthcoming.) In RM’s view of schemas, productive and nonproductive schemas are in exactly the same format.
If a learner extracts a pattern as a schema, it might or might not be a productive one, and the next job is to determine whether this schema is productive or not. This is not a transcendental distinction between a rule ‘in the grammar’ and one ‘in the lexicon’, as in the Lexicalist Hypothesis, or between Rule and No Rule, as in Pinker’s approach. It is just a matter of determining the proper diacritic on the schema’s variable—open vs. closed. Formally, this is a relatively small and local issue. Moreover, if a schema turns out to be nonproductive, this does not mean it is flat-out wrong. The observed pattern among the observed instances may still remain valid. And if it so happens that a schema is found to be productive, it still does not have to relinquish its status as a lexical redundancy rule. Rather, as suggested in §.., it retains the function of capturing generalizations among stored items.

One way that nonproductive schemas can be useful is as an aid in the acquisition of new instances. When one encounters a new word, one presumably seeks patterns into which it fits. Without schemas, there are endless ways a new word can be similar to existing words, along one dimension with one word (e.g. initial syllable), another dimension with another word (meaning), a third dimension with a third word (final syllable), and so on. A schema codifies dimensions of similarity that have been found significant, in effect ‘precompiling’ the similarities among all its instances.

7 For an experimental study of individual variation in the systematicity of compounding, see Gleitman and Gleitman (). Pinker () discusses differences in storage versus computation of English past tenses across different neurological populations.




. F  

..................................................................................................................................

.. Inheritance with impoverished entries and full entries

As indicated by the name, Relational Morphology puts a strong emphasis on the relations between lexical items. We now begin to dig a little deeper into the meaning of our formalization. A common position (e.g. Booij a) is that complex words, their bases, and their schemas are related through an inheritance hierarchy, as illustrated in (). The items lower in the hierarchy are taken to inherit from items higher in the hierarchy to which they are connected. Thus harden inherits from hard, whiten inherits from white, and both inherit from the schema.

()

[V A-en]

ahard

[V ahard en]

awhite

[V awhite en]

Inheritance is especially attractive because it has also been frequently invoked in the organization of concepts. Hence it is a domain-general theoretical construct that requires no special machinery for language alone (Murphy ). But what does inheritance mean? What do the lines signify? We will explore two interpretations of inheritance, rejecting one and developing a more flexible version of the other.

A common interpretation of inheritance (e.g. Collins and Quillian ; Pollard and Sag ; and in a sense Chomsky and Halle ; see also Brown, Chapter  this volume) is that the lexicon is maximally economical: any information present in a higher node of the hierarchy does not have to be specified in a lower node. For instance, harden inherits everything from the two higher nodes and therefore can be listed like this:

()

ahard

[V A-en]

[..]

The base of a word like tremendous is not listed in the lexicon, so it inherits only from the schema for the affix and has to list the rest, as in ().

()

[a N-ous]

[tremend.]


Following Jackendoff (), we call this the impoverished entry theory: the idea is that lexical items contain only information that cannot be inherited from elsewhere. Despite its intuitive appeal, there are numerous reasons to reject this position. We will mention two (others appear in Jackendoff  and Jackendoff and Audring forthcoming).

First, it implies that in order to retrieve a complex item such as harden from long-term memory, one must fill in its missing content from its superordinates in the hierarchy. Hence a complex item should always take longer to process than its base. However, experimental evidence proves this prediction false. For example, Baayen, Dijkstra, and Schreuder () report that if a Dutch noun stem (appearing in either singular, plural, or diminutive) is high frequency, and if in addition the plural is more frequent than the singular, then the plural is accessed as quickly as the singular—not more slowly, as the impoverished entry theory would predict.

Second, consider the theory of acquisition. In order to construct a new schema, one must generalize over existing lexical items whose details are present in memory. The impoverished entry theory has to claim that once the schema is established, the redundant details of all the instances used to establish it are erased from memory, in order to optimize the lexicon. We find this implausible (though psycholinguistic evidence might prove us wrong). Similarly, in the course of acquiring a new complex word, one must first discover its details, and only then determine what schemas it falls under and what its base is. The impoverished entry theory entails that as soon as one establishes the new word’s relation to a base and to one or more schemas, all the redundant features are immediately expunged. Again, we find this implausible. (Similar arguments can be found in Langacker  and Booij b.)
The basic difficulty with the impoverished entry theory is that it assumes there to be a premium on economy and elegance, sometimes formalized as ‘minimum description length’. Our question is whether economy is the right criterion when it comes to storage in the brain. A plausible alternative is that the brain embraces redundancy, at least up to a point. For instance, languages seem to have no problem marking thematic roles redundantly, through word order, case marking, and verb agreement. Given these precedents, we find it attractive to consider a full-entry theory, in which lexical items are encoded in their entirety, even where redundant.8 For instance, the full-entry theory says that the lexical entry of harden is like (b) (repeated in (b) below), with its full structure, rather than like (), in which it has been evacuated of content. The role of a schema, then, is not to permit material to be omitted from other entries, but to confirm or codify or motivate generalizations among lexical entries. The notion of motivation is invoked widely in morphology, for example Booij (a, b), Cognitive Grammar (Lakoff ; Goldberg ; Radden and Panther ), and indeed by de Saussure ([]: ).

8 We acknowledge that there are important questions about what it means to code an item ‘in its entirety’. Does that include phonetic detail? Does it include semantic detail that might be termed ‘real-world knowledge’? We must leave these questions open. However, we do not subscribe to an exemplar-theoretic view in which one stores all experienced tokens in full detail and makes no abstractions.
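The contrast between the two storage policies can be made concrete with a toy sketch. Everything below is our own illustration, not part of RM's (or anyone's) formalism: under impoverished entries, retrieving harden requires reassembling its content from its base and schema, whereas under full entries it is a direct lookup.

```python
# Toy contrast between the two storage policies discussed in the text.
# All names and structures here are illustrative assumptions.

FULL_LEXICON = {
    "hard":   {"sem": "HARD", "syn": "A", "phon": "/hard/"},
    "harden": {"sem": "MAKE/BECOME(HARD)", "syn": "V", "phon": "/hardən/"},
}

# Impoverished entry: harden stores only pointers to its base and schema.
IMPOVERISHED_LEXICON = {
    "hard":   {"sem": "HARD", "syn": "A", "phon": "/hard/"},
    "harden": {"base": "hard", "schema": "V_A_en"},
}

SCHEMAS = {
    "V_A_en": {"sem": "MAKE/BECOME({sem})", "syn": "V", "phon": "{phon} + /ən/"},
}

def retrieve_impoverished(word):
    """Retrieval must reassemble content from base and schema (extra steps)."""
    entry = IMPOVERISHED_LEXICON[word]
    if "base" not in entry:
        return entry  # simple entries are retrieved directly
    base = IMPOVERISHED_LEXICON[entry["base"]]
    schema = SCHEMAS[entry["schema"]]
    return {
        "sem": schema["sem"].format(sem=base["sem"]),
        "syn": schema["syn"],
        "phon": schema["phon"].format(phon=base["phon"]),
    }

# Same semantics either way, but the impoverished route takes extra work,
# which is what the processing evidence cited above speaks against.
print(retrieve_impoverished("harden")["sem"])  # MAKE/BECOME(HARD)
print(FULL_LEXICON["harden"]["sem"])           # MAKE/BECOME(HARD)
```

The extra lookup-and-fill steps in `retrieve_impoverished` correspond to the predicted (but unattested) processing cost for complex items.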




Relational Morphology fleshes out the notion of motivation in terms of shared structure. To illustrate, we return to our treatment of harden, repeated here. ()

a. Semantics: HARD1
   Syntax: A1
   Phonology: /hard/1

b. Semantics: [MAKE/BECOME (HARD1)]2
   Syntax: [V A1 aff3 ]2
   Phonology: /hard1 ən3 /2

c. Semantics: [MAKE/BECOME (Xx)]y
   Syntax: [V Ax aff3 ]y
   Phonology: / . . . x ən3 /y

The shared structure is encoded in the co-indices. Subscript  ties together the semantics, syntax, and phonology of the word hard in (a). It also ties together parallel constituents of harden in (b). At the same time, though, it links these constituents of (b) to the corresponding parts of (a), thereby identifying the structure shared between the two entries. Similarly, subscript  picks out structure shared between harden and schema (c): the affix and its phonological realization. Finally, as suggested above, the variable subscripts x and y in (c) indicate structure shared with anything that has parallel structure—that is, all the instances of the schema. An immediate advantage of the notation in () over standard notations for inheritance such as () is that it enables us to pinpoint the regions of similarity among items.
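The bookkeeping behind co-indexation can be sketched as a small data structure. This is purely our own illustration (names and encoding are assumptions, not RM's notation): entries are bundles of (content, co-index) pieces per component, and the shared structure between two entries is simply the set of co-indices they have in common.

```python
# Illustrative sketch: lexical entries as {component: [(content, co-index)]}.
# Co-indices shared across entries mark shared structure, as in hard/harden.

hard = {
    "semantics": [("HARD", 1)],
    "syntax":    [("A", 1)],
    "phonology": [("/hard/", 1)],
}

harden = {
    "semantics": [("MAKE/BECOME(HARD)", 1), ("whole word", 2)],
    "syntax":    [("A", 1), ("aff", 3), ("V", 2)],
    "phonology": [("/hard/", 1), ("/ən/", 3), ("/hardən/", 2)],
}

def shared_indices(entry_a, entry_b):
    """Co-indices appearing in both entries: the regions of shared structure."""
    def indices(entry):
        return {i for pieces in entry.values() for (_content, i) in pieces}
    return indices(entry_a) & indices(entry_b)

# hard and harden share co-index 1 (the base), but not 2 or 3.
print(shared_indices(hard, harden))
```

Because the relation is just set intersection, it is inherently symmetric, which anticipates the point about nondirectionality made in the next section.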

.. Inheritance without inherent directionality

However, inheritance as usually conceived—even within full-entry approaches—is not general enough for the full range of morphological relations. First, inheritance is usually considered to be asymmetrical and top-down, such that the general fills in the particular: a complex item such as harden inherits from its base (hard) and the schema [V A ‑en]. However, as we observed above, schemas have to be acquired from the bottom up, by generalizing over some collection of previously stored examples. Hence in a sense, the instances motivate the schema, rather than (or in addition to) the other way around. This conundrum does not arise with the co-indexing notation in (), which is not inherently directional. We do not have to decide whether (c) is motivated by (b) or vice versa: the schema and its instances mutually motivate each other.

A further problem occurs with pairs like assassin/assassinate. Phonologically, assassinate is clearly built on the base assassin and should inherit from it. But semantically the relation goes the other way: an assassin is someone who assassinates people. If the phonological relation actually mirrored the semantics, we would get an ordinary agentive nominal *assassinator. Similarly for linguist/linguistics: in the standard version of inheritance, linguist would be the ancestor of linguistics on the phonological level, but its descendant on the semantic level. Example () shows how such cases can be treated in the co-indexing notation. The semantics of (b), co-indexed , is shared with part of (a), while the phonology of (a), co-indexed , is shared with part of (b). (Co-index , which identifies the affix, is shared with the ‑ate schema, not shown here.) Beyond this, there is no need to say which word is derived from which.


()
a. Semantics: [PERSON WHO [MURDERS POLITICIAN]8 ]7
   Syntax: N7
   Phonology: /əsæsən/7
b. Semantics: [MURDER POLITICIAN]8
   Syntax: [V N7 aff9 ]8
   Phonology: / /əsæsən/7 /eyt/9 /8

In these sorts of examples, the PA notation is invaluable, because it makes it possible to correlate phonology, syntax, and semantics independently.

.. A-morphousness

Next consider a word like gorgeous, which has a legitimate affix attached to a non-word base. Like all words with the affix ‑ous, this word is an adjective. But since the base is not a word, it has no syntactic category. There are hundreds of such ‑ous words (e.g. scrumptious, curious, impetuous, supercilious, meticulous), in addition to those with a genuine noun base, such as joyous. In RM, we can encode gorgeous as (a), where the morphosyntax of the base is blank: this part of the word is in effect unparsed in morphosyntax, although it does have phonology. Example (b) offers a schema that encompasses both lexical and nonlexical bases; the angle brackets < > indicate optional specifications on the variable.

()
a. Semantics: BEAUTIFUL10
   Syntax: [A — aff11 ]10
   Phonology: /gordʒ əs11 /10

b. Semantics: [Property ]w
   Syntax: [A aff11 ]w
   Phonology: / . . . əs11 /w

Since gorge- is not represented in morphosyntax, one might consider gorgeous to be partially ‘a-morphous’ in the sense of Anderson (). The words in () present a more extreme case. ()

million, billion, trillion, . . . ; zillion, godzillion, kajillion

While the pattern is easily recognizable and can be creatively extended, as in the last three examples, the morphological status of ‑illion is questionable. It cannot be a suffix, as that would imply that m-, b-, z- and so on are bound roots, many of which fail to meet the phonological requirements for a root. The alternative, treating ‑illion as a root and the consonants as prefixes, would imply a collection of nonce prefixes that do not occur anywhere else in the language. Instead of settling for either of these unsatisfactory analyses, we can say that these particular words are completely a-morphous. The phonological string ‑illion is perhaps associated with a meaning ‘very large number,’ but there is no morphosyntactic category such as Affix or Noun associated with this meaning. Example (a) shows an entry for trillion and (b) is a possible ‑illion schema. ()

a. Semantics: 1212 Syntax: Numeral12 Phonology: / /tr/ /illion/13 /12

b. Semantics: LARGE NUMBERx
   Syntax: Numeralx
   Phonology: // . . . / /illion/13 /x




Example () treats trillion as simply a morphosyntactic Numeral with no internal morphosyntactic structure. At the same time, the phonology and semantics are correlated, and the schema is available for coining jocular number words such as kajillion. So here is a fully a-morphous example in Anderson’s sense. A similar solution can be applied to phonaesthemic patterns such as the gl- in glimmer, glow, gleam, and so on. Anderson () advocates the position that morphologically complex items have no internal structure. We are urging a more nuanced view: items like harden and joyous do have internal morphosyntactic structure, but items like gorgeous have only partial morphosyntactic structure, and items like trillion lack it altogether. Nevertheless, all of them can be partially motivated by schemas. This heterogeneous view of morphosyntactic complexity is of a piece with RM’s overall outlook on the character of the lexicon.

.. Sister relations

The non-word bases in the previous section bring us to another case of motivation that poses a serious problem for inheritance. Consider pairs like ambition/ambitious and cognition/cognitive, which are clearly related, but for which there are no lexical bases *ambi(t) and *cogni(t) that both members of the pair can inherit from. The standard construal of inheritance requires us to say that one of the pair is basic and the other ‘derived’. Yet there is no clear way to say whether ambition inherits from ambitious or vice versa. We would prefer simply to say that they share structure and are related as equal ‘sisters’, without a hypothetical bound root as the ‘mother’ that ties them together.

The co-indexing notation makes it straightforward to express such sister relations, for instance as in (). The sisters share part of their semantics (subscript ) and part of their phonology (subscript ). But they do not share their affixes: subscript  is co-indexed with the ‑tion schema (not shown) and subscript  with the ‑tious schema (an allomorph of the ‑ous schema in (b)). Finally, since *ambi is not a word and has no part of speech, we leave its morphosyntax blank, as we did for gorgeous in (a). Crucially, unlike hard/harden (a,b), neither entry is contained in the other, and again we do not need to decide which is derived from the other. This is what makes them sisters rather than a standard mother–daughter inheritance configuration.

()

a. Semantics: DESIRE14
   Syntax: [N — aff16 ]14
   Phonology: / /æmbɪ/18 /ʃən/16 /14

b. Semantics: [HAVING (DESIRE14)]15
   Syntax: [A — aff17 ]15
   Phonology: / /æmbɪ/18 /ʃəs/17 /15

.. Sister relations among schemas

As mentioned in §.., some sister relations, such as bomb/bombard, appear to be one-off pairings. Others are more systematic. A case discussed by Booij (a) is the pairing of names of ideologies with the names of their adherents, such as in ():

()

a. pacifism/pacifist
b. altruism/altruist
c. solipsism/solipsist
d. impressionism/impressionist


To express this generalization, we have to establish a relation not only between two sister words as in (), but also between two sister schemas: the ‑ism schema for ideologies and the ‑ist schema for their adherents. Booij (a: –; Booij and Masini ) calls this configuration a ‘second-order schema’ (see also Nesset ; Kapatsinski ; Masini and Audring, Chapter  this volume); we call it sister schemas. While the notion has been developed elsewhere, RM offers a more precise way to notate the relations. The key lies in binding variables across two or more schemas, so that the same material instantiates both. The links between such variables are notated with Greek letters, as in (). ()

a. Semantics: IDEOLOGYβ
   Syntax: [N — aff19 ]β
   Phonology: / / . . . /α /ɪzm̥/19 /β

b. Semantics: [ADHERENT (IDEOLOGYβ )]z
   Syntax: [N — aff20 ]z
   Phonology: / / . . . /α /ɪst/20 /z

Let us unpack this: (a) says there can be nouns that denote an ideology and that end with the affix ‑ism (co-index ). The variable co-index β ties the three components together; (b) says that there can be nouns that denote the adherents of an ideology and that end with the affix ‑ist (co-index ); the variable co-index z ties these three components together. So far these schemas are just like the ‑ous schema in (b). But what makes them more interesting is that IDEOLOGY in (a) is bound to IDEOLOGY in (b) by the Greek co-index β, and the part of the phonology in (a) that precedes the affix is linked to the corresponding part in (b) by the Greek co-index α. Thus, this pair of schemas together says that for any noun ending in ‑ism that denotes an ideology, it should not be surprising to find another noun ending in ‑ist that is phonologically the same up to the affix (co-index α), and that denotes an adherent of that very ideology (co-index β)—and, since co-indexation is nondirectional, vice versa as well. Booij notates sister schemas as a special relation between schemas (see also Masini and Audring, Chapter  this volume): ()

< Schema A > ≈ < Schema B >

In the present approach, sister schemas are a generalization of sister relations between words. Where the sister relation in () links parts of two words with numerical co-indices, the second-order schema in () links variables across two schemas with Greek-letter co-indices. Thus the present notation achieves a somewhat greater degree of generality, and further brings out the continuity between words and rules. As in Construction Morphology, the notion of sister schemas can be fruitfully applied to any paradigmatic relation, such as patterns of inflection, stem and affix allomorphy, and truncation.
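The work done by the Greek co-index α can be mimicked with a toy sketch: the same captured material instantiates both sister patterns. This is our own illustration under assumed names, using regular-expression capture groups as a stand-in for the bound variable, and it ignores real allomorphy.

```python
import re

# Toy illustration of sister schemas: the capture group "alpha" plays the
# role of the Greek co-index α shared by the -ism and -ist patterns.

ISM = re.compile(r"^(?P<alpha>\w+)ism$")  # [N ...α -ism] 'ideology'
IST = re.compile(r"^(?P<alpha>\w+)ist$")  # [N ...α -ist] 'adherent'

def adherent_of(ideology_noun):
    """Given an -ism noun, predict its sister -ist noun (None if no match)."""
    m = ISM.match(ideology_noun)
    return m.group("alpha") + "ist" if m else None

def ideology_of(adherent_noun):
    """The same binding read in the other direction: -ist noun to -ism noun."""
    m = IST.match(adherent_noun)
    return m.group("alpha") + "ism" if m else None

print(adherent_of("pacifism"))   # pacifist
print(ideology_of("altruist"))   # altruism
```

Having both directions fall out of one shared binding reflects the nondirectionality of co-indexation emphasized in the text.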

. S  

..................................................................................................................................

We have explored here the outlook of Relational Morphology, a theory of morphology grounded in the framework of the Parallel Architecture. The fundamental goal is to describe what a language user stores in memory and in what form, and to describe how this knowledge is put to use in constructing and comprehending novel utterances. A basic tenet of RM, following PA, is that knowledge of language is segregated into phonological, syntactic, and semantic/conceptual structures, plus interfaces between them that enable sound, morphosyntactic structure, and meaning to be related to each other. Words function as small-scale interface rules, establishing links among pieces of structure in the three domains. Within this outlook, morphology emerges as the grammar of word-size pieces of structure and their constituents, comprising morphosyntax and its interfaces to word phonology, lexical semantics, and phrasal syntax. Canonical morphology effects a straightforward mapping between these components; irregular morphology is predominantly a matter of non-canonical mapping between constituents of morphosyntax and phonology.

As in Construction Morphology, RM encodes rules of grammar as schemas: pieces of linguistic structure containing variables, but otherwise in the same format as words—that is, the grammar is part of the lexicon. Hence there is no principled distinction between the formalisms for words and for rules, aside from the presence or absence of variables—a simplification of the repertoire of theoretical constructs.

Productive schemas serve two functions. In their generative function, they are used to build novel utterances by ‘clipping’ pieces of structure together, one piece instantiating a variable in the other through unification. In their relational function, they serve to motivate relations among items stored in the lexicon. Morphological patterns that are not productive can also be described in terms of schemas. Such schemas are formally parallel to those for productive patterns, except that they have only the relational function.
We have argued that there is no principled distinction between these two sorts of schemas, aside from a diacritic on their variables expressing their degree of openness—again a simplification of the theoretical apparatus. Because all schemas participate in the relational function, we conclude that morphological theory should focus on expressing lexical relations at least as much as on the online construction of novel forms. Such a focus reveals the lexicon to be richly textured, not the unstructured list that many linguists have made it out to be.

Finally, we have addressed how lexical relations are to be expressed. Beginning with the well-known mechanism of inheritance, we have shown that an impoverished-entry theory of inheritance is inappropriate for a variety of reasons, and that a full-entry theory is more satisfactory. However, the full-entry theory itself is not general enough: it is unable to deal with lexical relations that are nondirectional, multidirectional, or symmetrical. We have proposed to express motivation (a generalized form of inheritance) in terms of shared structure, and we have introduced a notation that enables us to flexibly pinpoint the regions of commonality between pairs of words, between words and schemas, and between pairs of schemas.

The challenge for this theory is to apply it to the full range of issues investigated in current morphological theorizing. Many representative phenomena are addressed in Jackendoff and Audring (forthcoming), for instance zero morphology, stem allomorphy, blends, truncations, and inflectional classes. However, even in the present chapter, we have been able to touch on some telling phenomena that have not been prominent in the literature, using a minimum of theoretical machinery.
More broadly, we believe that the way the issues have been couched here, emphasizing the theory’s consistency with the larger framework of the Parallel Architecture, can help build bridges between linguistic theory, psycholinguistics, and cognitive neuroscience.


At a still broader level, morphological theory as developed here can be construed as a theory of a richly textured domain of human memory, namely memory for words and their relations. It is of interest to ask whether other domains of memory, such as knowledge of physical objects, knowledge of music, and knowledge of social relations, display a similar profile. We conjecture that parallels are to be found (Jackendoff and Audring forthcoming); following up on them is a challenge for the future that can help elucidate the deep connection of language to the rest of the mind.

A We are delighted to thank Geert Booij and Jay Keyser for much discussion of this material and Peter Culicover for voluminous comments on an earlier version. We are especially grateful to Peter Hagoort and the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands for RJ’s opportunity to spend time as a Visiting Fellow in the winters of , , and , during which time we were able to develop much of the work reported here. RJ also thanks participants in the Tufts Linguistic Research Seminar (Ari Goldberg, Neil Cohn, Eva Wittenberg, Anita Peti-Stantić, Naomi Caselli, Anastasia Smirnova, Rabia Ergin, the late Irit Meir, Rob Truswell, and Katya Pertsova), who endured numerous barely formed versions of this material over a period of several years. JA is grateful to the Dutch national research organization NWO for a Veni grant, #--.


  ......................................................................................................................

  ......................................................................................................................

 

. Introduction

..................................................................................................................................

Canonical Typology is a methodological framework for conducting typological research in which descriptive categories and theoretical concepts are deconstructed into fine-grained parameters of typological variation. Like other multivariate approaches to cross-linguistic research (Haspelmath ; Hyman b; Bickel , ), Canonical Typology utilizes observations on a large number of empirically motivated variables to gauge the similarities and differences between linguistic structures (within or across languages).1 The method is distinguished from other contemporary approaches to typology by its appeal to the notion of the canon, a logically motivated archetype from which attested and unattested patterns are calibrated.

While not a theory of morphology, Canonical Typology was first developed as a means to systematically analyse morphosyntactic and morphological phenomena, such as agreement and inflection (Corbett a, b, , , , ). It has proven to be especially adept as a tool to evaluate linguistic constructs and to elucidate the variation encountered in different morphological systems. More recently, it has been invoked as a means to characterize morphology that deviates from default relationships theorized in Paradigm Function Morphology (Stump ; Stump and Finkel ) and as a method for analysing a range of complex syntactico-semantic phenomena (see papers in Brown, Chumakina, and Corbett ).

In theory, the canonical approach to typology is compatible with any model of grammar in which variation can be decomposed into fine-grained variables. In practice, however, (inflectional) morphology is assumed to be an autonomous component of grammar by most proponents of Canonical Typology.2 Consequently, the morphological component of grammar is usually understood to be inferential-realizational

1

See Forker () for an explicit comparison of multivariate typology and Canonical Typology. The autonomy of morphology from syntax is well supported in the work of the major proponents of Canonical Typology, since much of this work demonstrates that inflectional morphology is characterized by its own rules and principles. It is less obvious if the same scholars think derivational morphology should be considered to be autonomous from the lexicon. 2

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi



 

(according to Stump’s  typology) and morphological processes are concerned with morphological exponents, not morphemes. The role of features is central to the conception of the framework and these are commonly distinguished with respect to whether they participate in morphosyntactic processes (e.g. agreement), whether they are morphosemantic features that can only indicate semantic content (e.g. tense) or whether they are strictly morphology-internal (e.g. inflectional class).3 While much has been written on Canonical Typology, explicit descriptions of its methodology or the fundamental stages of its application are often open to interpretation. Here I deconstruct the framework by providing a stepwise introduction to the principles that can be used to identify the canon and the possible types associated with a particular linguistic domain. The first aim of the chapter is to review what it means to be ‘canonical’ and to provide guidance on how ‘canonical values’ can be identified (§.). The second aim is to provide an overview of the insights the method has already provided for inflectional and derivational morphology (§.). Having looked at how Canonical Typology has been employed to analyse what it means to be a ‘possible word’, we will turn to where it might be headed, and how it might develop as more morphological phenomena are investigated using the framework (§.).

. Establishing the canon

The types of categories and concepts linguists use to describe and compare languages frequently share the same label, but seldom share an identical set of characteristics. This variation can result from differences in descriptive tradition (e.g. gender vs. noun class), theoretical approach (e.g. agreement in Minimalism vs. HPSG vs. LFG), or genuine linguistic diversity. Canonical Typology attempts to alleviate issues of categorization by taking a bird’s-eye view of a given linguistic phenomenon in all its dimensions. In doing so it provides a mechanism for demonstrating (i) how linguistic structures can differ from one another along a range of parameters; and (ii) how different (ontological or theoretical) categorizations of a phenomenon may map to these parameters. A central concept in this approach to typology is the canon, a reference point from which to compare linguistic objects and descriptions. In Canonical Typology, the canon associated with a particular linguistic phenomenon (such as agreement, suppletion, or gender) is a logically motivated archetype whose properties are determined through the application of the canonical method. Once the properties of the canon have been established, it is used as a reference point for describing real and hypothesized linguistic structures associated with that phenomenon. For example, a particular instance of agreement can be described as being canonical (or non-canonical) along a series of parameters. This allows us to assess how closely it resembles canonical agreement (Corbett b, ) and to compare instances of agreement in a principled way. The method pulls apart tangled problems to allow differences to be seen more clearly. It ensures linguists are talking about the same thing when comparing languages or structures, and avoids problems associated with terminology by placing cross-linguistic variation at the centre of analysis and description. So when we speak of canonical agreement, canonical suppletion, or canonical gender, we are referring to a theoretical construct, defined by a set of logically compatible characteristics, that belongs to a specific notional domain. Since the canon is only a construct, a real-life exemplar of it may not exist.

The process of identifying the canon for a given domain—and crucially which properties define the canon—can be characterized by four stages, each of which will be described in turn.

3. See Corbett and Baerman (), Corbett (, ), and Kibort and Corbett () for discussion of different types of features.

.. Identification of the domain

Canonical Typology is used to map variation encountered within a specific notional domain, typically one that is already established in linguistic description. A range of different morphological and morphosyntactic domains has been investigated from a canonical typological perspective so far, most notably agreement (Comrie ; Corbett b, ; Evans b; Polinsky ; Suthar ; Cysouw ; Cormier, Schembri, and Woll ; Luraghi ; Palancar ; Costello ), morphosyntactic features and their values (Corbett , ; Van de Velde ; Corbett and Fedden ; Round and Corbett ), and a wide range of topics related to inflectional and derivational morphology (see §. for references and discussion).⁴ While seldom explicitly referred to in applications of the methodology, the formal identification of the domain of investigation is an essential part of conducting Canonical Typology. A domain is always at least implicit in applications of the framework because it delimits what types of linguistic structures should be considered as possible examples of a phenomenon. Bond (: –) argues that suitable domains for investigation can be explicitly defined by characterizing a broad and minimal relation between two or more linguistic elements. This process provides a base-definition or base, which sets out the domain of typological investigation, and therefore limits what properties must be associated with a structure for it to be considered part of the study. By way of example,

4. Since Canonical Typology is a specific framework for carrying out cross-linguistic research, it has been employed to discuss a wide range of topics. Some of these are related to properties of the verb or predicate, including expressive affixes (Fortin ), finiteness (Nikolaeva ), negation (Bond ), and reality status (Michael ). It has also been used for topics related to reference tracking and argument structure such as quotation (Evans ), reflexives (Everaert ), passives (Siewierska and Bakker ), direct/inverse systems (Jacques and Antonov ), as well as the status of categories and relations in (morpho)syntax and semantics such as mixed categories (Nikolaeva and Spencer ), possession and modification (Nikolaeva and Spencer ), the argument/adjunct distinction (Forker ), and gender and classifiers (Corbett and Fedden ; Fedden and Corbett ; Corbett, Fedden, and Finkel ; Audring ). The relationship between Canonical Typology and ontologies of description is discussed in Farrar ().


the following base-definition for the agreement domain sets out the fundamental properties that characterize all examples of agreement (Bond : ; see also Corbett  and Steele ):

Agreement Domain: For the elements X and Y to be in an agreement relation, there must be a systematic covariance between a semantic or formal property of one element and a formal property of another. The agreement domain contains any relationships that exhibit these properties.

An explicit characterization of the domain ensures that sets of linguistic structures like those in () can be included in the typology, since varying the number value of the subject element affects the formal properties of a second element, the verb. It is only by looking at two (or more) structures in parallel that this relation can be revealed.

()  a. The dog barks every night.
    b. The dogs bark every night.

Equally, the base-definition for the agreement domain ensures that structures like those in () are not included in the domain of investigation, and thus do not influence the application of the method. In this pair of examples, varying the semantic properties of the subject does not affect the form of any other element in the clause.

()  a. The dog barked every night.
    b. The dogs barked every night.

The base is designed to capture only essential information about the domain investigated; yet this sometimes means making a principled decision about how to limit an investigation before a full understanding of the phenomenon is reached. Consider the following base-definition for the inflection domain:

Inflection Domain: For a form to be an inflected one, it must bear a paradigmatic relationship to another form with the same lexico-semantic content. The inflection domain contains any sets of forms that exhibit this type of relationship.

The need for this formulation should be clear. It ensures that our typology is limited to a clearly defined domain of morphology, but at a slight cost. It guarantees that the relationship between singular dog and plural dogs belongs to the inflection domain (even though dog itself does not bear any inflectional exponent). It equally ensures that the relationship between dog and dogged (which are not forms of the same lexeme) does not. This is achieved by explicitly stating that inflectional morphology concerns intra-lexemic not inter-lexemic relationships, and thus relies on an independent process for identifying lexemes. It also relies on an explicit notion of a paradigmatic relationship (see §..). Once a domain has been identified, and a base-definition mooted, it is possible to begin mapping out parameters of variation for that domain.
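A base-definition of this kind can be thought of as an executable membership test over pairs of structures. The following sketch illustrates the agreement base-definition for examples () and (); the dictionary representation of clauses and the helper name `in_agreement_domain` are illustrative assumptions, not part of the framework:

```python
# Sketch: the agreement base-definition as a covariance test.
# A pair of clauses falls in the agreement domain iff varying a
# property of one element (the controller) covaries with the form
# of another element (the target).

def in_agreement_domain(pair, controller, target, prop="number"):
    """True if the controller's property covaries with the target's form."""
    a, b = pair
    property_varies = a[controller][prop] != b[controller][prop]
    form_covaries = a[target]["form"] != b[target]["form"]
    return property_varies and form_covaries

# The dog barks / The dogs bark: verb form covaries with subject number.
present = (
    {"subject": {"number": "sg"}, "verb": {"form": "barks"}},
    {"subject": {"number": "pl"}, "verb": {"form": "bark"}},
)
# The dog barked / The dogs barked: no covariance.
past = (
    {"subject": {"number": "sg"}, "verb": {"form": "barked"}},
    {"subject": {"number": "pl"}, "verb": {"form": "barked"}},
)

print(in_agreement_domain(present, "subject", "verb"))  # True
print(in_agreement_domain(past, "subject", "verb"))     # False
```

Note that only pairs of structures, never a single clause, can pass the test, mirroring the point that the relation is only revealed by comparing structures in parallel.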


.. Identification of parameters of variation

If two linguistic structures (such as word-forms or sets of word-forms) in a given domain (such as the inflection domain or the agreement domain) are similar to each other, they will be identical along some (set of) parameter(s), yet differ in one (or more) other ways. In Canonical Typology (and typology in general), these parameters of variation provide the empirical base for establishing which variables are important when mapping out possible differences between languages and structures. Before we identify what these parameters are for inflection (introduced in §..), first consider the paradigm—a construct central to the conception of inflectional morphology. In its most basic conception, a paradigm defines the relationships between a set of forms that are manifestations of the same lexeme, but differ according to their featural specification. Consequently, in the simplest possible paradigm, a series of two forms vary according to their specification for mutually exclusive values of a single grammatical feature. For instance, this is the case for the majority of nouns in English, which exhibit a distinction between singular and plural values of the number feature, as illustrated with English cat in (). It is clear, however, that this distinction does not extend to all nouns in English. Abstract nouns like health do not have plural forms (e.g. *healths), and such lexical items do not meet the base-definition of the inflection domain in §...

()

A paradigm with a single feature

    ‘cat’
          sg     pl
          cat    cats

Paradigms become more interesting when they are constructed from two or more orthogonal features. For instance, the paradigm in () is constructed from intersecting values for two agreement features, namely number and gender, giving rise to four distinct forms for the Spanish adjective alto ‘tall’. ()

A paradigm with forms distinguished by two intersecting features

    Spanish alto ‘tall’
          sg     pl
    m     alto   altos
    f     alta   altas

Each cell of the paradigm is populated by a unique form, providing strong evidence for distinguishing four feature values for this particular class of adjectives in Spanish. However, not all adjectives show this pattern. Many adjectives agree in number, but not in gender, demonstrating that this distinction is only relevant for some lexical items, as shown in () for the Spanish adjective inteligente ‘intelligent’.


 ()

  A paradigm with forms distinguished by a single feature Spanish inteligente ‘intelligent’ 



inteligente inteligentes In Spanish, values for gender and number intersect to determine unique forms for some adjectives, but not for others. In other languages, this sort of pattern extends across every member of a distributional class, such that the intersection of values is only relevant for determining a part of the paradigm. For instance, in Macedonian (Indo-European, Slavonic), typical adjectives can agree in number, gender, and definiteness (Baerman, Brown, and Corbett ), giving rise to the paradigm structure in (). While all of the forms in the paradigm are unique, the gender distinction is only relevant in the singular, leading to  within the plural. ()

A paradigm with three intersecting features and syncretic forms

    Macedonian ubav ‘beautiful’
          indefinite          definite
          sg      pl          sg        pl
    m     ubav                ubaviot
    f     ubava   ubavi       ubavata   ubavite
    n     ubavo               ubavoto
    (each plural cell spans all three genders)

While typical adjectives in Macedonian have forms for three gender values in the singular, not all behave in this way. The adjective siromav ‘poor’ lacks feminine and neuter singular forms, resulting in the paradigm in (). ()

A paradigm with three intersecting features and defective cells

    Macedonian siromav ‘poor’
          indefinite             definite
          sg        pl           sg           pl
    m     siromav   siromasi     siromaviot   siromasite
    f     —                      —
    n     —                      —
    (each plural cell spans all three genders; ‘—’ marks greyed-out cells)

Here several cells of the paradigm are greyed out, because no form is possible. This is known as defectiveness: the adjective is restricted to masculine singular or plural contexts, such that it is possible to say siromaviot čovek ‘the poor person’ (masculine) and siromasite lugje ‘the poor people’, but not *siromavata žena ‘the poor woman’ (feminine) or *siromavato dete ‘the poor child’ (neuter).

The paradigms in () to () vary in their structure in terms of: whether these featural distinctions apply to all lexical items of a distributional class or only to a subset (e.g. cats vs. *healths); how many features are involved in the construction of the paradigm (e.g. two for Spanish adjectives vs. three for Macedonian adjectives); how many values each feature has (e.g. two gender values in Spanish vs. three in Macedonian); whether all the cells determined by the intersecting features have a unique form (e.g. gender syncretism in Macedonian plural adjectives vs. gender-specific forms in the singular); and whether the features intersect to create forms at all (e.g. defective feminine and neuter cells in Macedonian singular adjectives vs. gender syncretism in the plural). With a base-definition established, and parameters of variation identified, it is possible to formulate this variation in terms of sets of values, such that every example under consideration (that is, every paradigm, form, syntactic configuration, or phonological representation that has the characteristics of the base-definition) must be characterized by one of those values. Some possible binary values characterizing variation in paradigm structure are given in ().

()

A subset of possible binary values characterizing variation in paradigm structure

                  VALUE 1                                  VALUE 2
    FEATURE       Only one feature defines intersections   More than one feature defines intersections
    UNIQUENESS    Unique forms at feature intersections    Shared forms across feature intersections
    FORMS         All intersections have forms             Subset of intersections without forms

If we consider the adjectival forms discussed above in light of these different variables, we instantly see that there is considerable variation across the patterns attested even within a small data set, as demonstrated by the matrix in (). ()

Matrix of three variables with binary values for a set of adjectives

                               FEATURE   UNIQUENESS   FORMS   PATTERN
    Spanish ‘tall’             V2        V1           V1      A
    Spanish ‘intelligent’      V1        V1           V1      B
    Macedonian ‘beautiful’     V2        V2           V1      C
    Macedonian ‘poor’          V2        V2           V2      D


Four different patterns are attested in this small dataset, demonstrating that a fine-grained approach to determining variables is fruitful for unveiling variation.
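The logic of such a matrix is easy to operationalize: each lexeme is scored as a vector of variable values, and each distinct vector receives a pattern label. The following is a minimal sketch; the V1/V2 assignments follow the prose discussion above and are illustrative only:

```python
# Each lexeme is scored for three binary variables; distinct value
# vectors correspond to distinct attested patterns.

VARIABLES = ("FEATURE", "UNIQUENESS", "FORMS")

lexemes = {
    "Spanish 'tall'":         ("V2", "V1", "V1"),
    "Spanish 'intelligent'":  ("V1", "V1", "V1"),
    "Macedonian 'beautiful'": ("V2", "V2", "V1"),
    "Macedonian 'poor'":      ("V2", "V2", "V2"),
}

# Label each distinct value vector with a pattern letter (A, B, C, ...).
patterns = {}
for name, vector in lexemes.items():
    patterns.setdefault(vector, chr(ord("A") + len(patterns)))

for name, vector in lexemes.items():
    print(f"{name}: pattern {patterns[vector]}")

print(len(patterns), "distinct patterns")  # 4 distinct patterns
```

Because patterns are derived from the data rather than stipulated, adding a new lexeme with an unseen vector automatically yields a new pattern label.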

.. Identification of canonical values

Once parameters of variation (i.e. variables) have been established, the values for each variable are assigned to an ordered scale that distinguishes between ‘canonical’ and ‘non-canonical’ behaviour. In existing applications of Canonical Typology, these scales, known as ‘criteria’, are typically binary, such that one value for each variable (either VALUE 1 or VALUE 2) is canonical in nature, while the other is non-canonical.⁵ For instance, we might assume that all of the VALUE 1 properties in () are canonical properties of inflection. But how do we decide? Why would a paradigm defined by only one feature intersection be more canonical than one defined by multiple features? The process of determining which values are canonical is central to the application of the methodology, yet it is also the most opaque. Corbett (: ) describes instances with canonical values as “the best, the clearest, the indisputable ones”, making reference to the fact that they can be logically determined. To examine this process further, we will first look at the criteria for canonical inflection proposed in Corbett (, a, , ) and summarized in Stump (: –). In doing so, I review some fundamental principles that can guide the determination of canonical values, with the caveat that these are not necessarily applied in a consistent way in existing studies.

Canonical Inflection is determined by a series of properties that identify canonical paradigmatic relationships. Here, we will consider three criteria identified at various points by Corbett. The concept of exhaustivity is introduced in Corbett (: ) and then explicitly named as such in Corbett (a: ), following Spencer (: ). Completeness is discussed in Corbett (: ). Unambiguity (labelled ‘unambiguousness’ by Stump : ) is discussed as ‘distinctiveness’ in Corbett (: ) and later publications (Corbett a, ).
Based on the criteria discussed in these works, the following set of clines explicitly states the canonical property for each criterion on the left, while the non-canonical property is on the right. A canonical inflectional paradigm is:

Criterion 1: Exhaustive > Non-exhaustive
Criterion 2: Complete > Incomplete
Criterion 3: Unambiguous > Ambiguous

The first criterion (C1) states that a canonical inflectional paradigm is exhaustive (in the sense of Spencer : ). Corbett (: ) claims that in a canonical system of inflection, every logically compatible combination of values of the morphosyntactic features relevant for an item within a given class defines a cell in a matrix. For instance, for Spanish adjectives, the relevant features for defining a paradigm are number (singular, plural) and gender (masculine, feminine), leading to a 2 × 2 matrix with a maximal set of four cells (as indicated by numerals in ()). A paradigm with the maximal set of cells is exhaustive. The second criterion (C2) states that in a canonical paradigm there is a form associated with every defined cell (as indicated in () by check marks in each cell). A paradigm which has a form associated with each of its cells is complete. The third criterion (C3) concerns the content of the cells. In a canonical paradigm, each cell contains a unique form (as indicated in () by letters in each cell), from which it is possible to determine the relevant feature-value combination by knowing the form alone. A paradigm with a distinct form in each of its cells is unambiguous. The difference in the concerns of each of these criteria is schematized by the 2 × 2 matrices in () below.

5. In earlier work (e.g. Corbett ), proposed criteria also represent pseudo-continuous variables; however, this is more likely to have been a way of condensing a set of related variables into a single criterion, rather than an intentional statement of how the method can be applied (especially with respect to mathematical tractability).

()

An exhaustive, complete paradigm with unambiguous forms

    EXHAUSTIVITY        COMPLETENESS        UNAMBIGUITY
         sg   pl             sg   pl             sg   pl
    m    1    3         m    ✓    ✓         m    A    C
    f    2    4         f    ✓    ✓         f    B    D

Spanish adjectives that have distinct forms within each of the four cells defined by the matrix (e.g. alto ‘tall’) necessarily have canonical exhaustive, complete, unambiguous paradigms. However, exhaustive, complete paradigms do not necessarily have distinct forms in every cell. For instance, the matrices in () represent the properties of Spanish adjectives with only two forms (e.g. inteligente ‘intelligent’). Each matrix is identical to its counterpart in (), except the unambiguity matrix in which there are only two distinct forms (A, B). ()

An exhaustive, complete paradigm with ambiguous forms

    EXHAUSTIVITY        COMPLETENESS        UNAMBIGUITY
         sg   pl             sg   pl             sg   pl
    m    1    3         m    ✓    ✓         m    A    B
    f    2    4         f    ✓    ✓         f    A    B

An obvious question that arises under this analysis is why the paradigm represented by the matrices in () consists of four possible content cells, and not two (distinguished by number values). This is because the properties of the paradigm with respect to C2 (completeness) and C3 (unambiguity) are only interpretable with respect to the number of cells defined for the paradigm by C1 (exhaustivity). Exhaustivity can be determined only by comparison of paradigms of different lexical items with the same syntactic distribution. Therefore, exhaustivity is a property of the paradigms of a distributional class, not of individual lexemes. Completeness and unambiguity are determined with respect to the defined shape of the paradigm of a distributional class. Unlike exhaustivity, they are properties of the paradigms of individual lexemes. At first sight the relation between the first criterion (C1) and the latter two (C2 and C3) appears to be evidence against the Precept of Independence, a basic principle of Canonical Typology which states that each criterion is logically independent (see Brown and Chumakina : – for discussion). However, this apparent dependence can be mitigated by recognizing that these criteria apply to different linguistic objects. The three criteria presented in this section can be restated in the following way:

A canonical paradigm with respect to a distributional class is:
Criterion 1: Exhaustive > Non-exhaustive

A canonical paradigm with respect to a lexeme is:
Criterion 2: Complete > Incomplete
Criterion 3: Unambiguous > Ambiguous

The exhaustivity criterion for a canonical paradigm is instantiated on the basis of the observation that in some languages the paradigms of particular lexical items do not exhibit the same exhaustivity with respect to possible feature-value combinations as might logically be expected. For instance, in Russian, the form of adjectives is determined by number (singular, plural) and gender (masculine, feminine, neuter). This gives rise to a 2 × 3 matrix with six logically possible cells, as illustrated in ().

()

A hypothetical canonical paradigm with six logically possible cells

         sg   pl
    m    1    4
    f    2    5
    n    3    6
Baerman, Brown, and Corbett (: ) note that although the combination of these feature values is logically possible, in practice gender is never distinguished in the plural part of the paradigm in Russian, with the paradigm shape in () being indicative of all (short-form) adjectives. ()

A paradigm with six logically possible cells but only four forms

    Russian interesen ‘interesting’
         sg          pl
    m    interesen
    f    interesna   interesny
    n    interesno
    (the plural cell spans all three genders)

This paradigm is claimed to be non-canonical with respect to Criterion 1, since not all logically compatible combinations of feature values define a cell, as shown by the dotted lines in (). While dotted lines indicate hypothetical cells determined by the logical intersection of feature values, shading indicates the extent of Cell 4 in this paradigm.

()
A non-exhaustive, complete paradigm with unambiguous forms

    EXHAUSTIVITY        COMPLETENESS        UNAMBIGUITY
         sg   pl             sg   pl             sg   pl
    m    1              m    ✓              m    A
    f    2    4         f    ✓    ✓         f    B    D
    n    3              n    ✓              n    C
    (the plural column is a single cell spanning the three genders)

The crucial observation for assessing whether a paradigm is exhaustive or non-exhaustive is not whether there is a distinct form in every logically possible cell of the paradigm, but rather whether there is evidence for this combination of feature values being distinguished somewhere in the language. The paradigm for Russian adjectives is not exhaustive because we find no evidence anywhere in the language for a distinction in gender for plural forms, either in terms of the morphological form of Russian plural adjectives or in agreement with those forms. Despite the clarity it may bring, this exposition does not make clear why exhaustive paradigms are canonical while non-exhaustive ones deviate from the canon. To establish which value is canonical, it is necessary to invoke general principles of systemic organization (known as axioms) that provide the necessary epistemic structure to identify canonicity. Based on previous work within Canonical Typology (in morphology, morphosyntax, and beyond), these can be characterized as: (i) logical/mathematical principles; (ii) functional principles. The principle that determines exhaustivity of a paradigm is strictly mathematical: the number of cells n in an exhaustive paradigm is the product of the numbers of values of the features defining each axis of the paradigm. Once the number of features (and values) defining a paradigm has been established, the n-value for exhaustivity is entirely predictable. We can say, then, that canonical inflectional paradigms are exhaustive because the number of cells in the paradigm is mathematically computable without knowing the types of features or values involved. Conversely, the number of cells in a non-exhaustive paradigm is not strictly predictable based on knowing this set of feature values alone.⁶ Therefore, having established two possible values for a parameter of variation (of the type discussed in §..), one way to identify the canonical value is to establish which one is mathematically predictable. This can be summarized by the following axiom:

Axiom 1: Canonical values are mathematically predictable.
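The computation behind Axiom 1 can be sketched directly: the cell count of an exhaustive paradigm is the product of the sizes of the feature inventories. The feature inventories below are those discussed in the text; the helper name is illustrative:

```python
from math import prod

# Axiom 1 in computational form: the size of an exhaustive paradigm
# is predictable as the product of the numbers of values of the
# features defining its axes.

def predicted_cells(features):
    """n = |values(F1)| * |values(F2)| * ... for a feature inventory."""
    return prod(len(values) for values in features.values())

spanish_adj = {"number": ("sg", "pl"), "gender": ("m", "f")}
russian_adj = {"number": ("sg", "pl"), "gender": ("m", "f", "n")}

print(predicted_cells(spanish_adj))  # 4: matches the attested paradigm
print(predicted_cells(russian_adj))  # 6: only four cells are attested, so
                                     # the Russian paradigm is non-exhaustive
```

The prediction requires no knowledge of which features are involved, only how many values each has, which is exactly the sense in which the canonical value is mathematically predictable.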

Having determined whether the paradigm matrix for a distributional class is exhaustive, it is possible to determine if a given lexeme has a complete paradigm. Completeness of a paradigm can only be assessed with respect to the maximum number of cells that are defined for a distributional class. It is logically independent of exhaustivity, since both exhaustive and non-exhaustive paradigms can be considered complete or incomplete.⁷ Non-exhaustive paradigms, like those seen for all Russian adjectives and most Macedonian adjectives (e.g. ubav ‘beautiful’), are complete in that a set of forms is available to realize the possible cells defined for that distributional class. However, those with defective cells (e.g. siromav ‘poor’) are incomplete, since some of the cells that are defined for that lexical class cannot be expressed.

6. In this model, we start by assuming that all features/values are of equal importance in defining the shape of the paradigm. This gives a baseline against which we measure the different ways in which features (and their values) actually interact, and their recurrent inter-dependencies (see Greenberg ; Corbett ).

()

A non-exhaustive, incomplete paradigm with unambiguous forms

    EXHAUSTIVITY        COMPLETENESS        UNAMBIGUITY
         sg   pl             sg   pl             sg   pl
    m    1              m    ✓    ✓         m    A    B
    f    2    4         f                   f
    n    3              n                   n
    (the plural column is a single cell spanning the three genders)

Canonicity of a paradigm with respect to completeness (C2) is defined in terms of a correspondence relation between the cells in the paradigm for which it is possible to express a form (n = 2 in (), as shown by the completeness matrix) and the actual number of cells defined for a paradigm (n = 4 in (), as illustrated by the exhaustivity matrix), rather than the predictable number of possible cells (which would be n = 6). For a paradigm to be canonical, there must be a one-to-one (i.e. biunique) correspondence between the members of the set of defined cells (labelled by the relevant values at feature intersections, e.g. masculine singular) and members of the set of cells that are populated by forms. We can represent this by stating correspondence relations between a combination of values that define a given cell (e.g. masculine singular) and a form that expresses those values. A canonical correspondence relation is represented for the cells in () in () below. Here, the properties of cells are given on the left and the availability of a form to realize that cell is indicated on the right. Since this is only an abstract representation of forms, the availability of a relevant form is indicated by a label specifying the relevant cell’s set of values and a superscript. The arrows indicate that there is a one-to-one match between defined cells and available forms.⁸

7. For instance, an exhaustive complete paradigm is exemplified by all Spanish adjectives even though some adjectives show syncretism across the gender distinction. Exhaustive incomplete paradigms for adjectives are seldom if ever attested (Matthew Baerman and Anna Thornton, personal communication). In a hypothetical example of this type, the distributional class (in this case, adjectives) would have a full complement of logically possible cells defined by the intersection of feature values relevant to that class, but the paradigm of one or more lexical items would contain a defective cell that could not be realized by a form. The rarity (or perhaps non-existence) of data of this kind demonstrates that not all logically possible types are equally likely. However, see Baerman, Brown, and Corbett () for examples from nominal paradigms.
8. See Nichols () for discussion of biuniqueness in relation to Canonical Typology.

()
Canonical correspondence relation between a defined set of cells and available forms

    CELLS                     FORMS
    masculine singular   ↔    masculine singularFORM
    feminine singular    ↔    feminine singularFORM
    neuter singular      ↔    neuter singularFORM
    plural               ↔    pluralFORM

When the number of forms available is less than the number of cells possible for a distributional class, there cannot be a biunique relation between cells and available forms, as represented for the cells in () in () below, where two of the cells are not associated with forms. ()

Non-canonical defective correspondence relation between a defined set of cells and available forms

    CELLS                     FORMS
    masculine singular   ↔    masculine singularFORM
    feminine singular
    neuter singular
    plural               ↔    pluralFORM

The motivation for specifying the relations between the sets in () as canonical and those in () as non-canonical is summarized by the following axiom:

Axiom 2: Canonical relations are mathematically biunique.

Biunique relations are descriptively simple compared to one-to-many relations and it is implicit that descriptive simplicity underlies the notion of the ‘clearest instances’ in Corbett’s descriptions of the framework. One possible way of interpreting this is in terms of Kolmogorov complexity, which measures descriptive complexity in terms of computational resources needed to specify the object.9 Since axioms are general principles associated with a particular model, they can be used to determine the canonical value of an infinite number of criteria. For instance, Axiom  can also be used to account for the canonicity of unambiguous paradigms. These can be understood in terms of a biuniqueness relation (i.e. a one-to-one match) between the cells for which a form is available (e.g. masculine singularFORM) and actual realizations populating those cells. For instance, in (), there is a one-to-one match between the two cells with available forms (masculine singularFORM, pluralFORM) and the realizations (A, B) such that each realization corresponds to one available form only. In paradigms with syncretic realizations, there is a many-to-one relationship between the availability of a form to

9 The importance of measures of descriptive simplicity in morphology is apparent from recent work in the domain of morphological complexity, where measures of descriptive complexity are taken as a starting point for understanding and measuring morphological complexity. See papers in Baerman, Brown, and Corbett (a) for further details.

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi



 

populate a cell and individual realizations, as illustrated in (). Relations of this kind are central to Stump’s (a) formulation of content and form paradigms in Paradigm Function Morphology.

The two axioms discussed here are mathematical in nature, reflecting the systemic properties of language. However, some canonical values might be best understood in light of the functional nature of language. In principle, it may be hard to draw the line between the two, since functional canonical values can also be expressed in mathematical terms. For instance, the biuniqueness relation in Axiom  defines a canonical system as one in which each form in a paradigm is used for a unique combination of feature values, leading to a more iconic system than one in which there is syncretism. For discussion of possible principles guiding identification of canonical values, see Bond ().
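The biuniqueness condition imposed by this axiom is mechanically checkable. The following Python sketch (the function name, cell labels, and paradigm data are invented for illustration) classifies a mapping from cells to available forms as biunique, defective (cells lacking forms), or syncretic (several cells sharing one form):

```python
from collections import defaultdict

def check_biuniqueness(cells, realizations):
    """Classify a cell -> form mapping.

    cells: iterable of cell labels (feature-value combinations)
    realizations: dict from a cell to its available form; a cell absent
    from the dict has no form available (defectiveness).
    """
    missing = [c for c in cells if c not in realizations]
    by_form = defaultdict(list)
    for cell, form in realizations.items():
        by_form[form].append(cell)
    # syncretism: one form serving more than one cell (many-to-one)
    syncretic = {f: cs for f, cs in by_form.items() if len(cs) > 1}
    return {
        "defective_cells": missing,
        "syncretic_forms": syncretic,
        "biunique": not missing and not syncretic,
    }

cells = ["m.sg", "f.sg", "n.sg", "pl"]

# One distinct form per cell: the canonical, biunique case
full = {"m.sg": "A", "f.sg": "B", "n.sg": "C", "pl": "D"}
print(check_biuniqueness(cells, full)["biunique"])            # True

# Only two forms available for four cells: non-canonical, defective
gapped = {"m.sg": "A", "pl": "B"}
print(check_biuniqueness(cells, gapped)["defective_cells"])   # ['f.sg', 'n.sg']
```

The same helper flags syncretism: mapping two cells to one form yields a non-empty `syncretic_forms` entry, so `biunique` comes out false for both kinds of deviation.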

.. Extrapolation of the sample space

In Canonical Typology, extrapolations are made from observed data (i.e. patterns of variation between structures or languages) to establish a sample space of possible outcomes based on hypothetical combinations of these parameters.10 Types within the sample space are typically defined in terms of the presence vs. absence of a canonical trait for a set of variables. The canonical traits associated with a particular domain intersect on a single hypothetical archetype known as the canon. This provides a theoretical reference point from which the sample population can be calibrated.

A distinctive property of the canonical method is the way in which the values under consideration intersect to define a theoretical set of possible types. This sample space may be visualized in various ways so as to reveal how a given population is distributed. The earliest visual representation proposed for what Corbett (: ) calls the ‘theoretical space of possibilities’ is a star formation in which the various criteria defined for a domain are represented by vectors that converge on a common origin, as seen in Figure .. At one end of each vector is the canonical value of the criterion (the centre of the star), while the non-canonical variable is at the other. Since canonical properties are said to point in the same direction, they must be logically compatible, whereas non-canonical properties need not be so. The relative position and angle of the vectors is uninformative in this representation, although there may be good reasons to group axes representing criteria together for the sake of methodological clarity or to reveal patterns in data sets. The circles on the vectors in Figure . each represent the (relative) canonicity of a particular example along a criterion, with circles close to the centre indicating canonical properties.
While in practice most criteria are binary, they are occasionally represented as a cline of three or more values, suggesting that there can be a gradual decrease in canonicity as you move away from the canon. This is depicted for Criterion  in Figure ., where the value is halfway between the canonical and non-canonical extremes.11 An alternative

10 The term ‘sample space’ is borrowed from probability theory, in which it is used to refer to the set of all possible outcomes or results of an experiment.
11 An example of a criterion like this is Corbett’s (: ) Criterion , which sets out a cline for the canonical expression of agreement on a target: inflection marking (affix) > clitic > free word. However, given that the points in this cline are analytical categories, not interval data, it would probably be more prudent to deconstruct these into two or more separate binary criteria.

 .. Star representing a canonical domain

 .. Radar chart representing a canonical domain

graphical method for representing this space is a radar chart (suggested by Anna Thornton, personal communication) in which the variables are represented on radial axes starting at a common origin. The visualization in Figure . differs slightly from the representation in Figure . in that the vectors are evenly spaced and connected to adjacent vectors at regular intervals by lateral vectors. Values for each criterion are represented by a point on a cline, with canonical values at the centre. In Figure ., two polygons representing two different hypothetical structures are superimposed on the same radar chart. The octagon indicated with the dashed line represents a canonical instance of the phenomenon represented, for which the value for each of the eight criteria is the canonical one. The superimposed irregular polygon represents a structure that is canonical along some criterial scales, but non-canonical along others. Note that C in this figure is not a binary scale, but is composed of three ordered values, demonstrating that this visualization technique is best suited to representing ordered scales with values at regular intervals. On the radar chart, the larger polygons indicate less canonical instances of a phenomenon, although the area of the polygon is partly determined by the order of the criteria on the radar chart and this measure should therefore be interpreted with caution.

Perhaps the most instructive representation of the ways in which canonical and non-canonical values from different variables can intersect is provided in the form of a Boolean lattice. A simple lattice built from two intersecting criteria is given in Figure .. This lattice represents all possible combinations of values for two intersecting binary variables. Examples of Type  (at the top of the lattice) have the canonical values defined for C and C, while examples of Type  do not have the canonical values for either of these variables (represented by the empty braces).
Examples of Type  have the canonical property defined by C, but not C, while examples of Type  have the inverse distribution of canonical traits. Considering the first two criteria for inflectional paradigms discussed above, Type  paradigms would be exhaustive and complete (all Spanish adjectives). Type  paradigms would be exhaustive, but incomplete (rare or non-existent with adjectives but evident in nominal paradigms; see Baerman, Brown, and Corbett  for examples). Type  paradigms would be non-exhaustive, but complete (most Macedonian adjectives). Type 

 .. Boolean lattice identifying four possible types based on two criteria
1: c1/c2
2: c1    3: c2
4: {}
Source: Based on Brown and Chumakina (: ).

paradigms would be non-exhaustive and incomplete (defective Macedonian adjectives). Recall that all Macedonian adjectives are non-exhaustive because the number of defined cells for adjectives (n = ) is less than the number of logically possible cells (n = ) based on the intersection of number (singular, plural) and gender (masculine, feminine, neuter).

The four possibilities represented in Figure . are reminiscent of a tetrachoric table familiar from earlier work on relationships between the distribution of typological parameters (see Croft ). These tables were often used to demonstrate that certain combinations of syntactic variables are unattested or at least very rare (as with Type  in Figure .) in order to posit linguistic universals. To a certain extent, applications of Canonical Typology are concerned with identifying which combinations of values are attested and which are not, but the ultimate aim is to better understand the diversity in linguistic systems rather than to classify languages into crude types.

Figure . represents the sixteen different combinatorial possibilities that arise from the intersection of four different criteria. The lattice is essentially an extension of the representation of intersecting values for different criteria in Figure .. This mode of representation is employed in Corbett () to investigate possible combinations of values for four criteria that define the sample space for canonical lexemes (resulting in a lattice like the one in Figure .). In his typology of lexical splits, non-canonical lexemes are internally heterogeneous in terms of morphological structure and externally inconsistent in terms of their syntactic requirements. A wide range of variation is encountered between morphologically and syntactically consistent canonical lexemes and the non-canonical extremes associated with this domain.
In fact, the Boolean lattice of indicative instances of lexical splits is surprisingly complete, with every possibility in the lattice represented by a real-life exemplar (Corbett : ). This result is indicative of the usefulness of the Canonical Method for investigating familiar phenomena:

This is not the result I expected, and it is not the result that a traditional typologist would have wished for. By staking out the typological space, more widely and more accurately than is possible in traditional typology, the canonical approach gives us a picture of what could theoretically be, which proves a useful frame for understanding what we actually find. (Corbett : )
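Staking out the typological space in this way is mechanical: the nodes of such a lattice are simply the subsets of the criteria, ordered by how many canonical values each retains. A short Python sketch (the criterion labels and function name are placeholders):

```python
from itertools import combinations

def boolean_lattice(criteria):
    """Enumerate all 2**n combinations of canonical values for n criteria,
    from the canon (all values canonical) down to the non-canonical extreme."""
    nodes = []
    for k in range(len(criteria), -1, -1):          # rank: n, n-1, ..., 0
        nodes.extend(frozenset(c) for c in combinations(criteria, k))
    return nodes

lattice = boolean_lattice(["c1", "c2", "c3", "c4"])
print(len(lattice))        # 16 possible types for four binary criteria
print(sorted(lattice[0]))  # the canon: ['c1', 'c2', 'c3', 'c4']
print(lattice[-1])         # the non-canonical extreme (empty set)
```

With two criteria the same function yields the four types discussed above; calibrating a sample then amounts to sorting attested structures into these nodes.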

The canonical method is compatible with a range of mathematical techniques for representing data, primarily because the variables are logically independent and typically

 .. Boolean lattice identifying sixteen possible types based on four criteria
1: c1/c2/c3/c4
2: c1/c2/c3    3: c1/c2/c4    4: c1/c3/c4    5: c2/c3/c4
6: c1/c2    7: c1/c3    8: c1/c4    9: c2/c3    10: c2/c4    11: c3/c4
12: c1    13: c2    14: c3    15: c4
16: {}
Source: Brown and Chumakina (: ).

binary (canonical vs. non-canonical). This has not been fully explored or exploited in existing accounts of phenomena, but the types of representations used in multivariate typology (see, for instance, Bickel ) give an indication of the types of visualization techniques that are possible (provided the values encoded for each variable are of the correct type). The representation of data hinges largely on the way in which the variables (i.e. the criteria) are interpreted. When criterial variables are not binary, they usually represent an ordered set of ordinal categories rather than a genuine scale (e.g. bronze, silver, gold). Even when the scales include data that is interval-like (i.e. datapoints that can be interpreted as being at equal intervals on a scale such as , , , etc.) they are not necessarily straightforward (compare, for instance, measurements of length, in which every equal unit of measurement is equally distant from the next, with the values on the decibel scale, where two values which are equally distant from a midpoint are not comparable in terms of loudness). If criteria are treated as scalar, the representation of the sample space must necessarily be modified to account for the fact that (i) scales may be gradient and (ii) substantially more variation will need to be represented than is currently possible using a Boolean lattice.
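The decibel comparison can be made concrete: equal steps on a logarithmic scale correspond to equal ratios of the underlying quantity, not equal differences, which is why such values cannot be treated like interval data. A minimal illustration (the reference intensity is normalized to 1 for simplicity):

```python
# Convert decibel values to (relative) intensities: dB is logarithmic,
# so each +20 dB step multiplies intensity by 100 rather than adding
# a constant amount, unlike equal units on a length scale.
def intensity(db, ref=1.0):
    return ref * 10 ** (db / 10)

for db in (20, 40, 60):
    print(db, "dB ->", intensity(db))
# each step of +20 dB multiplies the intensity by a factor of 100
```

A length scale has no analogue of this: 40 cm minus 20 cm and 60 cm minus 40 cm are the same amount of length, whereas 40 dB and 60 dB each stand to their predecessor in a fixed ratio, not a fixed difference.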


. A canonical approach to morphology

The application of the Canonical Method to understanding the properties of morphological systems and the notion of what constitutes a possible word has been particularly instructive because understanding the nature of morphology frequently involves investigating deviations from an archetypal paradigm with systematic regularity. While the majority of work in Canonical Typology has concerned inflection (§..), some of the canonical properties of derivation have also been identified (§..), with transpositional morphology still awaiting an in-depth investigation from this perspective (although see Nikolaeva and Spencer ).

.. Inflectional morphology

Canonical inflection and non-canonical inflectional/lexical behaviour (Corbett , , ) have been investigated through studies on paradigm structure and organization (Stump b, , , a, c; Stump and Finkel , ), including treatments of morphotactics (Crysmann and Bonami ), reduplication (Kwon ), cumulation (Igartua ), syncretism (Baerman, Brown, and Corbett : –; Camilleri and Gauci ), deponency (Spencer a), suppletion (Corbett a), periphrasis (Chumakina ; Brown et al. ), stem alternations (Baerman and Corbett ), overabundance (Cappellaro ; Mörth and Dressler ; Thornton , , ), and inflection classes (Palancar ; Paciaroni ; Seifart ). The use of Canonical Typology as the basis for the computation of inflectional morphology is presented in Sagot and Walther (). From a theoretical perspective, the most important influence of Canonical Typology is on the development of Paradigm Function Morphology. In this approach, non-canonical inflectional behaviour represents deviations from canonical paradigm linkages (Stump and Finkel , ; Stump a).

.. Derivational morphology

Unlike inflectional morphology, which is defined by the relationships between the cells of a paradigm constructed through the intersection of feature values and the realization of a stem, derivational morphology defines relationships between forms that cannot be captured by recourse only to features. Instead, derivational morphology is an indicator of a (synchronic and/or diachronic) relationship between two related forms ultimately differentiated by their (lexico-)semantic (and possibly (morpho-)syntactic) behaviour. Derivation has received considerably less attention in the Canonical Typology literature than inflectional morphology, presumably due to the role of lexical semantics in accounting for regularity. The first full treatment of canonical derivation is found in Corbett (), although Nikolaeva and Spencer () have a brief, but similar, take on canonical derivation.12 In both cases, derivation is characterized in terms of the relation between a

12 Nikolaeva and Spencer () characterize canonical derivation as involving a single morphological process that gives rise to a new form which is semantically more complex than the




morphological base and another form.13 Fortin () discusses derivation and semantics, while Hathout and Namer () discuss derivation in French. Several attempts have been made to capture the similarity between inflection and derivation through derivational paradigms (see Štekauer  for an overview), but the analysis presented here should be sufficiently agnostic about this to be able to accommodate different views on the matter.

When using the Canonical Method on derivational morphology, describing relations in terms of word-forms, rather than lexemes, is preferable for two reasons. First, it avoids the imposition of unnecessary analytical decisions within the base-definition (it may not always be clear that any two word-forms are manifestations of distinct lexemes). Secondly, it allows a wide range of word-forms to be considered potential examples of derivation (and thus allows for potential full or partial overlap with other related domains). In the base-definition of the derivational domain provided here, I propose to capture what types of relations between forms should be considered as possible candidates for derivation.14

Derivational Domain: For a word-form to be a derived one, it must be semantically and/or syntactically differentiated from the morphological base of another form with (i) shared semantic content and (ii) shared phonological realization. The derivational domain contains any form that exhibits this type of relationship.

Much like the inflectional domain proposed in §.., this base-definition aims to capture the essential characteristics of derivation, but does not seek to place extensive limits on it. It recognizes that when a word is derived, it may be semantically distinct from its (morphological) base (cat (n.) ~ cattery (n.)), syntactically distinct (punch (n.) ~ punch (v.)) or both (freak (n.) ~ freakish (adj.)). It assumes that there is some sort of phonological similarity between the derived form and its base, but also that there is shared semantic content. Therefore, while code and cod share some phonological similarities, they are not semantically similar enough to be likely candidates for derivation. Similarly, while lion and tiger refer to semantically related (but different) concepts, they are phonologically very different. The degree to which items must be similar on semantic and phonological grounds is left deliberately vague here. Restrictions on this can be imposed through the application of the canonical method. The conception of the derivational domain also includes compounding of forms (in which a derived form has a relationship to more

base, adding a clearly identifiable semantic predicate. In canonical derivation the category of the base is opaque to syntax and a new lexical entry is created. They contrast this with canonical transposition, which is yet to be fully analyzed using the Canonical Method.
13 Here ‘base’ should be understood to mean an entity in morphology that undergoes modification of some kind.
14 In a strictly synchronic view of derivation, the relations between words are construed during acquisition or created by the speaker herself; they are not acquired as a result of formal instruction. For most speakers, the oldest historical record of their language will be the speech of the oldest members of the speech community to which they belong. Within such a view, it is natural for old derivations (where the semantics and/or the phonology are very different between the derived form and the base) to be entirely opaque and thus not part of this domain.


than one base). While compounding is clearly a special way of creating new lexemes, it is not excluded from the derivational domain, since the base-definition merely identifies the indisputable characteristics of derivation.15

It will not escape the careful reader’s attention that this definition also includes most types of inflectional relationship too. That is because it is not a definition of derivation, to the exclusion of inflection, but a definition of characteristics shared by possible derivations: for instance, the relationship between cat and cats meets the stipulations set out in the derivational domain proposed here. But that does not mean that all instances of inflection are derivation-like. For instance, suppletive stems lie outside this domain since they do not meet the phonological similarity restriction imposed in the base-definition.

Corbett () discusses five criteria relevant to canonical derivation, three of which are discussed below to examine how they relate to the domains for derivation and inflection proposed here. Although reference is made to the descriptions of the criteria for canonical derivation proposed by Corbett (), they are not taken to be wholly suitable from a methodological standpoint since they introduce circularity by making reference to derivational markers.16 Here, we focus on the second, third, and fourth criteria proposed, slightly reworded so as to fit the format of presentation used here.17 Forms in a canonical derivational relationship have:

Criterion : Regular (transparent) semantics > Irregular (non-transparent) semantics
Criterion : Transparent form > Non-transparent form
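The two requirements of the base-definition (shared semantic content and shared phonological realization) lend themselves to a toy operationalization. In the Python sketch below, string overlap stands in for phonological similarity and hand-assigned tag sets stand in for lexical semantics; the threshold and all data are invented for illustration and carry no theoretical weight:

```python
from difflib import SequenceMatcher

def in_derivational_domain(item_a, item_b, phon_threshold=0.5):
    """Toy test of the base-definition: a candidate pair must share
    phonological material AND semantic content."""
    form_a, sem_a = item_a
    form_b, sem_b = item_b
    phon_shared = SequenceMatcher(None, form_a, form_b).ratio() >= phon_threshold
    sem_shared = bool(sem_a & sem_b)      # any overlap in semantic tags
    return phon_shared and sem_shared

cat = ("cat", {"feline"})
cattery = ("cattery", {"feline", "place"})
code = ("code", {"symbol-system"})
cod = ("cod", {"fish"})
lion = ("lion", {"feline", "big-cat"})
tiger = ("tiger", {"feline", "big-cat"})

print(in_derivational_domain(cat, cattery))  # True: shared form and meaning
print(in_derivational_domain(code, cod))     # False: similar form, unrelated meaning
print(in_derivational_domain(lion, tiger))   # False: related meaning, unrelated form
```

Exactly where the similarity thresholds should sit is the kind of question the base-definition deliberately leaves open; on the account above, restrictions of that sort are imposed later, by the canonical criteria.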

Corbett (: ) states that the meaning of a canonical derived word can be computed regularly from the meaning of the base and the additional meaning of the derivation (C). In this sense, compositional meanings are canonical because they are systematic and predictable. As evidence for this variation, he provides examples of deverbal agent nouns in Russian which have regular semantics, as in (a–c), and compares them with those that do not behave in such a predictable way, as in (d) (from Corbett : ). () a. b. c. d.

15

pisat’ ‘write’ čitat’ ‘read’ osnovat’ ‘found’ dvigat’ ‘move’

pisatel’ ‘writer’ čitatel’ ‘reader’ osnovatel’ ‘founder’ dvigatel’ ‘motor’ (not ‘mover’)

The relationship between the meaning of a compound and the meanings of the individual forms from which it is composed may be largely opaque. For this reason, the base-definition proposed here does not qualify the exact nature of the semantic relationships between component forms, and only requires a semantic link between the new form and one of the compounded elements. 16 For instance, in describing transparent structure as a canonical property of derived words, Corbett (: ) states that canonical derived word-forms have an identifiable ‘derivational marker’ (or markers) distinct from the base of the word. The successful application of this criterion requires an existing ability to be able to distinguish derivation from inflection. 17 While they capture the essence of the distinction between inflection and derivation, some of the criteria proposed in Corbett () would be better suited to decomposition into finer-grained variables (particularly with respect to semantics), and this analysis is merely a first attempt at dealing with this domain of morphology within Canonical Typology, not the final word on it.




Adapting this criterion to fit the Derivational Domain proposed above, it is possible to claim that the relation between the base dvigat’ ‘move’ and the form dvigatel’ ‘motor’ is less canonical than those in (a–c) because its primary meaning is not what would be predicted from the composition of the meaning of the base and the typical agent meaning associated with the derivational affix ‑tel’. Pairs of forms like cat (feline) and cats (more than one feline) would also be canonical in this sense, demonstrating that some of the canonical properties of derivation may be satisfied by examples of inflection. Motivation for the claim that regularity in semantics is canonical for derivation can be found in Axiom , discussed in §.., since compositional meanings are mathematically predictable if the composition is the sum of the individual parts, whereas irregular ones are not (and thus are more likely to be stored than computed).

The third criterion for derivation posited by Corbett (: ) is that the internal structure of the word-form is transparent, such that a base is modified by the addition of a marker or markers (C). The concatenative nature of the relation between the pairs of forms in () is judged to be canonical (compare this with cat [kæt] and cats [kæts], for which it is also true). A less transparent formal relationship can be seen in the structure of heal [hiːl] and health [helθ]. Motivation for the claim that transparency in form is canonical for derivation can be found in Axiom . If canonical relations are biunique, then the most canonical relation will be one in which there is an identifiable exponent for each derivation.

The fourth criterion succinctly distinguishes relationships between word-forms that are inter-lexemic and intra-lexemic (Corbett : ):18

Criterion : Distinct lexical index > Shared lexical index

This criterion determines that canonical derived forms have distinct entries in the lexicon. For many linguists, derivation is by definition a process of creating new lexemes (e.g. Corbett ; Spencer a), but its inclusion as a criterion, rather than as part of the base-definition, allows for forms in a canonical inflectional relationship to be treated as non-canonical along a parameter of variation within morphological systems, and allows for some murky ground in which inflectional forms with a shared lexical index have derivation-like properties.
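The two criteria just discussed, regular semantics and transparent form, can be illustrated with a toy script over the Russian agent nouns cited above. The stemming rule (strip the infinitive ending and add the agentive marker seen in pisatel’) and the naive English gloss composition are simplifications invented for this sketch, not claims about Russian morphology:

```python
SUFFIX = "tel'"

def agent_gloss(verb_gloss):
    # naive English agentive nominalization: 'write' -> 'writer'
    return verb_gloss + "r" if verb_gloss.endswith("e") else verb_gloss + "er"

def assess(base, base_gloss, derived, derived_gloss):
    form_transparent = derived == base[:-2] + SUFFIX   # base stem + marker
    semantics_regular = derived_gloss == agent_gloss(base_gloss)
    return form_transparent, semantics_regular

pairs = [
    ("pisat'", "write", "pisatel'", "writer"),
    ("čitat'", "read", "čitatel'", "reader"),
    ("osnovat'", "found", "osnovatel'", "founder"),
    ("dvigat'", "move", "dvigatel'", "motor"),  # attested meaning, not 'mover'
]
for base, gloss, derived, derived_gloss in pairs:
    print(derived, assess(base, gloss, derived, derived_gloss))
# dvigatel' comes out formally transparent (True) but semantically
# irregular (False): canonical along one criterion, non-canonical along another
```

The point of the sketch is that the criteria are logically independent: a single form can score canonically on transparency of form while deviating on regularity of semantics.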

. Future of Canonical Typology

Canonical Typology is a relatively young framework that has quickly established itself as an important approach to dealing with cross-linguistic data in a way that does not impose restrictions on variation, but rather is built from it. There are several key areas of development and expansion that seem likely as the current incarnation of the method becomes more robust. Foreseeable developments include the implementation of statistical

18 Corbett (: ) calls this a ‘separate lexical index’. I have changed the wording to avoid any potential ambiguity associated with this characterization.


methods and more developed visualization techniques for showing relationships between criteria, forms, structures, and languages. These developments are likely to be allied with attempts to bring together existing studies in Canonical Typology to consolidate our view of relationships between different aspects of morphology. The most substantial body of work on Canonical Typology concerns a morphosyntactic phenomenon, namely agreement (Corbett ), demonstrating that the framework is not only suited to morphology, but that work at the interface with other components of grammar is fruitful. Canonical Typology has already been employed to examine other aspects of the morphology–syntax interface, including determining word-hood (van Gijn and Zúñiga ) and the nature of canonical clitics (Spencer and Luís b). Recent work by Nichols () has examined the role of Canonical Typology in accounting for registration-type head-marking across languages. It is likely that an excursion into canonical syntax and semantics will yield further benefits.

Another core area for future development concerns the morphology–phonology interface. Some existing typological work conducted in (the conceptually similar) property-driven typology has examined suprasegmental properties of language, namely pitch accent analysis (Hyman b) and related work on prosodic typology (Hyman ). The Canonical Method has recently been employed by Kwon and Round () to examine whether phonaesthemes are part of morphology. A large part of the future of Canonical Typology, and of frameworks used in morphological theory more generally, will be fully understanding the phonology–morphology interface and the relative autonomy of these components of grammar.

. Conclusion

Research using the Canonical Method deconstructs descriptive categories and theoretical concepts into fine-grained parameters of typological variation to explore whether logically possible combinations of values are attested within a sample space. It has proven particularly adept at accounting for a wide range of morphological variation, primarily in the inflectional domain and at the morpho-syntax interface. Application of the canonical typological method involves three key stages, each linked to the identification of one of three interdependent concepts: the base-definition, the criteria identifying parameters of variation, and the canon itself. The base identifies the broad empirical domain of the phenomenon under consideration; the criteria associated with that base define the independent dimensions of canonicity. Each of the values of a variable identified as a parameter of variation is placed on an ordered scale, called a criterion. For each criterion associated with a domain, one of its values is considered to be more canonical than the other(s) using logical axioms. The various criteria associated with variation in a phenomenological domain are then used to identify a (logically motivated) canon as a theoretical reference point from which to calibrate the sample population (Corbett b, , , ; Brown and Chumakina ). The canonical values of each criterion converge on a canonical type, known as the canon or canonical archetype. Logically possible combinations of values for each variable intersect to map out the typological space of possibilities in which observed phenomena are to be calibrated. The canon associated with a particular




base-definition is the unique point in typological space on which the canonical values for each of the criteria converge. This acts as a reference point from which non-canonical behaviour can be described.

. Further reading

An online bibliography of research using Canonical Typology is available at: www.smg.surrey.ac.uk/approaches/canonical-typology/bibliography. Key texts on the application of canonical typology include Corbett (, a, ) and the papers in Brown, Chumakina, and Corbett (), especially Bond () and Brown and Chumakina ().

A I am particularly grateful to Jenny Audring, Matthew Baerman, Chiara Cappellaro, Grev Corbett, Tim Feist, Antonio Fortin, Francesca Masini, and Anna Thornton for extensive, thought-provoking discussion and comments on the data and issues discussed in this paper.


  ........................................................................................................................

MORPHOLOGICAL THEORY AND OTHER FIELDS ........................................................................................................................


  ......................................................................................................................

    ......................................................................................................................

    

. Introduction

Morphology as the study of word structure is intimately related to both language description and linguistic theory. Both these enterprises should be informed by cross-linguistic variation in the domain of morphology, albeit for different reasons. The task of a fieldworker or grammar-writer is to describe and interpret the morphological structure of an individual language as adequately as possible, including intricate details and idiosyncrasies. The task of a theoretical linguist, on the other hand, is to construct an empirically and explanatorily adequate model of language in general, or morphology in particular. Both descriptivists and theoreticians thus have to be aware of the range of morphological phenomena occurring in languages, and of the attested cross-linguistic diversity. In the ideal situation, they should also have access to information on the frequencies of certain cross-linguistic patterns, and on the genealogical, areal, and structural distributions of these patterns.

The aim of morphological typology, as part of the broader linguistic typological enterprise, is to map the cross-linguistic variation and unity found in the domain of word structure, and to link this to other independently established typological generalizations. The typological study of morphology faces several challenges, the most important of which is the very nature of the empirical domain. As Baerman and Corbett (: ) put it, “[o]f all the aspects of language, morphology is the most language-specific and hence least generalizable. Indeed, even the very presence of a meaningful morphological component is language-specific.” Given this, it is hard to make statements about morphology that are cross-linguistically valid.
Even comparing morphological phenomena in different languages requires the typologist to carefully devise and cautiously apply analytical notions and methods. Comparative notions cannot be directly “borrowed” from descriptive studies of individual languages. Such commonly accepted notions as “root”, “affix”, “lexeme”, “paradigm”, and the very notion of “word” itself, have proven to be notoriously difficult

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi




to define in a cross-linguistically valid way (see §.). The current state of research has to acknowledge the fundamental problem that none of these notions can be applied cross-linguistically to yield consistent results throughout.

Typology has often been associated with the quest for language universals. However, from the outset it has also been clear that the study of rare and unique patterns is as important as the study of cross-linguistically recurrent ones (see e.g. Plank no date; Wohlgemuth and Cysouw ). This is especially true for morphology, where many, perhaps most, of the attested patterns are rare, or obviously non-universal. However, cross-linguistically unique patterns can be and usually are revealing of the range of possibilities open for human language structures, and reflect—albeit in a paradoxical way—potentially universal patterns common to all languages. To give a striking example, the Australian language Kayardild (see Evans ; Round ) overtly marks clausal morphosyntactic features, such as case role, tense, and mood, on each word of a relevant constituent, cf. example (), where the Instrumental case appears not only on the head of the noun phrase but on its Genitive modifier, too, while the Ablative and the Oblique suffixes mark past tense and epistemic modality, respectively.

()  Kayardild (Tangkic, Northern Australia; Evans : )1
    a. dangka-karra-nguni mijil-nguni
       man-GEN-INS1 net-INS1
       ‘with the man’s net’
    b. maku yalawu-jarra yakuri-na dangka-karra-nguni-na mijil-nguni-na.
       woman catch-PST2 fish-ABL2 man-GEN-INS1-ABL2 net-INS1-ABL2
       ‘The woman caught some fish with the man’s net.’
    c. maku-ntha yalawu-jarra-ntha yakuri-naa-ntha
       woman-OBL3 catch-PST2-OBL3 fish-ABL2-OBL3
       dangka-karra-nguni-naa-ntha mijil-nguni-naa-nth.
       man-GEN-INS1-ABL2-OBL3 net-INS1-ABL2-OBL3
       ‘The woman must have caught fish with the man’s net.’

Except for the closest relatives of Kayardild, this phenomenon is not attested in any other language. This unique feature of Kayardild shows a logical and beautifully iconic mapping of the hierarchical structure of syntax onto the morphological structure of words, which is largely obscured in other, less “exotic”, languages. Unique patterns like this one might well turn out to be no less instructive for linguistic theory than cross-linguistically recurrent ones. Moreover, typological rara are crucial for morphological description, since morphology is precisely the domain where irregular, idiosyncratic, and unfamiliar phenomena are most expected to occur. All of these phenomena require accurate, detailed, and unbiased documentation.

The aim of the present chapter is to present a concise overview of the current state of typologically oriented research in morphology, and to suggest ways in which morphological typology and theory can enrich each other. While we address both empirical and methodological issues, we refrain from discussing the technical details of any particular

1 Glossing is slightly simplified; coindexation indicates “concord” relations between inflections.





theoretical framework. None of the current morphological theories is likely to be able to account adequately for the plethora of morphological phenomena attested in the world’s languages, but most of them have contributed significantly to our understanding of many of these phenomena.

Morphology is “the grammar of words” (cf. Booij d). In what follows, we first discuss the notion of “word” and the issues surrounding it in §.. The primary goal of morphological typology and theory is to analyze the ways in which languages establish relations between forms and meanings when they build words, and to discover the principles underlying the cross-linguistic variation in this domain. This relation between meaning and form in morphology is the topic of §.. Another important domain of morphological inquiry is constituted by the syntagmatic and paradigmatic relations between words and their components. In §§. and ., we briefly review empirical and theoretical issues relating to the syntagmatic and paradigmatic dimensions of cross-linguistic diversity in morphology.

. The notion of “word”

As the notion “word” is central to morphology, its definition and identification are crucial both for morphological analysis and morphological typology. There are two relevant understandings of “word”. On the syntagmatic axis, we have to distinguish wordforms from phrases and parts of words (i.e. morphemes), while on the paradigmatic axis we need to identify lexemes, that is, sets of wordforms sharing lexical meaning and differing in the values of inflectional features only. Both understandings of “word” create their own problems, which will be discussed in turn in §§.. and ...

.. Is “wordform” a typologically valid concept?

Bloomfield (: ) defined “word” as the “minimal free form”. However, it has proven to be notoriously difficult to identify what precisely a “minimal free form” is, especially in languages that have no written tradition and are not used in formal education. Moreover, some languages have numerous lexical items denoting various events or activities of verbal communication, but lack a word for ‘word’, for example Kambera in ().

()  Kambera (Austronesian; Sumba, eastern Indonesia; Onvlee ; Klamer )
    hilu    ‘a verbal exchange; a language’
    lí      ‘a sound, a story, an event, a tradition; to speak’
    luluk   ‘a proverb, a speech’
    langu   ‘a message, something that is being talked about, a situation’
    pulung  ‘an advice, an order, a judgment, a gossip; to gossip’
    kareuk  ‘to talk’
    reu     ‘sound of talking’





Wordforms in different languages can only be identified using structural criteria, both phonological and morphosyntactic (see e.g. Dixon and Aikhenvald a; Julien ). Most of these criteria are language-specific, and often they yield conflicting results even in the same language (Haspelmath ; van Gijn and Zúñiga ). It is necessary to keep in mind that phonological criteria (such as the assignment of primary stress, the tonal contour, or the domain of phonological phenomena like vowel harmony or sandhi) identify phonological words which do not always align with grammatical or morphosyntactic words (cf. Bickel and Nichols : –; Bickel and Zúñiga ).

The morphosyntactic word is the unit that pre-literate speakers most often associate with the term “word”. It is the minimal response that speakers would give to a question like “what is the name for that [pointing at object] in your language?” It is usually also the smallest linguistic unit that can be subject to such syntactic operations as coordination, movement (e.g. in questions), or ellipsis. This is accounted for by the Principle of Lexical Integrity proposed in certain formal theories of grammar (e.g. Di Sciullo and Williams ; Spencer a; see also Montermini, Chapter  this volume); according to this, syntax cannot manipulate the internal structure of words.2 The morphosyntactic word is also the unit that is the outcome of morphological word-formation processes, and the basic unit used by speakers to build more complex expressions (i.e. syntactic phrases). It is also the unit on which speakers typically apply self-repair when they are telling a story or having a conversation. For instance, when mispronouncing a word, a speaker’s self-repair will often involve repeating the entire morphosyntactic word, rather than a part of it (cf. e.g. Wouk ; Podlesskaya : –; cf. Fox et al.  for a typological study).
Phonological words can be preceded and/or followed by conscious and deliberate pauses and intonation breaks, while speakers seldom make such breaks in the middle of them. This does not mean that a natural text will not contain word-internal breaks or pauses; indeed, all natural texts contain hesitations, self-repairs, and false starts occurring in the middle of words. However, speakers are normally able to recognize these as “errors” when they listen to the recording, and they consider the utterance without an internal break or pause as the “correct” form.

Despite the theoretical and practical importance of the notion of morphosyntactic word, different diagnostics do not always converge. Well-known cases are the German, Dutch, and Hungarian separable verbal prefixes (see e.g. Ackerman and Webelhuth : ch. ; Müller ; Zeller  on German; Booij , b on Dutch; Ackerman , Ackerman and Webelhuth  on Hungarian), illustrated in (). On the one hand, preverbs such as German aus ‘out’, an ‘at’, or ein ‘into’ (a–d) form a tight semantic and syntactic unit with the verb following them, which is reflected in the orthography (a): the two constitute a compound, as evidenced by the stress pattern of the preverb+verb complex, the ability of the preverb+verb complex to serve as an input to word-formational operations (German áusgehen ‘go out’ ~ Áusgang ‘exit’), and the fact that many such combinations have idiomatic meanings and therefore must be listed in the lexicon as units. On the other

2 However, see Baker (, ) for a model of syntax–morphology interaction apparently discounting lexical integrity, together with much work in the framework of Distributed Morphology (Siddiqi, Chapter  this volume). From a different perspective, Haspelmath () also argues against lexical integrity as a universal principle of grammar.





hand, there is evidence that the preverb and the verb do not form a single phonological or morphosyntactic word even when adjacent; moreover, the preverb can be detached from the verb and, in German and Dutch, be separated from it by long and syntactically complex strings of words. Such free-standing preverbs behave like autonomous words in that they are able to bear independent stress (b), be focused (c), and be coordinated (d).

()  German (Indo-European)
    a. Er sagt, dass er uns ein Bier áusgibt.
       ‘He is saying that he is going to buy us a beer.’ (Zeller : )
    b. Er gibt uns ein Bier áus.
       ‘He is buying us a beer.’ (Zeller : )
    c. Ich lache dich nicht áus, sondern án.
       ‘I’m not laughing at you, I am smiling at you.’ (Zeller : )
    d. Die Türen öffnen sich, Leute steigen áus und éin.
       ‘The doors open, the people are getting off and on.’3

Another issue relating to the notion of word concerns the level above the word. How can we distinguish morphologically complex words, e.g. compounds, from syntactic phrases (cf. Lieber and Štekauer a)? Phrases and compounds can look quite similar because the latter often derive historically from the former. The wordhood of a compound in contrast to a multi-word phrase is often determined semantically: the meaning of a compound is typically not the sum of its parts, while the meaning of a phrase is typically regular and transparent (compositional). In addition, components of compounds usually show referential opacity, that is, they cannot on their own refer to discourse participants (see however Koptjevskaja-Tamm  on an interesting case of compounds formed from personal names). In relation to this semantic compositionality, we see that parts of phrases can also be modified separately (a very black board), while this is not possible for the parts of a compound (*a very blackboard). However, the semantic distinction between phrases and compounds is never categorical: languages with semantically irregular and non-transparent compounds also often have semantically regular and transparent ones, just as probably every language has phrases that are idiomatic (see e.g. Di Sciullo and Williams  on the distinction between words and “listemes”).

Again, phonological and morphosyntactic criteria have to be invoked in order to distinguish phrases from compounds. Thus, in English noun phrases main stress is claimed to be on the head (a black bóard), whereas nominal compounds have stress on the modifying element instead (a bláckboard) (see, however, Giegerich  against such a view); in German, adjectival modifiers in phrases must be inflected for gender, number, and case (e.g. ein roter Kohl ‘a red cabbage’), while this inflection does not appear in compounds (e.g. Rotkohl ‘red cabbage’).
In languages with noun incorporation, the incorporated nominal root may occur between the inflectional affixes and the root of the verb, and be subject to word-internal phonological processes, as in Chukchee, ().

3 http://www.hna.de/kassel/hilfe-leichter-sprache-.html, accessed  February .


 ()

     Chukchee (Chukotko-Kamchatkan, Russia; Mithun a: ) a. gam-nan tə-ntəwatə-rkən utkucʔ-ən. - -set- trap- ‘I am setting a trap.’ b. gəm t-otkocʔə-ntəwatə-rkən . -trap-set- ‘I am trap-setting.’

However, morphosyntactic criteria like these cannot be usefully applied to languages that lack phrase-internal inflectional concord, or to languages that have only suffixes and no prefixes. Thus, in Persian, idiomatic noun+verb combinations (a) are on the surface indistinguishable from verb phrases with non-specific bare nouns (b).

()  Persian (Indo-European > Iranian, Iran; Megerdoomian : )
    a. kotæk xordæn   lit. beating eat    ‘to be beaten’
       færib xordæn   lit. deception eat  ‘to be deceived’
       šekæst xordæn  lit. defeat eat     ‘to be defeated’
    b. qæza xordæn    lit. food eat       ‘to eat’
       xyar xordæn    lit. cucumber eat   ‘to eat cucumber’
       šam xordæn     lit. dinner eat     ‘to eat dinner’

Even in highly inflectional languages like Russian there is a continuum, illustrated in (), where phrases formed in syntax occupy one end (a), unequivocal compounds with linking elements occupy the other end (e), and cases with doubtful status occur in between (b–d) (cf. Benigni and Masini ; Masini ; see also Booij a: ch.  on “phrasal names”).

()  Russian (personal knowledge of P.A.)4
    a. želézn-aja mísk-a
       iron-.. bowl-.
       ‘iron bowl’          syntactic phrase (adjective+noun)
    b. želézn-aja doróg-a
       iron-.. road-.
       ‘railway’            phrasal name (adjective+noun)
    c. krésl-o=kačálk-a
       armchair-.=rocker-.
       ‘rocking chair’      doubly inflected noun+noun compound
    d. generàl=gubernátor
       general=governor[.]
       ‘governor-general’   compound without a linking element
    e. svin-o-férm-a
       pig--farm-.
       ‘pig farm’           compound with a linking element

4 The “=” sign stands in place of the orthographic hyphen, while the hyphen indicates morpheme boundaries; the acute and the grave signs mark primary and secondary stress, respectively.





Distinguishing between compounds and phrases is especially difficult in languages where syntactic operations apparently create morphologically complex words. Thus, in Adyghe, an adjectival modifier obligatorily forms a compound with its head noun, as illustrated in (). The resulting phrase inflects as a single unit, and forms a single domain for stress and phonological alternations (Lander , ). Some such compounds are idiomatic, but most are formed by general syntactic mechanisms in the course of speech.

()  Adyghe (West-Caucasian > Circassian; Lander : )
    Ø-jə-zə-šolk-ǯʼene-daxe-r
    .--one-silk-dress-beautiful-
    ‘one beautiful silk dress of hers’

Another problematic issue in the definition of wordforms is clitics, which show properties of both words and affixes (see Bickel and Nichols : –; and especially Spencer and Luís a, b for a comprehensive discussion and references). Phonologically, clitics are not free forms, as they must attach to a host with which they form a single prosodic domain. Morphologically, they often behave like affixes in displaying fixed order and various co-occurrence restrictions and idiosyncrasies. Syntactically, however, clitics and clitic clusters show more freedom than genuine affixes, which normally attach to hosts of a particular category. Clitics may attach to the edges of a syntactic phrase, or their position may be structurally defined as following the first stressed word or first phrase of a sentence (so-called ‘second-position’ or ‘Wackernagel’ clitics, cf. Anderson , ), as in Cupeño ().

()  Cupeño (Uto-Aztecan > Northern, California; Hill : )
    hani=qwe=n=pe ilily-i mamayew.
    === coyote- help.
    ‘I wish I could help Coyote.’

Despite being notoriously difficult to define and identify typologically (Haspelmath ), clitics, and in particular second-position clitics, are an important and widely attested phenomenon. The term “clitic” and its derivatives like “clitic doubling” or “clitic left dislocation” should however be used with caution and be clearly defined in contrast to affixes and free-standing wordforms.

In sum, the concept “word” is not simple and not clear-cut: many criteria for wordhood are applied language-specifically; some yield conflicting results in a single language; and words in a language often take different positions on the continuum going from ‘word’ to ‘phrase’. That “word” is not a category with robust boundaries is a problem for theories built around the idea that syntax and morphology are clearly distinct modules. Some eschew the problem by deeming the very notion “word” invalid, and the distinction between syntax and morphology irrelevant for linguistic theory (e.g. Haspelmath ). Instead, we believe that it is worthwhile to investigate the typological space generated by various wordhood properties in order to arrive at empirically grounded generalizations about combinations of such properties and their cross-linguistic patterns (cf. Bickel and Zúñiga ).
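The idea of a typological space of wordhood properties can be given a concrete, if deliberately toy, shape. The sketch below is an illustration only, not part of the chapter: the five criteria and the yes/no judgments are invented for the example (loosely echoing the German preverb and affix facts above), and serious typological work would use many more criteria and graded rather than binary values.

```python
# Toy "wordhood space": each element is a point in a space of binary
# wordhood criteria, rather than being classified as simply word or affix.
# Criteria and judgments are invented for illustration.

CRITERIA = ["bears_own_stress", "separable_from_host", "coordinatable",
            "fixed_position", "selects_host_category"]

items = {
    "German preverb aus": {"bears_own_stress": True, "separable_from_host": True,
                           "coordinatable": True, "fixed_position": False,
                           "selects_host_category": False},
    "English plural -s":  {"bears_own_stress": False, "separable_from_host": False,
                           "coordinatable": False, "fixed_position": True,
                           "selects_host_category": True},
}

def profile(name):
    """Position of an item in the wordhood space, as a tuple of values."""
    return tuple(items[name][c] for c in CRITERIA)

# Word-like and affix-like elements end up in opposite corners of the space:
print(profile("German preverb aus"))  # (True, True, True, False, False)
print(profile("English plural -s"))   # (False, False, False, True, True)
```

On this view, "how word-like is this element?" is not a yes/no question but a comparison of positions in the space, which is exactly what makes cross-linguistic generalizations over combinations of properties possible.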





.. Inflection vs. derivation and the notion of “lexeme”

Orthogonal to the problem of the definition of the wordform is the issue of the delimitation of lexemes and, consequently, of inflectional paradigms. The notion “lexeme” is roughly equivalent to a lexical entry in a dictionary. A lexeme is, by definition, a set of wordforms distinguished solely by inflectional features and their exponents. Therefore, the delimitation of lexemes crucially hinges on the distinction between inflectional and derivational morphology, the latter creating new lexemes. Though apparently clear-cut in simple cases like (to) walk ~ (she) walks ~ walked (inflection) vs. walk ~ walker (derivation), the distinction between inflection and derivation has proven notoriously difficult to specify in an adequate and unproblematic way (Bybee : ch. ; Dressler ; Plank ; Laca ; Spencer a). The common intuition that derivation feeds the lexicon, while inflection is relevant to syntax (cf. the “Split Morphology hypothesis”, Anderson ; Perlmutter ; Scalise ; Bickel and Nichols : –), is demonstrably wrong. Derivation may have syntactic repercussions (e.g. in causativization or in nominalization), and some inflection is not directly relevant to syntax (cf. the distinction between “contextual” and “inherent” inflection introduced by Booij , , or between “early” vs. “late system morphemes” in Myers-Scotton ; these notions are not unproblematic themselves, see Spencer a: –).

In most recent discussions of inflection and derivation—in both descriptions of individual languages and typological studies—they are regarded as two poles on a continuum structured by a set of features (Dressler ; Plank ; Nau ; Haspelmath and Sims : ch. ; Corbett ; Spencer a). In Table . we list some of the familiar features (cf. Haspelmath : –; Booij : –; Kroeger : –; Brown and Hippisley : ).

These features are useful as heuristics to place particular morphological processes on the continuum between prototypical inflection and prototypical derivation (with different uses of the same morpheme often occupying different positions on the scale, see e.g. Say  on Russian reflexive verbs). However, morphological typology and morphological theory should ask the empirical question whether these two traditionally recognized clusters of properties are the only ones attested in languages. The answer is in the negative (see Spencer a for a recent comprehensive and convincing discussion). Thus, Bauer (b) proposes a six-way classification of morphological processes, setting valency-changing, class-changing, and evaluative formations aside from other kinds of derivational morphology as being regular and in some sense paradigmatic, and in opposition to inflectional morphology, which does not create new lexemes. This latter criterion of new lexeme creation, in our view, is problematic not only because it obviously involves circularity, but also on purely empirical grounds. In languages with highly productive and compositional valency- or class-changing operations, it is hardly feasible to treat all such cases as distinct lexemes (cf. Spencer a: –). For example, in Adyghe there are about a dozen applicative prefixes which add an object to the valency frame of the verb (Smeets ; Lander and Letuchiy ), cf. (a) with a benefactive applicative and (b) with a comitative one. Not only do these applicatives occur farther from the root than certain markers of contextual inflection such as prefixes cross-referencing the agent (b), but their occurrence is sometimes obligatory and often fully semantically transparent, so postulating separate lexemes is not a viable descriptive option.
These features are useful as heuristics to place particular morphological processes on the continuum between prototypical inflection and prototypical derivation (with different uses of the same morpheme often occupying different positions on the scale, see e.g. Say  on Russian reflexive verbs). However, morphological typology and morphological theory should ask the empirical question whether these two traditionally recognized clusters of properties are the only ones attested in languages. The answer is in the negative (see Spencer a for a recent comprehensive and convincing discussion). Thus, Bauer (b) proposes a six-way classification of morphological processes, setting valency-changing, class-changing, and evaluative formations aside from other kinds of derivational morphology as being regular and in some sense paradigmatic, and in opposition to inflectional morphology, which does not create new lexemes. This latter criterion of new lexeme creation, in our view, is problematic not only because it obviously involves circularity, but also on purely empirical grounds. In languages with highly productive and compositional valency- or class-changing operations, it is hardly feasible to treat all such cases as distinct lexemes (cf. Spencer a: –). For example, in Adyghe there are about a dozen applicative prefixes which add an object to the valency frame of the verb (Smeets ; Lander and Letuchiy ), cf. (a) with a benefactive applicative and (b) with a comitative one. Not only do these applicatives occur farther from the root than certain markers of contextual inflection such as prefixes cross-referencing the agent (b), but their occurrence is sometimes obligatory and often fully semantically transparent, so postulating separate lexemes is not a viable descriptive option.





Table .. Features of prototypical inflection and derivation Parameter

Inflection

Derivation

Function

Does not change syntactic category of a word

May change syntactic category of a word

Meaning

Often has purely grammatical meaning

Tends to have lexical semantic content, i.e. meanings similar to the meanings of independent words

Regularity

Is often semantically regular

May have unpredictable semantic content

Syntactic determinism

Is often syntactically determined

Does not require a specific syntactic environment

Obligatoriness

Function is obligatory

Function is not obligatory

Productivity

Is highly productive

Often applies only to certain words, or classes of words

Paradigmaticity

Is often organized in paradigms

Is often not organized in paradigms

Fusion

Can be marked by portmanteau morphemes

Is rarely marked by portmanteau morphemes

Recursivity

Is marked only once in the same word

May apply twice in the same word

Position

Occurs in a peripheral position near the edges of a word

Occurs in a central position close to the root

()

Adyghe (examples from narratives, Yu. Lander, p.c.) a. wešʼx q-a-f-je-šʼxə-r-ep. rain -.---rain-- ‘it does not rain for them’ b. zə-qə-b-d-jə-ʔetə-šʼt .--.--.-raise- ‘it will go up together with you’

Another typologically important notion has been proposed by de Reuse (), who singles out “Productive Non-inflectional Concatenation” (PNC) as a special kind of morphology distinct from inflection and derivation and sharing many features with syntax; see Table . and example (). PNC is especially characteristic of polysynthetic languages such as those of the Eskimo-Aleut or Abkhaz-Adyghe families, but is also attested, though rarely, in familiar European languages (e.g. the English productive and potentially recursive prefix anti-, de Reuse : ).

()  Central Siberian Yupik Eskimo (Eskimo-Aleut, Alaska and Chukotka; de Reuse : )
    negh-yaghtugh-yug-uma-yagh-pet-aa
    eat-go.to-want.to----.>
    ‘It turns out s/he wanted to go eat it, but . . . ’





Table .. Productive noninflectional concatenation

Productivity Recursivity Necessarily concatenative Variable order possible Interaction with syntax Category change

Inflection

(Nonproductive) derivation

PNC

Syntax

yes no no no yes no

no no no no no yes

yes yes yes yes yes yes

yes yes yes yes yes yes

Source: de Reuse (: ).

In conclusion, the traditional notions of inflection and derivation are associated with a large number of empirical and conceptual problems, and both morphological theory and typology should address these problems in order to arrive at a cross-linguistically informed and unbiased set of concepts and distinctions, which will most probably yield a multidimensional space rather than a binary opposition (cf. again Spencer a: ch. ).
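The continuum view lends itself to a simple quantitative restatement: treat each feature of the inflection–derivation table as an axis and score a process by how many inflection-typical values it shows. The sketch below is purely illustrative and is not from the chapter: the feature names are simplified paraphrases of the table, the sample values are invented, and binary features are assumed even though several of the parameters are explicitly gradient ("often", "tends to").

```python
# Illustrative sketch: placing a morphological process on the
# inflection-derivation continuum as the fraction of features on which it
# patterns with prototypical inflection. Features paraphrase the table above;
# the judgments for the two sample processes are invented for the example.

FEATURES = ["category_preserving", "grammatical_meaning", "regular",
            "syntactically_determined", "obligatory", "productive",
            "paradigmatic", "peripheral_position"]

def inflection_score(process):
    """Fraction of features on which the process patterns with inflection."""
    return sum(process[f] for f in FEATURES) / len(FEATURES)

english_plural = dict.fromkeys(FEATURES, True)                 # prototypical inflection
agent_er = dict.fromkeys(FEATURES, False) | {"regular": True}  # closer to derivation

print(inflection_score(english_plural))  # 1.0
print(inflection_score(agent_er))        # 0.125
```

A single score of course collapses the multidimensional space the text argues for; a more faithful model would keep the whole feature vector, so that clusters other than the two traditional poles (such as PNC) remain visible.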

. The relation between meaning and form in morphology

Morphology is the relation between meaning and form in the structure of words, cf. the title of Bybee (). The primary goals of morphological typology and theory are thus to determine the ways languages connect meaning and form, and to discover the principles underlying the cross-linguistic variation found in this domain. There are two important dimensions of morphological variation in relating meaning to form (apart from the variation in the morphologically encoded meanings themselves), cf. Anderson (a: ). The first dimension is how morphological meanings are expressed and how such expressions are organized with respect to each other (morphological exponence and morphotactics). The second is how expressions with the same meaning may vary in context (allomorphy).

Both of these dimensions have figured prominently in classic morphological typology since at least Friedrich von Schlegel and Wilhelm von Humboldt (cf. Rousseau ). They are reflected in the traditional typological classification of languages into “isolating”, “agglutinating”, and “flexive” types, using criteria such as cumulative vs. separatist exponence of morphological features, fusion between stems and affixes, and the presence of phonologically opaque alternations of stems and affixes (for an overview see Plungian ). As with any “holistic” approach to typology, this classic typology has proven to be inadequate because languages rarely behave uniformly with respect to the different criteria (Plank ; Haspelmath ). Instead of a few discrete classes we must again assume a multidimensional typological space that is yet to be fully investigated (for earlier proposals in this vein see e.g. Sapir  and Alpatov ; the latter is discussed in English by Testelets : –).





Table .. Deviations from biuniqueness according to Carstairs ()

syntagmatic axis paradigmatic axis

many meanings ~ one form

many forms ~ one meaning

cumulation syncretism

extended exponence allomorphy

Table .. Deviations from biuniqueness in Russian nominals ‘brother’

Nominative Accusative Genitive Locative Dative Instrumental

‘mother’

Singular

Plural

Singular

Plural

brát brát-a brát-a brát’-e brát-u brát-om

brát’-j-a brát’-j-ev brát’-j-ev brát’-j-ax brát’-j-am brát’-j-am’i

mát’ mát’ mát’er’-i mát’er’-i mát’er’-i mát’er’-ju

máter-i mater’-éj mater’-éj mater’-áx mater’-ám mater’-ám’i

Note: For the sake of consistency, palatalized consonants are marked by ’ throughout, including cases of automatic palatalization not reflected in the orthography.

A useful starting point for studying the meaning–form relations in morphology is the idealized model that assumes a biunique mapping between meaning and form, with each morphological feature or ‘meaning’ expressed by only one form, and each form expressing only one such ‘meaning’ (cf. Dressler : ). Most languages display certain deviations from this ideal, and the cross-linguistic investigation of such deviations is one of the primary concerns of morphological typology. A classification of such deviations has been proposed by Carstairs (: –), see Table .. (See also Carstairs-McCarthy , , , b, .)

Table . with a subset of the Russian nominal declension illustrates all four types of deviations from biuniqueness identified by Carstairs. The expression of case and number values in Russian is cumulative and often syncretic (thus, in ‘brother’ AccSg = GenSg, in ‘mother’ NomSg = AccSg, GenSg = LocSg = DatSg = NomPl, and in both nouns AccPl = GenPl). The plural subparadigm of ‘brother’ involves extended (or multiple) exponence of number, since the plural is expressed both by the suffix ‑j- and by cumulative case–number endings. Finally, there are numerous instances of allomorphy of both stems and affixes, the latter clearly showing the distinction between two inflection classes.

Another point of departure for the typological investigation of morphological phenomena is the “canonical inflection” model proposed by Corbett () and further refined in Corbett (a, b; see also Bond, Chapter  this volume), which can be viewed as an extension of Carstairs’ classification; see Table ..
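Once a paradigm is spelled out cell by cell, two of Carstairs’ deviation types can be detected mechanically. The following sketch is an illustration only, not from the chapter: the transliteration and the segmentation into stem and ending are simplifying assumptions; it finds syncretism (one form spread over several cells) and stem allomorphy (several stem shapes for one lexeme) in the singular subparadigm of ‘brother’.

```python
# Illustrative sketch: detecting two of Carstairs' deviations from
# biuniqueness -- syncretism and stem allomorphy -- in a fragment of the
# Russian paradigm of 'brother'. Transliteration and the stem/ending
# segmentation are simplifying assumptions made for this example.
from collections import defaultdict

# cell -> (stem, ending); '' marks a zero ending
brat = {
    ("nom", "sg"): ("brat", ""),
    ("acc", "sg"): ("brat", "a"),
    ("gen", "sg"): ("brat", "a"),
    ("loc", "sg"): ("brat'", "e"),
    ("dat", "sg"): ("brat", "u"),
    ("ins", "sg"): ("brat", "om"),
}

def syncretisms(paradigm):
    """Group cells that share a whole wordform (one form ~ many meanings)."""
    by_form = defaultdict(list)
    for cell, (stem, ending) in paradigm.items():
        by_form[stem + ending].append(cell)
    return {form: cells for form, cells in by_form.items() if len(cells) > 1}

def stem_allomorphs(paradigm):
    """Collect distinct stem shapes (many forms ~ one lexical meaning)."""
    return sorted({stem for stem, _ in paradigm.values()})

print(syncretisms(brat))      # AccSg and GenSg share the form 'brata'
print(stem_allomorphs(brat))  # palatalized stem variant in the locative
```

The same two functions, applied to a full paradigm with segmentation into exponents, would also expose cumulation (one ending realizing case and number together) and extended exponence (number marked both by -j- and by the ending), which is why explicit paradigm data structures are a natural interface between description and typology.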





Table .. Corbett’s “canonical inflection” and deviations from it comparison across cells of a lexeme

comparison across lexemes

“canon” deviation

“canon” deviation

composition/structure same

fused exponence periphrasis

same

lexical material

same

stem alternations suppletion

different homonymy

inflectional material

different syncretism uninflectability

same

defectiveness overdifferentiation anti-periphrasis

inflection classes heteroclisis deponency

Most of these phenomena have been investigated from a cross-linguistic perspective by the Surrey Morphology Group (see http://www.smg.surrey.ac.uk/projects/), cf. Brown et al. (), Chumakina and Corbett () on periphrasis; Corbett (a), Corbett et al. () on suppletion; Baerman, Brown, and Corbett (), Baerman and Brown (a, b) on syncretism; Corbett (), Baerman (, ) on inflection classes; Baerman et al. () on deponency; Baerman, Corbett, and Brown () on defectiveness, and many others; a similar perspective with some non-trivial extensions is provided in Stump (a); cf. also Harris () for a typological study of multiple exponence. Though most of these phenomena have usually been considered by typologists and theoretical linguists as “exceptions” and “irregularities”, their cross-linguistic study has proven to be not only possible, but fruitful and instructive, by showing what types of mismatch between meaning and form are possible in morphological systems, how they interact with each other and with syntax, and what kinds of motivation may underlie them.

One of the extreme cases of form–meaning mismatch in morphology is so-called “distributed exponence” (Caballero and Harris : –). In this type of mismatch, the grammatical interpretation of a wordform is constructed through the unification of the meanings of several morphemes, each of which is underspecified with respect to particular feature values. Perhaps the most striking examples of this kind of morphological organization come from the Yam family of New Guinea (Evans , ). In Yam languages, the morphological features of participant person and number, aspect, and tense rarely have dedicated exponents, but are inferred from particular combinations of affixes and stem allomorphs, each associated with several distinct feature values.
An illustration is the Komnzo verbal form presented in Figure ., where four of the morphemes (including the lexical stem fath-) combined in the word map to various feature values in complex ways.

Another dimension of morphological diversity is the type of exponence that languages employ (cf. Trommer ). Concatenative or linear exponence by means of prefixes and suffixes, as well as reduplication,5 is the most common type of morphological expression cross-linguistically. However, various kinds of non-concatenative morphology also abound

5 In the sample of Rubino () there are five times as many languages with reduplication () as languages without ().

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi

   



y-    fath    -wr    -o    -th
[2|3 PL] > [3 SG MASC]  NPST  IPFV  ANDAT
'They hold him away.'

Figure .. Distributed exponence in Komnzo (Yam, Papua New Guinea)
Source: Döhler (: , Fig. .). Reproduced with permission.

Table .. Non-concatenative exponence in Dinka nouns

Absolutive Oblique st construct state nd construct state Allative Inessive-ablative

‘ground’

‘house’

‘fire’

pì ̰ ɲ pî ̰ ɲ pì ̰ ɲ pyε̰ ὲ ɲ pì ̰ ɲ pií ̰iɲ

ɰò̤t ɰô̤t ɰò̤n ɰɔ̤ ̀ɔn ɰó̤t ɰò̤t

mà̰ac mâ̰ac mà̰aɲ mà̰aɲ mεˆε̰ εc mέ̰εεc

Source: Andersen (: ).

in the languages of the world. These include infixation, vocalic and consonantal alternations, truncation, as well as non-segmental exponence such as stress and tone changes, and combinations thereof. Probably the best-known and most widely studied case of non-concatenative exponence is the Semitic root-and-pattern morphology (McCarthy ; Arad and Shlonsky , among many others). However, perhaps the most striking case of non-concatenative morphology comes from the Western Nilotic language Dinka (Andersen , , ). Dinka words are largely monosyllabic, but the language has remarkably elaborate morphological paradigms. Affixal exponence is almost absent in Dinka, and most morphological properties are expressed by means of alternations in vowel length, consonant and vowel quality, voice quality, and tone, cf. Table ..

Such exuberant non-concatenative morphology is instructive for descriptive linguists, who must be aware that investigating the morphology of a language may require sophisticated phonetic and prosodic analysis. It also presents challenges for morphological theories which assume linear morphological exponence to be the default case (e.g. Bye and Svenonius ) or regard affixal exponence as fundamentally distinct from stem alternations (e.g. Carstairs-McCarthy b, ). Non-concatenative morphology is also said to be a hallmark of sign languages, see e.g. Aronoff et al. (), Aronoff, Meir, and Sandler (), and Napoli (Chapter  this volume).

Orthogonal to type of exponence is the locus of marking, that is, the distinction between head-marking and dependent-marking introduced by Nichols (), cf. Bickel and Nichols (a; : –). Perhaps most importantly, this morphological property of 'locus', whose values are unevenly distributed across language families and linguistic areas, has been shown to correlate cross-linguistically with other typological variables such as basic word order and morphosyntactic alignment (Nichols ).


In sum, studying the relation between meaning and form in morphology has been a central issue in morphological research, and has led to a number of different typological classifications. While classic holistic classifications have proven inadequate, more useful approaches have studied meaning–form relations in morphology as departures from a biunique mapping between meaning and form, or as having more or less canonical properties. Other dimensions of morphological typology are the locus and type of morphological exponence, and here it is worth emphasizing that although concatenative exponence and dependent-marking are prominent in the more familiar European languages, non-concatenative expression and especially head-marking are widely attested in the world's languages and thus have to be accounted for by any theory of morphology.

. Syntagmatic relations in morphology

..................................................................................................................................

One of the traditional fields of morphological inquiry concerns the syntagmatic relations between the components of complex words. In this field, affix ordering has featured prominently, starting perhaps with Greenberg's (/) Universals # concerning the mutual order of inflectional and derivational affixes and # concerning the mutual order of case and number affixes (see Baker ; Bybee ; Muysken ; Stump , a; Cinque ; Mithun b; Paster ; Manova and Aronoff ; Spencer a: –; Manova ; for a general overview see Rice ).

Among the universal principles explaining cross-linguistic tendencies in affix ordering, Baker's () Mirror Principle, couched in the generative framework, and Bybee's () Principle of Relevance, formulated from an expressly functionalist perspective, both reflect the observation that if a language has words hosting more than one affix in sequence, the relative ordering of the affixes is largely steered by semantics. In many languages this is manifested in verbal affixes occurring in the order "(verbal root)-aspect-tense-mood-person" (Bybee : –). This order corresponds both to the meanings' decreasing degree of "relevance" to the meaning of the root and to their widening semantic scope (Bybee's "generality"). The much more fine-grained hierarchy of affixal positions proposed in the generative framework by Cinque () largely reflects the same observation. Moreover, in many languages affixes may admit variable order depending on their mutual scope, as in Adyghe (), where the habilitive ('can') and similative ('seem/pretend') suffixes can be permuted in accordance with their mutual scope.

()

Adyghe (Korotkova and Lander : , )
a. waŝwe-m zˆ waʁe qə-tje-s-xə-ŝwə-ŝwe.
   sky- star --.-take--
   'It seems that I can take a star from the sky.' (similative > habilitive)
b. waŝwe-m zˆ waʁe qə-tje-s-xə-ŝwe-ŝwə.
   sky- star --.-take--
   'I can pretend as if I am taking a star from the sky.' (habilitive > similative)
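The scope-transparent ordering illustrated by the Adyghe pair can be caricatured in a few lines: each suffix attaches outside the material it semantically scopes over, so permuting the scope relation permutes the surface string. The stem and gloss labels below are schematic stand-ins, not Adyghe data:

```python
# Toy sketch of scope-driven affix order (cf. the Mirror Principle):
# suffixes attach in inner-to-outer scope order, so surface position
# mirrors semantic scope. Forms and glosses are invented.
def derive(stem, suffixes_inner_to_outer):
    """Attach each suffix outside the material it scopes over."""
    form = stem
    for suffix in suffixes_inner_to_outer:
        form = f"{form}-{suffix}"
    return form

# similative ('seem') scoping over habilitive ('can'):
print(derive("take", ["HAB", "SIM"]))  # take-HAB-SIM
# habilitive scoping over similative:
print(derive("take", ["SIM", "HAB"]))  # take-SIM-HAB
```

Rigid templatic systems, discussed below, are precisely those where this scope-to-position mapping breaks down.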


Table .. The Bininj Gun-Wok verb structure   

Tense Subject Object

         E

Directional Aspect Miscellaneous I Benefactive Miscellaneous II Generic incorporated nominal Body part incorporated nominal Numerospatial Comitative Embedded verb stem

 + + +

Stem Reflexive/Reciprocal Tense-Aspect-Mood Case

obligatory “pronominal zone”

optional zone

obligatory “conjugation zone”

Source: Evans (a: –).

Table .. Layered vs. template morphology Diagnostics

Layered morphology Template morphology

Zero morphemes (significant absence)

No

Yes

Zero derivation

Yes

No

Monodeterminacy (one root, one head)

Yes

No

Only adjacent morphemes may influence each other

Yes

No

Morphemes cannot be sensitive to more peripheral morphemes

Yes

No

Usually encodes at most one argument

Yes

No

Scope-determined position

Yes

No

Source: Stump (a: ); Bickel and Nichols (: )

However, in many other languages affixes occur in a rigid order hardly amenable to a transparent synchronic motivation in terms of scope; cf. Table . showing the organization of the verbal word in Bininj Gun-Wok (Gunwinyguan, Northern Australia). The widespread occurrence of conventionalized affix orders has led researchers to postulate two types of morphological organization referred to as “layered morphology” vs. “template morphology” (Simpson and Withgott ; Stump a; Bickel and Nichols : –; Good ). The prototypical differences between these are presented in Table .; see Stump (a) for more details and examples.


Both layered morphology and template morphology are idealized concepts rather than concrete language types, since most languages with complex morphology present a mixture of both kinds of ordering. Thus, in Adyghe, mentioned above, the suffixes appear to be organized in a layered system, while the prefixes follow a more or less rigid template, cf. Korotkova and Lander (: ), with scope-based rearrangements nevertheless being possible for some prefixes as well, see Lander (: ).

The question of the ordering of morphological exponents is relevant not only for affixes and clitics (on the latter, see Simpson and Withgott ; Spencer and Luís a: –; for a description of a complex clitic system in an individual language, see e.g. Klamer  on Kambera), but for non-concatenative morphology as well. For instance, the non-linear morphology of Dinka is organized into a layered structure of successively applying operations, as shown in example ().

()

Dinka (Western Nilotic, South Sudan; Andersen : )
root (= plural)                                          lèc̰    'teeth'
root + singular                                          lê̤ec   voice quality shift, vowel lengthening
root + singular + construct state 1                      lê̤eɲ   nasal replacement
root + singular + construct state 1 + construct state 2  lε̤̂εɲ   vowel lowering
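The layered organization of such non-concatenative systems can be mimicked as a pipeline of stem-internal operations applied in order. The sketch below uses ASCII stand-ins and deliberately simplified rules; it is not a phonological analysis of Dinka:

```python
# Toy pipeline of successively applied stem-internal operations, loosely
# modeled on the layered Dinka derivation above (ASCII stand-ins only).
def lengthen(form):
    """Vowel lengthening, e.g. lec -> leec (stand-in for length + voice shift)."""
    v = next(ch for ch in form if ch in "aeiou")
    return form.replace(v, v * 2, 1)

def nasalize_final(form):
    """Nasal replacement of the final consonant, e.g. leec -> leeny."""
    return form[:-1] + "ny"

def lower(form):
    """Vowel lowering, e.g. e -> a (stand-in)."""
    return form.replace("e", "a")

def derive(root, operations):
    """Apply stem-internal operations in layered (inner-to-outer) order."""
    for op in operations:
        root = op(root)
    return root

print(derive("lec", [lengthen]))                          # leec
print(derive("lec", [lengthen, nasalize_final]))          # leeny
print(derive("lec", [lengthen, nasalize_final, lower]))   # laany
```

The design point is that the exponents here are operations on the stem, not concatenated pieces, yet they still compose in a fixed layered order.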

Besides morpheme ordering, the worldwide distribution of prefixation vs. suffixation has received much attention. It is received wisdom that suffixes are more common cross-linguistically than prefixes (Dryer a), and explanations for this preference on the basis of psycholinguistics (Hall ; Hawkins and Cutler ) and prosody (Himmelmann ) have been proposed. It has also been shown that different morphological categories prefer suffixal exponence to differing degrees (cf. Bybee, Pagliuca, and Perkins ; Bakker and Siewierska ; Dryer b, c, d), which implies that the choice of exponence is motivated not only by ease of processing.

Another aspect which has gained prominence in typology relates to the quantification and cross-linguistic comparison of syntagmatic morphological complexity. Starting from the classic morpheme-to-word ratio proposed by Greenberg (), this field of inquiry has been extended by Nichols (, ), who considers such parameters as the sum of head-marking and dependent-marking constructions or the number of inflectional categories expressed in the verb (Bickel and Nichols b). Such an approach to morphological complexity is, however, fairly limited in that it disregards the paradigmatic aspects of morphology, to which we will now turn.
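Greenberg's morpheme-to-word ratio mentioned above is straightforward arithmetic over segmented text; a minimal sketch with an invented pre-segmented example (real measurements of course presuppose a principled morphological segmentation):

```python
# Greenberg-style morpheme-to-word ratio (index of synthesis).
# '-' marks morpheme boundaries in an invented, pre-segmented sentence.
def synthesis_index(segmented_words):
    """Mean number of morphemes per word in a segmented text sample."""
    morphemes = sum(len(word.split("-")) for word in segmented_words)
    return morphemes / len(segmented_words)

# 'the farm-er-s plough-ed': 6 morphemes over 3 words -> 2.0
print(synthesis_index(["the", "farm-er-s", "plough-ed"]))  # 2.0
```

As the surrounding discussion notes, such a single number captures only the syntagmatic side of complexity and says nothing about paradigmatic structure.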

. Paradigmatic relations in morphology

..................................................................................................................................

Morphological paradigms have been prominent in traditional and pedagogical grammar since Antiquity, and have become an object of theoretical inquiry in work such as Matthews () and Anderson (). These authors have advocated the so-called Word-and-Paradigm


models of morphology (see also the typologically oriented work of Plank  and the contributions to Plank ; for more details see Blevins ; Blevins, Ackerman, and Malouf, Chapter  this volume, and Stump, Chapter  this volume). Though paradigms are looked at with skepticism by some generative morphologists (cf. e.g. Bobaljik ), such phenomena as syncretism, suppletion, inflection classes, deponency, etc. cannot be neglected by any theory of morphology aiming at empirical and cross-linguistic adequacy (cf. e.g. Ackerman, Blevins, and Malouf  or Stump a).

It is precisely the paradigmatic dimension of morphology, in particular such phenomena as "morphomic" (opaque) allomorphy and inflection classes (Aronoff ; Carstairs-McCarthy ), that has been called "autonomous morphology" (cf. Maiden ; Cruschina, Maiden, and Smith ). These features of morphology are claimed to be irreducible to other components of grammar (cf. Stump a) and to constitute one of the core domains of linguistic complexity (cf. Dahl ; Baerman, Brown, and Corbett b).

The broad typological investigation of various aspects of paradigmatic morphology, in particular of deviations from the "canonical inflection" model, has mainly been carried out by the Surrey Morphology Group (see §.). Besides that, work such as Cysouw () on the paradigmatics of verbal person marking and Veselinova (, a, b) on verbal suppletion deserves attention. The latter, based on a large cross-linguistic sample, shows that even such an apparently irregular phenomenon as suppletion is subject to systematic typological generalizations, promising fruitful insights in other related domains as well.

Akin to the topic of suppletion is the study of stem alternations (Blevins ; Aronoff ; Spencer ). While this topic has received most attention in Romance linguistics (see first of all the work by Martin Maiden), it is certainly an important typological issue (Carstairs : ch. ; Stump : ch. , a: chs , ; Carstairs-McCarthy : ch. ). Bybee (: ) and Veselinova () have claimed that cross-linguistically suppletive stems tend to cut morphological paradigms along such major inflectional distinctions as singular vs. plural number, perfective vs. imperfective aspect, or past vs. non-past tense. On the other hand, the work of Maiden () and Carstairs-McCarthy () has suggested that even "morphomic" stem alternations (including suppletion), not associated with any coherent set of morphosyntactic properties, play an important role in grammars and are not fully arbitrary, as evidenced for example by their diachronic stability.

Another currently prominent line of inquiry concerns inflection classes. Starting in the s with the question of the possible limits on the number of inflection classes (Carstairs , : chs , ; Carstairs-McCarthy , : ch. ), this field has substantially expanded its empirical database in the recent work of Blevins (), Stump (b), Stump and Finkel (), Finkel and Stump (), and Baerman (, , ). In particular, it has been shown that the fairly restrictive principles of paradigmatic economy proposed by Carstairs-McCarthy (, ) seem to be violated by languages with exuberant inflection class systems like Nuer (Western Nilotic, South Sudan) or Seri (isolate, Mexico); cf. Table ., showing how just two Nuer affixes can create a large number of inflection classes (only a small subset of actual Nuer declensions is shown in the table) when the distribution of these affixes is not tied to particular morphosyntactic values.
A new line of analysis of inflection class systems, which seems very promising from both a theoretical and a typological perspective, applies the insights of information theory. This type of work asks about the mutual predictiveness of particular forms in the paradigm (e.g. the typology of "principal part" systems proposed by Finkel


Table .. Some Nuer inflection classes

           

‘milk’

‘kind of tree’

‘potato’

‘hair’

cak caak caak ca̠k ca̠k ca̠k-ni

kε ̈c kε ̈c-ka ̈ kε ̈c-ka ̈ kεεc kεεc-ni kεεc

tac tac-kä tac tac-ni tac-ni tac-ni

nhim nhi̠m nhi̠m-kä nhiäm nhiäm-ni nhiäm-ni

Source: Baerman (: ).

and Stump ) and quantitatively compares inflection class systems in terms of entropy (Ackerman and Malouf ), taking into account such extramorphological parameters as type and token frequency of particular inflection classes. This line of inquiry requires a close collaboration between typologists, morphologists, and computational linguists (cf. Walther ). The entropy-based approach to morphological paradigms has also proven useful for the analysis of defectiveness, apparently an irregular quirk par excellence; see Sims () for a view of defectiveness as a phenomenon amenable to systematic generalizations. Alongside inflection classes, which constitute a prime example of lexically determined allomorphy, natural languages abound in phonologically and grammatically conditioned allomorphy of stems and affixes. Phonologically conditioned allomorphy is a relatively well-understood phenomenon; see for example Paster (), Nevins (). However, less is known about the types of grammatically conditioned allomorphy and the constraints on it, see for example Carstairs-McCarthy (), Bonet and Harbour (). In addition, it has been argued that allomorphy can be sensitive to the lexical semantics of the stem in principled ways. For instance, Aristar () has shown that longer allomorphs of case markers tend to appear on nominals whose inherent meaning is not directly compatible with the function of the case. This promising topic has not yet received the attention it deserves, though cf. Arkadiev () for a typological study of the allomorphy of ergative case. Last but not least, morphological entities are often polysemous or polyfunctional. Indeed, the polyfunctionality of inflectional (and, more marginally, derivational) elements has received most attention in linguistic typology, see Haspelmath () and Evans () for overviews,6 as well as numerous contributions to Rainer et al. () and Müller et al. (). 
Cross-linguistic investigations have discovered recurrent patterns of polysemy of many morphological categories ('grams'), and some of these have been linked to diachronic paths of grammaticalization and semantic development (e.g. Bybee, Perkins, and Pagliuca ), thus revealing systematic correspondences between aspects of morphological form and linguistic meaning.

In sum, the paradigmatic dimension in morphology, which has been prominent in traditional grammar but largely neglected in early morphological theorizing and cross-linguistic comparison, is currently enjoying a revival of interest from both theoretical and typological perspectives. This multifaceted field of inquiry requires sophisticated methodology (including quantitative measures and computational modeling) and promises important insights into the structure and development of morphological systems and morphological complexity (cf. e.g. Nichols to appear).

6 The most comprehensive typological overview of grammatical polysemy is perhaps Plungian (), existing only in Russian and in Croatian and Lithuanian translations.
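As a concrete illustration of the entropy-based comparison of inflection class systems discussed in this section, the sketch below computes the entropy of individual paradigm cells for a toy system; the classes, suffixes, and frequencies are all invented, not drawn from Nuer or any other language:

```python
import math

# Toy inflection-class system: each class realizes two paradigm cells
# (SG, PL) with a suffix; classes are weighted by type frequency.
CLASSES = {
    # class: (SG suffix, PL suffix, type frequency)
    "I":   ("-0",  "-ni", 60),
    "II":  ("-ka", "-ni", 30),
    "III": ("-ka", "-0",  10),
}

def cell_entropy(cell_index):
    """Entropy (bits) of the exponent found in one paradigm cell,
    weighted by inflection-class frequency."""
    total = sum(freq for _sg, _pl, freq in CLASSES.values())
    dist = {}
    for sg, pl, freq in CLASSES.values():
        form = (sg, pl)[cell_index]
        dist[form] = dist.get(form, 0) + freq / total
    return -sum(p * math.log2(p) for p in dist.values())

# SG cell: -0 (p=0.6) vs. -ka (p=0.4) -> about 0.971 bits
print(round(cell_entropy(0), 3))  # 0.971
# PL cell: -ni (p=0.9) vs. -0 (p=0.1) -> about 0.469 bits
print(round(cell_entropy(1), 3))  # 0.469
```

In work of the Ackerman and Malouf type, the interesting quantity is the conditional entropy of one cell given another (how well forms predict each other), for which this per-cell entropy is the basic building block.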

. Conclusions

..................................................................................................................................

Despite some notable achievements, morphological typology is still in a state of development. In our view, the major challenge for both morphological theory and morphological typology is to find a good balance between analytic and conceptual depth, on the one hand, and breadth of empirical coverage, on the other. While most of the non-trivial theoretical insights in morphology are based on data from a limited set of languages (fortunately, also including non-European ones), large-scale cross-linguistic studies of morphology have rarely gone beyond somewhat superficial observations (Harris  being a notable and welcome exception). A balance between theory and typology can only be achieved by the joint efforts of typologists, theoreticians, and descriptive linguists.

Morphological typology, morphological theory, and descriptive and documentary linguistics mutually enrich each other in many respects. If linguists describing individual languages are aware of the analytical notions, methodological insights, and problematic issues of current morphological theory and typology (such as the multidimensional rather than binary nature of the traditional distinctions word vs. affix, inflection vs. derivation, or agglutination vs. flexion), they will produce more sophisticated and empirically adequate descriptions. In turn, such descriptions will feed both theory and typology.

Advances in theoretical and typological research go hand in hand with new trends in descriptive and documentary linguistics. Current theorizing and cross-linguistic comparison require access not only to good grammatical descriptions, but also to dictionaries explicitly indicating such morphological information as inflection class membership, stem alternations and suppletion, or defectiveness. Theoreticians and typologists also need access to morphologically annotated corpora.
With respect to this last point, it should be mentioned that different types of morphological organization pose different problems for tasks like tokenization (linked to the definition of wordform), lemmatization (related to the inflection/derivation divide), and tagging, see e.g. Arkhangelskiy and Lander (). These problems can only be solved through collaboration between theoreticians, computational linguists, and typologists.

Morphological typology is indispensable for morphological theory, as typology is a testing ground for analytical models and hypotheses. Here the goals of the two enterprises, still conceived of by some as fundamentally distinct, largely converge. Morphology, which by its very nature is neither present in all languages nor cross-linguistically uniform, hardly admits overarching universal generalizations and much more readily provides answers to the "what's where why" type of question (Bickel : ) usually asked by typologists. At the same time, theoretical conclusions can only be valid when they are based on an understanding of the kinds of morphology (including exponence, morphotactics, allomorphy, and paradigmatic structure) found in certain language families and linguistic areas, as well as on an account of the ways morphological systems diachronically develop


through inheritance or contact; cf. Gardani (), Johanson and Robbeets (eds. ), Gardani, Arkadiev, and Amiridze (). Morphological theory needs morphological typology just like typology profits from theory, while good morphological descriptions have to be cross-linguistically and theoretically informed.

A We are grateful to Jenny Audring, Geert Booij, Francesca Masini, Gabriele Schwiertz, and an anonymous reviewer for comments and corrections. All faults and shortcoming are ours.


  ......................................................................................................................

                ......................................................................................................................

 . ´ı

. G   

..................................................................................................................................

The assumption that creoles are morphologically poorer when compared to other languages has been deeply rooted in the literature. Seuren and Wekker (: ), for example, claim that morphology is 'essentially alien to creole languages', and for Thomason (: ) creole languages 'either lack morphology entirely or have very limited morphological resources compared with those of the superstrate and other input languages'. This chapter, therefore, will come as a surprise to all those who have grown familiar with the idea that creoles have either very little morphology or none at all.

A radical change in attitude towards creole morphology has taken place during the last two decades, based on solidly grounded and robust empirical evidence (e.g. Kihm ; Plag ; Siegel a, b; Luís , and references therein). Within the domain of word-formation, it has been firmly established not only that morphology is not absent but also that large proportions of the creole lexicon consist of morphologically complex words that do not necessarily express transparent and predictable semantics. With respect to inflection, the observed similarities between creoles and non-creoles have revealed that, despite the reduced size of paradigms, creoles developed forms of morphosyntactic expression (including cumulative, syncretic, and morphomic exponence) that are typical of inflecting languages. Current research, therefore, has been paying more attention to the nature and typology of morphological patterns. Ultimately, the acknowledgement that creoles exhibit morphological phenomena that are not necessarily exclusive to creoles has provided the much-needed empirical basis for formal generalizations about creole word structure.
Throughout this chapter, we explore the interaction between creole morphology and morphological theory by drawing on empirical evidence which illustrates that morphological similarities exist between creoles and non-creoles. Such evidence shows that morphological patterns may be used for the creation of new lexemes (through word-formation), that morphosyntactic features may be mapped onto existing lexemes (by means of inflection), or that derived words in creoles may be semantically non-compositional while inflected words may exhibit form–meaning mismatches and be part of non-predictable paradigms. Conceptually, the morphological evidence will be used to claim that creole word structure is just as principled as the morphology of non-creole languages, and that it can be naturally accounted for by applying the same formal apparatus that is used for the analysis of non-creole languages.

The structure of the chapter is as follows. Central issues about creole genesis and the development of creole morphology are surveyed in §.. An overview of creole word-formation and inflection is provided in the remainder of the chapter (§§., ., and .), based on a carefully selected range of morphological phenomena drawn from reduplication, derivation, and inflection. In §., on full reduplication, we focus on morphological patterns that are typically associated with creole languages and which, unlike any other, are virtually absent from the superstrate languages. Section . turns to derivational morphology and examines the extent to which meaning and form have been taken over from the superstrate or the substrate languages. Section . then deals with inflectional morphology, known to be rare in creoles and perhaps therefore less investigated. Each of the three sections surveys defining properties of the phenomena and briefly illustrates a formal account cast within a current morphological theory.1 We conclude our chapter with a short summary in §..

. Issues in creole genesis and morphology

..................................................................................................................................

.. The genesis of creole languages

One of the defining properties of creole languages is their sociolinguistic origin: the contact between the socially dominant language of the settlers and the disempowered languages of the dominated groups, which gave rise to heterogeneous speech-communities that lacked a common native language. Another widely accepted property of creoles is that they have derived from rudimentary adult L2 varieties (or pidginized interlanguages) during the early stages of language contact, for the purpose of interethnic communication (Siegel a).2 Creole genesis thus involves two successive stages of development: first, adults develop a rudimentary vehicle for communication; and then a new generation of first language learners acquires this rudimentary language as their native language. Although it is not clear how long

1 Within the spirit of the chapter, we have refrained from adopting only one morphological theory. We propose analyses within Construction Morphology (Booij a) for full reduplication (§.) and derivation (§.), and within Paradigm Function Morphology (Stump , a) for inflection (in §.). 2 While some of these adult L2 varieties developed stable grammars and preserved their status as lingua franca (Siegel ), many ceased to exist once they served as input to creole formation.

the transition from a pidginized L2 to a fully fledged creole may take (given that some creoles have taken longer to develop than others), it has become increasingly evident that pidginized interlanguages served as input to creole formation. During the second stage of creole genesis, processes of first language acquisition seem to play a role in shaping creole grammar, especially by triggering the expansion of the reduced grammar of the pidginized adult L2 variety (Veenstra ; Siegel b; Plag ).3

Geographically, creoles are mostly spoken in coastal areas or islands located in the South Atlantic, South Pacific, Indian Ocean, and the Caribbean Sea, as a result of the establishment of plantations (with slave or indentured labour) that provided the context for contact between geographically very distant communities and their languages. While some creoles are already extinct (e.g. Berbice Dutch or Cochin Indo-Portuguese) and many others may be on the verge of extinction, others are dominant native languages (e.g. Kabuverdianu, Guinea-Bissau Kriyol).

.. The bias against creole word structure

The debate about the nature of creole morphology is primarily rooted in the observation that there are perceptible differences between the morphology of creoles and the morphology of their superstrates (i.e. the languages of the colonial powers). Indeed, it has been repeatedly noted that, compared to their superstrates, creoles have less bound inflectional morphology, that is, fewer affixes per word (one affix, rarely two), fewer inflected forms per paradigm, and fewer morphosyntactic categories expressed through inflection (Kihm ; Luís ; Plag ). As Siegel notes, 'it is the comparative absence of verbal inflection in creoles lexified by European languages that has given the impression that creoles are poorer and simpler than other languages' (Siegel : ). While creoles have significantly downsized and simplified the inflectional paradigms of their superstrates, evidence has abundantly shown that creole morphology in general (including not only inflection but also word-formation) is not necessarily poorer or simpler. In the realm of word-formation, for example, a large proportion of the creole lexicon consists of morphologically complex words (Lefebvre ), and lexeme formation has been shown to be creative and productive (Plag ).

One crucial linguistic property of creoles is the fact that most of the lexicon and morphological formants are derived from the superstrate languages (Lefebvre ), while grammar has been taken over from the substrate languages (i.e. the languages of the dominated groups). As will become clear throughout this chapter, however, the meanings or functions of creole words and affixes are not necessarily attested in the superstrates. Both within word-formation and inflection, not all features encoded by superstrates are encoded by creoles, and features that are encoded by creoles are not necessarily encoded by superstrates (Siegel ). In effect, creoles may have inherited morphological distinctions from their substrate languages, which may easily go unnoticed to the untrained eye.

3 The debate about creole genesis has led to a number of different theories. For an overview, see Holm () and, more recently, Lefebvre ().

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi



 . ´ ı

An accurate understanding of the full potential of creole morphology, therefore, should rely on an accurate assessment of the full contribution of the substrate languages. In the remainder of this chapter, we survey a selected range of morphological phenomena which show that there are numerous morphological commonalities between creoles and non-creoles that provide a common ground for morphological analysis.

. Full reduplication

One of the word-formation processes most typically associated with creoles is known as full reduplication and involves the doubling of a full morphological base.4 One common misconception about the origins of full reduplication is that it developed for immediate communication among adult L2 speakers with a rudimentary knowledge of the target language (Holm ); this idea is contradicted by the fact that full reduplication is ‘virtually absent in pidgins’ (Bakker : ). Equally problematic is the claim that full reduplication developed under the influence of the substrate languages. Although full reduplication is quite predominant in most of the substrate languages, closer analysis has revealed that not all creoles have developed reduplication and also that some creoles have developed reduplication patterns that are not present in their substrate languages (contributions to Kouwenberg ; Kouwenberg and LaCharité ).

Despite its somewhat unclear genesis, full reduplication has nonetheless received much attention within the creole literature. In §.., we survey some of the defining semantic and morphological properties of full reduplication by drawing on a selection of the literature. In §.., we then illustrate how previous research has attempted to account for full reduplication within current morphological theory.

.. Survey

... Meaning and iconicity

Full reduplication is generally associated with meanings that have a high degree of iconicity, including intensity, repetition, and continuity (Kouwenberg ). A closer look, however, reveals that iconicity is not a straightforward concept and that different degrees of iconicity exist, depending on various linguistic factors. At some level of generalization, Kouwenberg and LaCharité (: ) observe that iconicity is dependent on the word class of the base: if the base is a verb, then reduplication expresses continuity or repetition (cf. Berbice Dutch in (a)); if the base is a noun, it expresses plurality (cf. Jamaican in (b)); and if the base is an adjective, it expresses intensity (cf. Papiamentu in (c)).5

4 For reasons of space, this section will only address full reduplication. For evidence on partial reduplication, see contributions in Kouwenberg () and references therein.
5 Throughout this chapter, the following abbreviations will be used to identify the creole languages from which the data has been drawn: BD = Berbice Dutch, Ch = Chabacano, Fg = Fongbe, H = Haitian, Ha = Hawai‘i Creole, Jm = Jamaican, K = Kriyol, KP = Korlai Indo-Portuguese, Kv = Kabuverdianu, P = Portuguese, Pp = Papiamentu, Sr = Early Sranan.


     ()

a. wengi ‘walk’ / wengi-wengi ‘walk up and down’ b. boʃ i ‘bundle’ / boʃ i-boʃ i ‘separate bundles’ c. kayente ‘hot’ / kayente-kayente ‘really hot’

 (BD) (Jm) (Pp)

Within a given word class, the meaning of full reduplication may be further determined by the semantics of the base. Non-punctual verbs, as in (a), express continuity, while punctual verbs, as in (), express an iterative reading (Kouwenberg and LaCharité : ).

() tiif ‘steal’ / tiif-tiif ‘steal repeatedly’   (Jm)

Within the class of adjectives, less semantically predictable patterns may be illustrated. Whereas in (c), the reduplicated adjective expresses a purely iconic reading, in (a–b) reduplicated adjectives convey attenuative or diminutive readings (of the kind ‘less than fully X’ or ‘not entirely X’) (Kouwenberg and LaCharité : ).

()
a. blanch ‘white’ / blanch-blanch ‘whitish’   (H)
b. yala ‘yellow’ / yalayala ‘yellowish’   (Jm)

More obvious departures from the iconic form–meaning relationship take place with category-changing reduplication. The examples in () illustrate less transparent and less predictable interpretations that can be assigned to deverbal adjectives: participial in (a), augmentative in (b), and diminutive in (c) (Kouwenberg and LaCharité : ).

()
a. kata ‘scatter’ / kata-kata ‘scattered’   (Jm)
b. kot ‘cut’ / koti-koti ‘much cut; shredded’   (Jm)
c. kotti ‘cut’ / kotti-kotti ‘in small pieces’   (Sr)

Further examples of category-changing reduplication are illustrated in (), with deverbal nouns that convey a non-iconic semantics: result in (a), instrument in (b), and cause in (c). In these cases, even though there is a semantic relation between the base and the corresponding reduplication, the deverbal nouns are no longer productive and are best regarded as lexicalized forms (Kouwenberg and LaCharité : ).

()
a. bali ‘shout’ / bali-bali ‘noise’   (Sr)
b. doro ‘sieve’ / doro-doro ‘sieve, sifter’   (Sr)
c. ich ‘to itch’ / ich-ich ‘dry rash’   (Jm)

... Morphological form and behaviour

The doubling of identical parts raises an important methodological question, namely whether the two parts give rise to one complex word (i.e. morphological reduplication) or whether they form a string comprising two identical words (i.e. syntactic reiteration).6

6 For convenience, we are here assuming a very simplified distinction between morphological and syntactic iteration.




 . ´ ı

Applying straightforward semantic, morphosyntactic, phonological, and morphological criteria may help to clarify this question. Semantically, for example, if a doubled sequence expresses non-iconic meaning, that should indicate that it constitutes an independent lexeme which has been derived through word-formation rules (see reduplicated adjectives in Haitian and Jamaican in ()). Morphosyntactically, the ability to change the category of the input provides evidence that full reduplication constitutes a word-formation process (see category-changing reduplication in ()–()). Phonological evidence (i.e. lexical stress, stress shift, and tone) can also be used to determine whether full reduplication is word-formation or syntactic iteration. For example, in syntactic iteration, each iterated word must carry primary lexical stress, given that each word functions as an autonomous prosodic word; in morphological reduplication, by contrast, lexical stress is assigned to the whole reduplicated sequence. In (), lexical stress falls on the penultimate syllable of the deadjectival nouns (Dijkhoff : –).

()
finifíni (lit. ‘fine-fine’) ‘hairy parts of a cactus’   (Pp)
mòlimòli (lit. ‘soft-soft’) ‘plant with small soft fruit’   (Pp)

Stress shift can indicate whether a given reduplicated sequence constitutes an independent word form or a syntactic sequence. In Kriyol, the derivation of deverbal nouns through conversion of reduplicated verbs triggers lexical stress shift: while reduplicated verbs are stressed on the last syllable, as in (a), in deverbal nouns lexical stress falls on the first syllable of the second base, as in (b) (Luís : ).

()
a. sukundi sukundí ‘hide habitually’   (K)
b. sukundi súkundi ‘hide and seek (the game)’   (K)

Phonological evidence can also be provided by tone, as in the English-based creole Pichi, where morphological reduplication involves tone deletion and is restricted to specific word classes, whereas syntactic iteration involves no tonal change and is open to all word classes (Yakpo ).

In addition to semantic, morphosyntactic, and phonological criteria, we can also use morphological diagnostics. A given reduplicated sequence has morphological status if it can serve as input to other morphological rules. This ability has been observed in Kriyol, where reduplicated verbs can undergo conversion and inflection. In () above, the reduplicated verb is converted into a reduplicated noun, triggering lexical stress shift from the last syllable of the input base to the first syllable of the second base. An example of a reduplicated word serving as input to suffixation is given in (), where the reduplicated verb form takes the passive marker ‑du (Luís : ).

()
a. suta sutá ‘attack repeatedly’   (K)
b. suta sutá-du ‘was/were attacked repeatedly’   (K)

The interaction between reduplication and other morphological operations can also be observed in Berbice Dutch, with evidence from derivation and inflection. More specifically, in (a) the reduplicated adjective is transformed into a noun by means of the nominalizing suffix ‑jε (Kouwenberg : ), and in (b) an inflectional perfect suffix ‑tε is attached to a reduplicated verb (Kouwenberg : ).


     ()



a. kal-kal-jε (< kali-kali-jε) small-small- ‘small small one’

(BD)

b. kop-kop-tε (< kopu-kopu-tε) buy-buy- ‘bought [land] in patches’

(BD)

.. Analysing full reduplication

Even though full reduplication can be an instance of syntactic reiteration, the examples given in §.. have shown undoubtedly that many instances of full reduplication result in the creation of new lexemes in creoles. In this section, we briefly illustrate how the form and meaning of full reduplication may be analysed as lexeme formation. We will draw on the proposal made by Booij (a) for full reduplication in Afrikaans within Construction Morphology (see also Masini and Audring, Chapter  this volume) and show that it not only accounts for full reduplication in creoles but also captures the lexemic similarities between full reduplication and other word-formation processes.

The insight that full reduplication constitutes the doubling of a full morphological base may be captured by co-indexing the base and the reduplicant, as shown in the general meta-schema in () (Booij a: ).

() [[X]Xi [X]Xi]Xj ↔ [REDi]j

Example () also accounts for the holistic semantics of full reduplication through a variable j which associates the overall meaning of the construction (on the right side of the double arrow) with the reduplication pattern (on the left side). It follows from this coindexation that the meaning of the new lexeme cannot be computed from the meaning of its parts but must be specified as a holistic property of the whole reduplicated word. The observation that full reduplication can either be category-preserving or categorychanging is captured in () by instantiating the variables X with the word class of both the base and the derived word (cf. ()). From the application of this schema, a reduplicated noun is derived through the doubling of a nominal base (cf. (a)), whereas a reduplicated adjective is derived through the doubling of a verb (cf. (b)). ()

a. [ [N]Ni [N]Ni]Nj $ [REDplurality i]j (cf. (b)) b. [ [V]Vi [V]Vi]j $ [REDdiminutive i]j (cf. (c))

For a detailed representation of the semantics of full reduplication, subschemas may be needed that express the different degrees of semantic iconicity and, perhaps ideally, also capture the semantic dependencies that may exist between the different interpretations (cf. Luís : ). By contrast, non-productive patterns with lexicalized meanings might best be captured by formulating independent representations, as in ().

() [[sukundi]Vi [súkundi]Vi]Nj ↔ [hide and seek]j   (cf. (b))
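The co-indexation at work in such schemas can be mimicked computationally, purely as an illustration: a reduplication schema pairs a doubling operation over the form with a holistic meaning assigned to the whole construction. The sketch below is not part of Construction Morphology itself; the class names, the hyphenated output form, and the meaning labels are our own assumptions (and boshi transliterates boʃi).

```python
# Illustrative sketch: a Construction Morphology reduplication schema
# modelled as a form–meaning pairing. All names are assumptions.
from dataclasses import dataclass

@dataclass
class Lexeme:
    form: str
    category: str  # word class, e.g. "N", "V", "A"
    meaning: str

@dataclass
class ReduplicationSchema:
    input_cat: str         # category the schema selects (the co-indexed base)
    output_cat: str        # category of the whole reduplicated word
    meaning_template: str  # holistic meaning assigned to the construction

    def apply(self, base: Lexeme) -> Lexeme:
        if base.category != self.input_cat:
            raise ValueError(f"schema selects {self.input_cat} bases")
        # Full reduplication doubles the base; the meaning is specified as a
        # property of the whole word, not computed from the parts.
        return Lexeme(
            form=f"{base.form}-{base.form}",
            category=self.output_cat,
            meaning=self.meaning_template.format(base=base.meaning),
        )

# [[N]Ni [N]Ni]Nj <-> [REDplurality i]j, cf. Jamaican boshi 'bundle'
plural_redup = ReduplicationSchema("N", "N", "separate instances of {base}")
boshi = Lexeme("boshi", "N", "bundle")
print(plural_redup.apply(boshi).form)  # boshi-boshi
```

A category-changing schema differs only in having distinct input and output categories, which is why the same machinery covers both cases.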




 . ´ ı

The property of reduplicated words to undergo word-formation or inflection, mentioned in §.. above, can also be formalized by using Booij’s (a) schemas. For example, in Guinea-Bissau Kriyol, as shown in (), reduplicated verbs can passivize. To capture the interaction between doubling and suffixation, Luís () proposes the schema unification in (), adopting the insights from Booij (a: –).

() [[[V]i [V]i]Vj du]Vk ↔ [PASS [REDiteration i]j]k   (cf. (b))

In (), then, the morphological description on the left-hand side of the arrow indicates that the doubled verb undergoes suffixation by attaching the passive maker ‑du, while the semantic description on the right-hand side captures the scope of PASSk over REDiteration j. In sum, the evidence provided in our brief overview has shown that full reduplication in creoles behaves like genuine lexeme formation which follows from general principles of word-formation.7

. Derivation

Our main goal in this section will be to survey the nature and typology of derivational affixes in creoles (§..) and discuss some of the challenges posed by creole derivation to morphological theory (§..).8 Before that, however, one caveat is in order. Unlike full reduplication, derivational morphology is quite abundant in the European superstrate languages. Creoles have massively borrowed lexemes and affixes from their superstrate languages and, as a result, it is not uncommon to find that a given base–affix combination exists both in the creoles and in the superstrates. Such resemblance, however, may just be phonological. As will be shown in this section, creole affixes may develop creole-specific meanings and functions, and may sometimes simply lexicalize and lose their morphological status. It is therefore essential to determine whether a given affix is effectively native to a creole. Section .., therefore, will start with a short summary of some of the diagnostics that may be used to identify the internal structure of derived words in creoles.

.. Survey

Diagnostics have been developed by Brousseau, Filipovich, and Lefebvre () and Lefebvre (: –) to determine whether a given affix (of superstrate origin) is part of creole morphology. Following their insights, native affixes must ideally exhibit one of the following properties: (a) attachment to roots that are not of superstrate origin; (b) selection of superstrate bases that do not combine with the affix in the superstrate language; (c) placement in a position that is different from the position of the corresponding affix in the superstrate; and (d) ability to derive words whose semantic and syntactic properties are different from the corresponding words in the superstrate.

7 Other theoretical accounts of full reduplication would have been possible. See Inkelas and Zoll () on partial and full reduplication, and Botha () on reduplication in Afrikaans.
8 Compounding and conversion are very frequent as well, but will be left unaddressed in this chapter for reasons of space. See Brousseau, Filipovich, and Lefebvre (), Dijkhoff (), DeGraff (), Plag (), Braun and Plag (), Lefebvre (), Steinkrüger (), Braun ().

... Superstrate affixes

Derivational affixes of superstrate origin are illustrated in ()–(), with evidence from Haitian (Lefebvre , ). In (), the suffix ‑è (from the French agentive suffix ‑eur, as in travaill-eur ‘worker’) derives agentive nouns from dynamic verbs; in (), the suffix ‑e (from the French suffix ‑er [e], as in fêt-er ‘to celebrate’) derives denominal verbs; and in (), the suffix ‑man (from the French suffix ‑ment [mã], as in admirable-ment ‘admirably’) derives adverbs from adjectival bases (Lefebvre : –).

() konsey-è ‘counsellor’ (< konseye ‘to counsel’)   (H)

() betiz-e ‘to talk nonsense’ (< betiz ‘nonsense’)   (H)

()
a. fin-man ‘completely’ (< fin ‘complete’)   (H)
b. menm-man ‘equally, the same’ (< menm ‘same’)   (H)

Crucially, these base–affix combinations either do not exist in French or, if they do, they have a different meaning (Lefebvre ). A further example of a base–affix combination that does not exist in the lexifier is seen in (a). In Haitian, the attributive suffix ‑è (from French ‑eur, as in farc-eur ‘someone who makes jokes’) may attach to bases of French origin, such as skandal ‘scandal’. However, as Lefebvre (: ) observes, there is no derived word in French corresponding to Haitian eskandal-è. But there is an identical word in Fongbe, as shown in (b), where the affix’s behaviour seems to be modelled on the substrate.

()
a. eskandal-è ‘loud, rowdy’   (H)
b. zigidi-no ‘loud, rowdy’   (Fg)

In creoles with more than one superstrate, a given affix may attach to bases from different superstrates. In Papiamentu, Spanish-based affixes may also attach to bases borrowed from English and Dutch (Dijkhoff : –). In (), the nominal suffix ‑ero, which is used to derive human nouns, attaches to bases such as blek- (from Dutch ‘tin’) and shap- (from English ‘shop’). In (), the suffix ‑eria, which is used to derive places, attaches to buk- (from Dutch ‘book’) and feks- (from English ‘fix’).9

()
a. blek-ero ‘blacksmith’   (Pp)
b. shap-ero ‘shop owner’   (Pp)

()
a. buk-eria ‘bookshop’   (Pp)
b. feks-eria ‘shoe-repair shop’   (Pp)

9 Although very little is known about the diachrony of creole morphology, one reviewer points out that this kind of morphology may have been a late addition, so that in the earlier stages creole speech was far less complex morphologically speaking.




 . ´ ı

... Morphologized free words

It is not uncommon for derivational affixes to derive from the morphologization of free (superstrate) words. The derivational morphology of Early Sranan developed solely by converting English words into affixes (Braun : –). For example, the agentive suffix in (), rather than being derived from the English agentive suffix ‑er, developed from the English-derived noun man.

()
a. djari-man ‘gardener’ (< djari ‘garden’)
b. hondi-man ‘hunter’ (< hondi ‘hunt’)

One difficulty posed by Early Sranan stems from the fact that ‘man’ in () is also used as a free word (Braun : –). Diagnostics (of the type that are used for non-creole languages, see Booij a; Lieber and Štekauer b, ) will be needed to determine whether a given item is bound or free.

()
mi habi man   (Sr)
1sg have husband
‘I have a husband.’

Early Sranan also developed a diminutive marker from the Portuguese adjective pequeno ‘small’. As shown in Braun (: –), the prefix pikíen-/pikin- attaches to a nominal base and can mean either ‘young N’ (cf. (a)), ‘small N’ (cf. (b)), or ‘inexperienced’ (cf. (c)).

()
a. pikíen-hagoe ‘piglet’ (< hagoe ‘pig’)   (Sr)
b. pikin-spûn ‘teaspoon’ (< spûn ‘spoon’)   (Sr)
c. pikíen-datra ‘junior doctor’ (< datra ‘doctor’)   (Sr)

... Affixes of substrate origin

Creole affixes may also be borrowed from the substrate languages and combine productively with superstrate bases. In Chabacano (a Spanish-based creole), affixes of Hiligaynon origin attach productively to Spanish words. In (), the ordinal prefix ika- attaches to the Spanish numerals dos ‘two’ and sinko ‘five’, and, in (), the prefix ma- attaches to Spanish nouns to derive attributive nouns (Steinkrüger : ). (See also (), for Berbice Dutch affixes of substrate origin.)

()
a. ika-dos ‘second’ (< dos ‘two’)   (Ch)
b. ika-sinko ‘fifth’ (< sinko ‘five’)   (Ch)

() ma-pyédra ‘stony, full of stones’ (< pyédra ‘stone’)   (Ch)

... Semantic opacity

In analogy to non-creoles, affixes in creole languages do not necessarily express transparent meanings. This means that the whole meaning of the lexeme may not necessarily be





predictable from the meaning of its component parts. This may be illustrated with evidence from Early Sranan. In () and (), the forms with ‑man are phonologically identical but semantically unrelated. As Braun and Plag (: ) observe, the derived noun in () has two meanings: ‘thief’ and ‘trigger’. Whereas the former is semantically transparent, the latter illustrates an opaque relation between the affix and the base.

() fu(r)furman ‘thief/trigger’

() fu(r)fur ‘steal’

Another case of opacity is provided by the Haitian examples in () and (). Whereas the productive prefix de- expresses inversive/privative meaning, as shown in (), not all words showing this prefix convey this meaning. As Lefebvre (: –) observes, the word debaba ‘to mow the lawn/to weed’ in () lacks a free root: the verb *baba does not exist and the noun baba meaning ‘idiot’ is not related semantically to debaba. Another instance in which the prefix de- does not seem to carry its typical inversive/privative meaning is illustrated in (), where a verb with or without de- has the same privative interpretation.

()
a. deboutonnen ‘to unbutton’ (< boutonnen ‘to button’)   (H)
b. defòme ‘to deform’ (< fòme ‘to form’)   (H)

() debaba ‘to mow/to weed’ (< v. *baba)   (H)

() deplimen / plimen ‘to pluck’ (< n. plim ‘feather/body hair/fur’)   (H)

We will briefly return to semantic transparency in §.., where we consider the theoretical implications of opacity/transparency in word-formation.

.. Analysing derivation

The morphological analysis of creole derivation should be aimed primarily at capturing the morphological generalizations underlying both form and meaning. While much effort has been made to diagnose and characterize the meaning and distribution of derivational affixes and patterns, somewhat less attention has been paid to their morphological analysis. However, as this section aims to show, the questions raised by the derivational morphology of creoles are not dissimilar from those raised by non-creole languages. Among the common concerns about derivational patterns are internal word structure, the identity of the base, and the meaning of affixes (i.e. whether they are transparent, opaque, polysemous, semantically extended). In this section, we will attempt to illustrate how some of these aspects may be accounted for. In line with §.., we will illustrate our formal account of creole derivation within the word-based theory of Construction Morphology (Booij a; Masini and Audring, Chapter  this volume). Naturally, other theories might be adopted; however, for reasons of space we will restrict our illustration to this one only.




 . ´ ı

Assuming that affixes are not lexical items themselves, we formulate the general meta-schemas in (a) and (b), for suffixation and prefixation, respectively.

()
a. [[X]Xi Y]Yj ↔ [SEMi]j
b. [Y [X]Xi]Yj ↔ [SEMi]j

For the analysis of suffixation, we propose the partially instantiated schema in (). This schema accounts for the derivation of agentive deverbal nouns in Haitian through the suffixation of the agentive suffix ‑è. The derivation is also represented as a category-changing process, in which the suffix takes a verb as input and derives a noun.

() [[X]Vi è]Nj ↔ [person who SEMi]j

In analogy to the full reduplication schemas in §.., the double arrow in () associates the morphological forms (on the left-hand side) with the overall meaning of the derived lexeme (on the right-hand side). This mapping between form and meaning offers a principled representation of the systematic regularities that are common to the paradigm of agentive nouns in Haitian, shown in ().

() (Lefebvre : )
a. konsey-è ‘counsellor’ (from konseye ‘to counsel’)   (H)
b. rans-è ‘joker’ (from ranse ‘joke’)   (H)
c. vant-è ‘braggart’ (from vante ‘to brag’)   (H)

Turning now to prefixation, we propose the schema for the derivation of inversive verbs in Haitian, given in (). This schema captures the insight that the prefix de- derives a verb without changing the category of the input.

() [de- [X]Vi]Vj ↔ [inverse of SEMi]j
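Purely as an illustration, the category-changing suffixation schema and the category-preserving prefixation schema above can be mimicked as form–meaning functions. The final-e truncation and the English meaning paraphrases below are our own simplifying assumptions, not claims about Haitian morphophonology.

```python
# Illustrative sketch: two partially instantiated affixation schemas as
# form–meaning functions. All names and paraphrases are assumptions.

def agentive_e(verb: str, verb_meaning: str) -> tuple:
    """[[X]Vi e]Nj <-> [person who SEMi]j: category-changing V -> N."""
    stem = verb[:-1] if verb.endswith("e") else verb  # konseye -> konsey
    return (stem + "-è", f"person who {verb_meaning}s")

def inversive_de(verb: str, verb_meaning: str) -> tuple:
    """[de- [X]Vi]Vj <-> [inverse of SEMi]j: category-preserving V -> V."""
    return ("de" + verb, f"inverse of {verb_meaning}")

print(agentive_e("konseye", "counsel"))     # ('konsey-è', 'person who counsels')
print(inversive_de("boutonnen", "button"))  # ('deboutonnen', 'inverse of button')
```

The two functions differ exactly where the schemas do: the first changes the category of its input (encoded here only in the docstring), while the second leaves it unchanged.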

For derivational affixes with more than one meaning (such as the diminutive marker in Early Sranan, in ()), more than one schema may be needed to capture the fact that the same formal pattern can express different meanings. In (), then, each schema is associated with a distinct semantic output. Despite the difference in meaning, the three schemas also express common properties, such as the fact that the diminutive prefix does not change the category of the nominal base to which it attaches.

()
a. [pikin- [X]Ni]Nj ↔ [SEM young Xi]j   (cf. a)
b. [pikin- [X]Ni]Nj ↔ [SEM small Xi]j   (cf. b)
c. [pikin- [X]Ni]Nj ↔ [SEM inexperienced Xi]j   (cf. c)

In (), the schemas are analysed as semantically independent patterns. However, given the well-known polysemous nature of diminutives crosslinguistically (Heine, Claudi, and Hünnemeyer ), it might be relevant to explore an alternative analysis in which





these patterns are related to each other through semantic inheritance relations (see Booij a: ).10

Finally, creole affixes with more conventionalized meanings may be accounted for by mapping derivational form onto non-compositional semantics. For the Haitian nouns in (), the schemas capture the insight that the diminutive prefix ti- no longer contributes to the overall meaning of the derived word, historically derived from forms meaning ‘small-donkey’ (a) and ‘small-girl’ (b).

()
a. [ti- [bourik]Ni]Nj ↔ [rude person]j (bourik ‘donkey’)   (H)
b. [ti- [fi]Ni]Nj ↔ [virgin]j (fi ‘girl’)   (H)

Overall, then, our survey has shown that the form and meaning of creole derivation can be naturally accommodated within morphological theory, reinforcing the view that derivation in creoles and non-creoles is ‘synchronically indistinguishable’ (Braun and Plag ).

. Inflection

Unlike word-formation, inflection is much less abundant in creoles. Research on the underlying causes of such inflectional scarcity suggests that the loss of inflectional morphology may have been determined by untutored adult L2 acquisition and the fact that inflection develops in the adult L2 grammar at a later stage of the learning process (Klein and Perdue ; Plag ; Siegel ; Archibald and Libben, Chapter  this volume). Whether adults proceed to such later stages of their L2 grammars may be determined by a combination of other factors, including, for example, access to the target language and their motivation to become more proficient in this language (Siegel a).11

Effectively, much of the literature on creole inflection has been centred on the forces—linguistic and socio-cultural—that may have determined the reduced number of inflectional markers and the small size of inflectional paradigms (McWhorter , ). Less attention has been paid to the inflectional properties of the data itself and fewer inflectional accounts have been produced, largely as a result of the implicit assumption that creole inflection may be too simple (i.e. transparent and concatenative) to merit discussion. Over the last decade, however, much progress has been made to contradict this assumption, with evidence showing that creoles may also exhibit inflectional patterns of the kind typically found in inflecting languages, such as allomorphy, fusion, syncretism, and inflection classes, among others (see contributions in the following volumes: Bhatt and Plag ; Luís ).

10 For example, as noted by one of the editors, one polysemous schema (associated with ‘diminishing’ in general) could be instantiated by three sub-schemas, each with a more specific semantics related to ‘diminishing’.
11 In the development of Indo-Portuguese creoles, for example, socio-cultural factors seem to have played an important role in the survival of inflectional morphology. Conversion to Christianity may partly explain why Indian servants, who had been cut off from the Hindu community, developed a more elaborate L2 variety of Portuguese (Luís ).




 . ´ ı

In what follows, we address the origins of inflectional affixes and identify some of the paths along which they seem to have developed (§..). We then survey patterns of inflectional morphology by illustrating the different kinds of form–meaning relations that have been documented (§..). A formal account of creole inflection within Paradigm Function Morphology is sketched in §...

.. Origins

The form of inflectional affixes is mostly of superstrate origin, but their morphosyntax and distribution are quite often creole-specific. As we will see next, the causes triggering such changes may be varied. In some cases, the meaning of creole inflectional markers may have been affected by the transfer of the properties of substrate affixes onto superstrate forms. For example, the progressive suffix ‑ing in Hawai‘i Creole shares its phonological form with the English verbal inflection ‑ing (from which it derived). However, possibly as a result of the influence of Cantonese, it can only be used to express progressive meaning, as in (a). Sentences in which the inflected verb is used to describe future events, as in (b), are ungrammatical in the creole (Siegel : ).

()
a. He helping me now.   (Ha)
b. *He helping me tomorrow.

One other factor triggering the morphosyntactic differences between superstrate affixes and creole affixes may be the tendency for creoles to take over inherent verbal inflection from their contributing languages but discard contextual verbal inflection (Kihm : –; Luís ).12 Verb forms in Korlai Indo-Portuguese nicely illustrate the loss of contextual verbal inflection: while past verb forms have been taken over from Portuguese third singular past forms (cf. ()), the creole has only preserved the tense value, not the agreement value (cf. ()).13

() cant-ou [ow] ‘sing-past.3sg’   (P)

() kat-o [o] ‘sing-past’   (KP)

Another change may result from the fact that some superstrate affixes have simply lexicalized. Theme vowels in Portuguese-based creoles constitute an exemplary case of lexicalization: although all creoles have retained the theme vowel, only very few have actually retained conjugation class distinctions (Luís ). In Kriyol (spoken in Guinea-Bissau), verb-final ‘theme vowels’, as in (), have lost their inflectional status and are therefore merely part of the phonology of the root (Kihm : ). (See, however, Korlai Indo-Portuguese () below.) Whether a given stem–affix combination is effectively native to the creole can only be determined on a language-specific basis.

()
a. pista ‘lend’ (from Portuguese emprestar)   (K)
b. kume ‘eat’ (from Portuguese comer)   (K)
c. durmi ‘sleep’ (from Portuguese dormir)   (K)

12 The distinction between inherent inflection and contextual inflection was originally proposed by Booij (), who argues that contextual inflection (e.g. case) depends on syntactic context while inherent inflection (e.g. tense) does not. We use it here as a descriptive label. For a recent critical discussion of this distinction, see Spencer (: ).
13 The long vs. short distinction in Mauritian Creole (see () below) constitutes one notable exception.

Creoles may also derive their inflectional affixes from the substrate languages. Admittedly, however, very few cases are known. In Berbice Dutch, for example, the verbal suffixes ‑tε ‘perfective’ and ‑arε ‘imperfective’, on the one hand, and the nominal suffix ‑apu ‘plural’, on the other, have been borrowed from Eastern Ijo, the creole’s substrate language. As mentioned above for superstrate inflections, not all inflections of substrate origin preserve the meanings and functions of the corresponding substrate suffixes (Kouwenberg : ). In the case of Berbice Dutch, perfective and imperfective suffixes do preserve the meanings/functions of their etyma, but show differences in their interaction with other aspects of the grammar (negation, interaction with other TMA material). In contrast, the plural suffix ‑apu diverges considerably from the substrate form, having lost both its semantics and its syntactic status (Silvia Kouwenberg, p.c.).

One other possible source of creole inflection may be the morphologization of free words. An example of an inflectional affix that derived from a free superstrate morpheme can be found in Tok Pisin, where the transitive suffix ‑im results from the reanalysis of the English pronouns ‘him/them’ and is modelled on the parallel transitive suffixes found in the substrate languages (i.e. Tolai or Kuanua). In this case, then, the form of the transitive suffix is derived from the superstrate, whereas the meaning and function have a substratal origin (Siegel : ). The development of words to affixes can also take place after creolization through the morphological integration of clitics.
While the development of inflectional morphology from unstressed bound elements is not infrequent in language, determining exactly whether bound elements (or clitics) are behaving like full-fledged affixes is not always a straightforward task (Spencer and Luís a, b), which may explain why little attention has been given in the creole literature to clitic morphology. Nonetheless, there is evidence suggesting that syntactically and phonologically dependent elements also develop affixal properties. In Tok Pisin, for example, the preverbal habitual marker save, which derives from English savvy ‘know’ (via Portuguese), has been gradually developing from a free word into a bound marker. Evidence that it is behaving more like a verbal affix is the fact that it has been phonologically reduced to sa and is no longer stressed (Siegel : ). Also, in Jamaican, the progressive marker de has developed selectional requirements and can no longer be separated from the verb (Farquharson : ):14

14 TMA markers in many creoles, however, have not developed into affixes and therefore retain their status as syntactic words, some of which may be separated from the verb by adverbs, and bear lexical stress or tone (e.g. see Veenstra , on Saramaccan).

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi



 . ´ ı

() a. Di piipl-dem aalwiez de baal.                  (Jm)
      DEF people-PL always PROG bawl
      ‘The people are always crying.’

   b. *Di piipl-dem de aalwiez baal.                 (Jm)
      DEF people-PL PROG always bawl

Creole inflection may also extend paradigmatic structure by increasing the number of conjugation classes. Indo-Portuguese creoles (spoken in Korlai, Daman, and Diu) retained all three Portuguese conjugation classes (Clements ; Luís ; Cardoso ), but Korlai Indo-Portuguese additionally developed a fourth conjugation class for the integration of loan verbs of Marathi origin (Clements and Luís ):

()  Korlai Indo-Portuguese (Clements and Luís )

                  Class 1        Class 2        Class 3         Class 4
                  kata ‘sing’    bebe ‘drink’   irgi ‘get up’   loʈu ‘push’
    Base          kata           bebe           irgi            loʈu
    Past          kato           bebew          irgiw           loʈu
    Gerund        katan          beben          irgin           loʈun
    Completive    katad          bebid          irgid           loʈud

Another interesting creole-internal innovation, which developed in some French-based creoles (Louisiana Creole (Neumann ; Rottet ) and Mauritian Creole (Baker ; Veenstra ; Henri )), is the opposition between long verb forms and short verb forms. This opposition has given rise to two-cell paradigms which encode tense distinctions (in Louisiana Creole) or argument properties of the verb (in Mauritian Creole). While the form of the verbs is clearly of French origin, the long/short opposition is of substrate origin (Veenstra ):

()  Mauritian Creole (adapted from Henri )

                  ‘exist’    ‘come’    ‘break’
    long form     existe     vini      brize
    short form    exis       vin       briz

.. Form–meaning relation

Having briefly surveyed some of the ways in which creole inflection developed, we will now provide an overview of some of the inflectional patterns attested in creoles. Besides the more straightforward cases of simple exponence (i.e. in which a given set of morphosyntactic features is systematically expressed by one exponent), there


    



are numerous cases in which the expression of inflectional meaning is less predictable (involving fusion, syncretism, extended exponence, allomorphy, and inflectional classes, among others). The latter type, which is typically associated with inflecting languages, convincingly shows that the form–meaning correspondence in creole inflection is not necessarily biunique. We first turn to portmanteau forms, which realize two sets of morphosyntactic features within one single morph. One remarkable example of a portmanteau affix can be found in the Santiago variety of Kabuverdianu. In this creole, verb forms expressing ‘anterior’ take the suffix ‑ba while verb forms expressing ‘passive’ take the suffix ‑du; however, verb forms which express both ‘anterior’ and ‘passive’ must take the portmanteau suffix ‑da (Lang : –), as shown in ().

()  Bolus kume-da                                   (Kv)
    cake eat-PASS.ANT
    ‘The cakes had been eaten.’

Another example of the non-correspondence between form and meaning is provided by multiple exponence in Korlai Indo-Portuguese (Luís ). The past forms given in (), repeated below for convenience, show that the first conjugation class expresses the feature ‘past’ by selecting both a theme-less stem and the past suffix ‑o, whereas the 2nd and 3rd conjugations only select the tense suffix ‑w, which is attached to the unmarked vowel-final stem. Crucially, the theme-less stem and the suffix ‑o are exclusive to first conjugation past tense verb forms.

() a. Class 1: kat-o ‘sang’                          (KP)
   b. Class 2: bebe-w ‘drank’
   c. Class 3: irgi-w ‘got up’

The past tense forms in () illustrate another phenomenon which is typical of inflectional paradigms, namely the presence of stem allomorphy and affix allomorphy: in (), there is stem allomorphy because   lexemes exhibit two stems (kata- and kat-); and there is also suffix allomorphy because the past paradigm has two past tense markers (‑o in   and ‑w in   and  ). Assuming that this kind of allomorphy is an example of morphomic organization and hence a good indicator of genuinely morphological structure (Aronoff ; contributions in Luís and Bermúdez-Otero ), the fact that it is present in creole languages clearly shows that the survival of inflection may not have been triggered solely by communicative goals (Plag ) and that morphomic structure does arise in creoles (Luís ). In the case of Indo-Portuguese creoles, these show a strong preference for maximizing morphomic contrasts through conjugation class distinctions (cf. ()). As illustrated by Bonami, Henri, and Luís (), morphomic structure also increases the opacity of inflectional paradigms in creole language. In the case of Korlai Indo-Portuguese, there are two reasons why its verbal inflection is not more transparent than that of noncreoles. One the one hand, there is the presence of syncretism in the u-conjugation (between the unmarked and the past) and, on the other, there is the opacity between the st and nd conjugation completive forms, which take the same stem, even though they belong to different conjugation classes.




 . ´ ı

In sum, then, while the paradigms of the creole languages examined in this section are much smaller than those of their superstrate languages, they exhibit properties which can also be found in non-creoles. Such properties may even increase the opacity of the verbal paradigms. Research into creole inflection must therefore look beyond questions of sheer size or number and focus instead on the paths that these languages, despite their small paradigms, have developed for the expression of morphosyntactic features.

.. Analysing creole inflection

In this last section, we illustrate how some of the inflectional phenomena surveyed earlier in §. can be straightforwardly accounted for within morphological theory. To illustrate our claim, we adopt the theory of Paradigm Function Morphology (henceforth PFM) (Stump , see also Stump, Chapter  this volume), one of the most fully formalized theories of inflectional morphology and one particularly well suited to capturing form–meaning deviations and paradigmatic relations (Spencer ; Stump a). Within PFM, word structure is derived through a set of operations known as Realization Rules (henceforth RRs). RRs, which realize inflectional exponents, take as their input pre-existing features that are associated with paradigm cells. For the verbal paradigm of Korlai Indo-Portuguese (Luís ), we formulate the Realization Rules in ().15

()

Realization Rules
Rule Block I
a. RR I, {Tense: past}, V[Class 1] (⟨X, σ⟩) =def ⟨Xo, σ⟩
b. RR I, {Tense: past}, V[Class 2,3] (⟨X, σ⟩) =def ⟨Xw, σ⟩
c. RR I, {Tense: progressive}, V (⟨X, σ⟩) =def ⟨Xn, σ⟩
d. RR I, {Tense: completive}, V (⟨X, σ⟩) =def ⟨Xd, σ⟩

To capture the fact that all tense endings compete for the same position, the RRs in () all belong to the same rule block, namely block I. This means that all four tense features are mutually exclusive. The RRs in (a) and (b) realize the feature ‘past’, while (c) realizes the ‘progressive’ and (d) the ‘completive’ features. Because both (a) and (b) are sensitive to conjugation class, the RRs state that (a) applies to first conjugation verbs whereas (b) applies to second and third conjugation verbs. Given that there is no overt inflectional affix realizing the base form, we assume that this unmarked verb form is derived through the application of a more general realization rule, known as Identity Function Default (Stump ), which takes as input the stem and delivers the same phonological form: f(X)=X. The key idea is that not being overtly marked can be just as significant as being marked with some sort of affix. This is a universal default, which applies within any rule block in case no explicit rule has applied.
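The workings of rule block I, including the Identity Function Default, can be sketched in Python. This is an illustrative encoding, not part of the chapter's formalism: the function name and dictionary-based feature sets are invented, and the exponents (-o, -w, -n, -d) are read off the Korlai Indo-Portuguese paradigm above.

```python
def rule_block_I(stem, features, conj_class):
    """Block I (sketch): realize the tense exponent on a stem.

    All rules belong to one block, so at most one applies; if none does,
    the Identity Function Default returns the stem unchanged: f(X) = X.
    """
    tense = features.get("tense")
    if tense == "past" and conj_class == 1:
        return stem + "o"              # RR (a): Class 1 past, e.g. kat- -> kato
    if tense == "past" and conj_class in (2, 3):
        return stem + "w"              # RR (b): Classes 2/3 past, e.g. irgi- -> irgiw
    if tense == "progressive":
        return stem + "n"              # RR (c): progressive, e.g. kata- -> katan
    if tense == "completive":
        return stem + "d"              # RR (d): completive, e.g. kata- -> katad
    return stem                        # Identity Function Default: unmarked base form

print(rule_block_I("kat", {"tense": "past"}, 1))   # -> kato
print(rule_block_I("kata", {}, 1))                 # -> kata (default applies)
```

Because the default is a rule of last resort within the block, zero marking falls out of the same mechanism as overt affixation, which is the point made above.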

15 We are here using the notation in Stump (: ): RR n,τ,C (⟨X, σ⟩) =def ⟨Y, σ⟩. Each RR carries three indices: the index n, which identifies the particular block to which the rule belongs; the index τ, which indicates the set of features that the rule realizes; and the index C, which identifies the class of lexemes or subclass of lexemes to which the rule applies.


    



Table .. Stem allomorphy in Korlai Indo-Portuguese kata ‘sing’  

bebe ‘drink’  

irgi ‘get up’  

loʈu ‘push’  

Default stem

kata(for base, completive, and gerund forms)

bebe(for base, past gerund forms)

irgi-

loʈu-

Secondary stem

kat(for past forms)

bebi(for completive forms)





Turning now to (morphomic) stem-formation, Korlai Indo-Portuguese verb lexemes also vary with respect to both the shape of the stems they take and the number of stems they select. As illustrated in Table ., Class 3 and Class 4 select the i-stem and the u-stem, respectively, for all verb forms: for example irgi, irgiw, irgid, irgin. By contrast, Class 1 and Class 2 select the default stem, but each takes a secondary stem for a different subset of forms: Class 1 verbs take the secondary stem for the past and the default stem for all other forms; Class 2 verbs take the secondary i-stem for the completive form. Overall, there are five stems: the a-stem, the e-stem, the i-stem, the u-stem, and the unmarked stem. Based on Table ., stem selection rules (SSRs) as in () may be formulated. ()

Stem selection rules for Rule Block 0
Where X is the root of some lexeme and where Y is one of the lexeme’s stems,
a. RR 0, {Tense: past}, V[Class 1] (⟨X, σ⟩) =def ⟨Y, σ⟩, where Y is X.
b. RR 0, V[Class 1] (⟨X, σ⟩) =def ⟨Y, σ⟩, where Y is Xa.
c. RR 0, V[Class 2] (⟨X, σ⟩) =def ⟨Y, σ⟩, where Y is Xe.
d. RR 0, V[Class 3] ∨ V[Class 2], {Tense: completive} (⟨X, σ⟩) =def ⟨Y, σ⟩, where Y is Xi.
e. RR 0, V[Class 4] (⟨X, σ⟩) =def ⟨Y, σ⟩, where Y is Xu.

The SSRs in () directly associate a particular stem with a particular set of morphosyntactic features independently of any morphological operation: the stem in (a) is identical to the root and therefore theme-less; (b) selects the default stem for Class  verbs, except for verb forms with the feature tense ‘base’ which select the stem defined in (a). The SSR in (c) applies to all Class  verb forms, except the completive forms: these select the stem in (d), which is also the default stem for all Class  verb forms. The rule in (e) derives the default stems for Class  verb forms. With RRs and SSRs in place, we propose the Paradigm Function in (). The role of the PF is to derive each inflected form of a lexeme as a cell in that lexeme’s paradigm.16 A concrete example of how the PF operates is given in ().

16 Following Stump (), () says that the PF takes the pairing ⟨X, σ⟩ (where X is the root of a lexeme and σ is the complete set of morphosyntactic properties realized by the verb form) and yields the corresponding cell in the lexeme’s paradigm.




 . ´ ı

()

PF for the verbal paradigm of Korlai Indo-Portuguese PF () = def RRI (SSR0 (X, σ>)) = def

()

PF for irgid ‘got up’ PF (), where σ ={Tense: completive} a = def RRI (SSR0 ()) b = def RRId (SSR0d ()) c = def

The first line specifies the morphosyntactic feature set of the inflected word form. The second line identifies irg- as the root of the lexeme IRGI. For the realization of irgid, the feature set σ triggers the application of SSR0d from block 0 and RRId from block I. Based on the rule block index specified for each set of rules, we assume that RRId takes as input a verbal stem and delivers an inflected form expressing the completive tense. Turning briefly to portmanteau affixes, as illustrated by the Kabuverdianu passive-anterior suffix ‑da (cf. ()), the mismatch between form and meaning might be captured by formulating an RR that maps ‘passive’ and ‘anterior’ onto one exponent, as in (a). The assumption would be that the RR in (a) preempts the separate realization of ‘passive’ and ‘anterior’, applying Pāṇini’s principle (Stump ). ()

Realization Rules for Kabuverdianu (Santiago variety)
a. RR I, {passive, anterior}, V (⟨X, σ⟩) = ⟨Xda, σ⟩
b. RR I, {passive}, V (⟨X, σ⟩) = ⟨Xdu, σ⟩
c. RR I, {anterior}, V (⟨X, σ⟩) = ⟨Xba, σ⟩
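Pāṇini's principle, whereby the most specific applicable rule preempts more general ones, can be sketched as rule selection by feature-set size. The rule table and function below are hypothetical illustrations, not the chapter's formalism:

```python
# Kabuverdianu (Santiago) block I rules as (required features, suffix) pairs.
RULES = [
    ({"passive", "anterior"}, "da"),   # portmanteau rule (a)
    ({"passive"}, "du"),               # rule (b)
    ({"anterior"}, "ba"),              # rule (c)
]

def realize(stem, features):
    """Apply the most specific rule whose required features are all present
    (Panini's principle); with no applicable rule, the identity default."""
    applicable = [(req, suffix) for req, suffix in RULES if req <= features]
    if not applicable:
        return stem                    # identity default
    req, suffix = max(applicable, key=lambda pair: len(pair[0]))
    return stem + suffix

print(realize("kume", {"passive", "anterior"}))   # -> kumeda, not *kumeduba
print(realize("kume", {"anterior"}))              # -> kumeba
```

Because the portmanteau rule requires the larger feature set, it wins whenever both features are present, which is exactly the preemption effect described above.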

. Conclusion

..................................................................................................................................

The developments that have taken place in creole morphology during the last two decades have crucially altered many of the earlier assumptions. They have shown not only that creole morphology does exist but also that creoles can be subject to formal morphological analysis. Creole studies and morphological theory have been brought closer together, and an increasingly important role is now attributed to morphological theory in accounting for various fundamental properties of creole morphology and in showing that there are numerous morphological commonalities between creoles and non-creoles which provide a common ground for morphological analysis. Despite the sociocultural contexts that define creole genesis, the morphological principles at work in creole word-formation and inflection are, by and large, identical to those that motivate current linguistic research in morphology in general, especially once the creole has been formed. For reasons of space, we restricted our discussion to only three morphological phenomena (i.e. full reduplication, derivation, and inflection) and to just two morphological theories (i.e. Construction Morphology and Paradigm Function Morphology), leaving many other morphological phenomena and their theoretical status unaddressed. We hope that future research will explore more formal accounts, adopting these or other formal approaches, of these and other empirical phenomena. The extent to which creole morphology may contribute


    



to morphological theory can only be fully assessed if morphological theory and creole languages go hand in hand. Progress on theoretical investigations, however, is highly dependent on the availability of linguistic corpora and wider empirical coverage of creole phenomena. Ultimately, a joint multidisciplinary effort from creolists and morphologists may be needed to ensure that more empirical data is collected, described, annotated, and made widely available.

. F R Due to space limitations we left aside important contributions and theories that are also relevant for the topic of this chapter. For a more complete survey of creole morphology and the issues raised by the data see, among others, the collection of papers in Booij and van Marle () and Luís (), entirely dedicated to creole morphology. An overview of creole morphology is also given in the chapter by Crowley (). For more detailed studies on specific word-formation processes not addressed in this chapter (e.g. compounding and conversion), we recommend language-specific studies such as Dijkhoff () on Papiamentu, Braun () on Early Sranan, and Steinkrüger () on Chabacano.

A I would like to express my sincere thanks to Jenny Audring and Francesca Masini for their support and patience throughout the preparatory stages that led to the submission of this chapter. Many thanks also to Clancy Clements, Grev Corbett, Silvia Kouwenberg, Jeff Siegel, Armin Schwegler, Andrew Spencer, and two anonymous reviewers for helpful comments on previous versions of this chapter.

......................................................................................................................

Morphological theory and diachronic change
......................................................................................................................

Matthias Hüning
. Introduction

..................................................................................................................................

Morphological theories are usually set up as (part of) theoretical frameworks that intend to provide a synchronic model of grammar. Therefore, they develop a complex metalanguage for the description of linguistic structure that is intended to work, in principle, for all languages. Recently, Haspelmath (: ) challenged the framework conception of linguistic description, because frameworks “set up expectations about what phenomena languages should, can, and cannot have, and once a framework has been adopted, it is hard to free oneself from the perspective and the constraints imposed by it”. He rejects the widespread idea that ‘grammatical theory’ has to be identified with ‘theoretical framework’. If we want to approach languages without prejudice, we should, according to Haspelmath, avoid frameworks, because any framework can be seen as a prejudice, likely to channel the analysis in a certain direction, thereby neglecting other aspects. One of these aspects, often neglected or explicitly excluded from the empirical domain of such theoretical frameworks, is the basic insight that human language is always subject to and the result of change. Variation and change are essential ingredients of any human language, but at the same time they form a challenge for theoretical models of grammar. This is not the place to further elaborate on Haspelmath’s plea for a ‘framework-free grammatical theory’, but I would like to point to one more aspect. One of the requirements commonly formulated for grammatical theories is that of ‘descriptive adequacy’: the description should reflect the speakers’ internal generalizations as closely as possible.
However, as Haspelmath (: ) points out, it is far from clear whether this “is an attainable goal because often different generalizations are compatible with the facts, and we have no way of knowing which generalization is adopted by the speakers (note that it could be that different speakers have different generalizations)”. From the viewpoint of diachronic linguistics, the possibility of different generalizations is important because it might serve as a starting point for (possible explanations of) language change (cf. §.).


    



Jackendoff, too, formulates some criteria of adequacy, which each model of grammar should conform to. Central is the criterion of ‘graceful integration’: the model “should seek a graceful integration of linguistic phenomena with what is known about other human cognitive capacities” (Jackendoff b: ). I would like to extend this criterion since, in my view, any theoretical account of language should not only gracefully integrate with neighboring domains; it should also allow for the incorporation of and be in harmony with the findings in related sub-domains such as historical linguistics and sociolinguistics.

In this chapter, I will discuss some notions and phenomena encountered in morphological change that might or should be relevant to morphological theory. Of course, not every theory has to explain everything, but—in line with Jackendoff ’s criterion—a good theory should at least be compatible with the facts found in historical linguistics and not ignore them or explain them away as unrelated to the grammar and belonging to (theoretically irrelevant) performance phenomena. I will not try to provide a systematic overview of different types of morphological change, since this has been done by other scholars and in other handbooks (e.g. Koefoed and Van Marle ; Anderson ). In particular, I will confine myself to lexical morphology, that is, word-formation. Changes in inflectional morphology have always been central to the study of historical linguistics. There is a considerable amount of literature on inflectional change; it is a topic treated in almost every introduction and every handbook on historical linguistics. For an overview and for references, compare the recent contributions by Claire Bowern and Maarten Kossmann in the Oxford Handbook of Inflection (Baerman ). Word-formation phenomena, on the other hand, have received much less attention.
Though related, they are of a different nature and they pose different questions, especially with respect to the notion of productivity, which, in my view, is essential for an adequate understanding of historical word-formation. In the remainder of this chapter, I will focus on some selected phenomena and concepts that, in my view, are central to the study of changes in word-formation and that, therefore, should be reflected on by morphological theory. Ralli (Chapter  this volume) discusses synchronic variation in morphology; therefore, I will not explicitly broach the issue of variation here. It is, however, a well-known fact that variation and change in language are two sides of the same coin. While variation does not necessarily entail language change, without (synchronic) variation there is no (diachronic) change. Therefore, variation is central to this chapter, too, and any grammatical theory that wants to do justice to the dynamic nature of language should be sensitive to variation. This includes regional variation as well as variation determined by pragmatic, stylistic, and sociolinguistic factors. I will start with some general considerations about change in word-formation (§.). Then I will move on to the notion of ‘reanalysis’ (§.), which is a central concept in historical linguistics. Since many changes in word-formation are due to a process of reanalysis, this notion should be pertinent to morphological theory, too. After that, I will discuss word-formation change as change in productivity and the consequences of the historical view for conceptions of productivity in morphological theory (§.). I will also present considerations about factors relevant for the explanation of morphological change. The closing section (§.) will contain some remarks about the possibilities of a theory of word-formation change. The examples used for illustration purposes in this chapter are taken from Dutch, German, and English.




 ̈ 

. W- 

..................................................................................................................................

What counts as morphological change? Generally speaking, every development that changes the structural possibilities of creating or modifying lexical items might be seen as change. Munske () provides an overview of the main types of change in word-formation. He distinguishes changes with respect to the formal and functional (semantic, pragmatic) properties of an existing pattern from those diachronic developments that lead to changes in the inventory of word-formation patterns, that is, the loss of patterns or the rise of new patterns. The loss of a pattern can be illustrated by the verbal prefix te- in Dutch (the counterpart of German zer-), which was used productively in Middle Dutch (tebreken ‘to break sth.’, teslaan ‘to smash sth.’, testoren ‘to destroy sth.’), but lost its productivity afterwards. In this case even the words that were formed with this prefix disappeared, so that no traces of this once productive word-formation process are left in present-day Dutch. We find a similar development in English. Old English, too, had this prefix (tobrecan ‘to break sth.’, tobeatan ‘to smash sth.’) but the pattern fell out of use during the Middle English period and the remaining words disappeared during the seventeenth century (cf. Koziol : ; see Hüning  for a tentative explanation). New patterns emerge through a variety of different processes. Munske () mentions three different types:

(i)

The grammaticalization of compound constituents into affixes. Well-known textbook examples are the development of the adjectival suffix ‑lich from the noun lîh (which originally meant ‘body’) in German (compare English ‑ly or Dutch ‑lijk) or the rise of the English suffix ‑hood, according to the OED originally a distinct noun, meaning ‘person, personality, sex, condition, quality, rank’ (compare German ‑heit or Dutch ‑heid) (Trips ). (ii) Borrowing, i.e. the ‘reactivation’ or emergence of patterns on the basis of loanwords. This happens for example when affixes are extracted from loanwords. This may eventually lead to the emergence of new word-formation patterns, as in the case of the English nominal suffix ‑ment (from French) or prefixes like ex- (from Latin) (Hoppe ). (iii) Univerbation and incorporation. Compounding through incorporation is a common process in Germanic languages. As an example, we can refer to separable complex verbs like ademhalen (‘to breathe’, lit. ‘breath-take’) or koffiezetten (‘to make coffee’, lit. ‘coffee-make’) in Dutch, and ehebrechen (‘to commit adultery’, lit. ‘marriage-break’) or notlanden (‘to make an emergency landing’, lit. ‘emergency-land’) in German. Their status (are they words? are they phrases?) is, however, controversial. Booij (a: ch. ) provides a detailed analysis under the heading of ‘quasi-noun incorporation’.

One of the best-known examples of a syntactic structure turning into a morphological pattern is probably the development of nominal compounds with a genitive noun as first


    



element. Dutch het herenhuis ‘the manor house’ (< des heren huis ‘the gentleman-GEN house’) or German der Meeresgrund ‘the sea floor’, die Tageszeit ‘the time of day’ (< des Meeres Grund ‘the sea-GEN floor’, des Tages Zeit ‘the day-GEN time’) are examples of compounds in which the first element kept its original case marker. The pattern became productive and replaced many NN compounds: while Luther still had ratherr and blutfreund in the sixteenth century, these words were replaced by Ratsherr ‘councilman’ and Blutsfreund ‘blood brother’ afterwards (Henzen : ). The genitive suffix was reanalyzed as a mere linking element and has been extended to compounds where it is no longer analyzable as a genitive suffix. Feminine nouns like Liebe ‘love’ or those ending in ‑ung, for example, do not have morphological marking of the genitive. However, when they are used as the first element in a nominal compound, the ‑s is added as a linking element: Liebesbeweis ‘sign of love’, Nahrungsergänzungsmittel ‘dietary supplement’.

As pointed out by Anderson (), many kinds of historical change have effects on the morphology of a language, be it inflectional morphology or word-formation. The few examples mentioned above already illustrate that, for a proper understanding of word-formation change, we will have to take into account factors related to other domains of linguistic structure (like syntax) as well as extra-linguistic factors (such as language contact and borrowing).

A more restrictive view can be found in Scherer (). Since changes in the inventory of word-formation processes typically involve factors from other domains, Scherer excludes them from her concept of word-formation change. She wants to restrict this notion to changes that are specific to word-formation processes. Therefore, she focuses on existing patterns and on changes in the structural characteristics of such patterns.
For her, “the only kind of change that is specific to the domain of word-formation is the change of word-formation constraints. This change of constraints is reflected in changes of morphological productivity” (Scherer : ). The constraints concern the (formal and semantic) features of the input and the output of a given word-formation process. This modular view aims to exclude from the study of word-formation change the ‘interface phenomena’, that is, all changes that can be traced back to the interference with other components of grammar. While it might be a good heuristic means for the identification of relevant factors in a certain change, it remains subject to debate whether such a modular view is a sensible and tenable position with respect to word-formation change in general.

. Reanalysis

..................................................................................................................................

The reanalysis of linguistic structures by the language users is one of the most important processes in language change. In this section I want to discuss some examples of reanalysis, “the commonly used term for a change by which a complex word comes to be regarded as matching a different word-schema from the one it was originally created by” (Haspelmath : ). Subsequently, the new word-schema can gain productivity when its properties are extended to new material. This might eventually lead to the emergence of a new pattern and/or a new affix.




 ̈ 

.. Affix-telescoping

A well-known type of reanalysis is the combination of two formerly independent affixes into a new compound affix, also known as ‘affix-telescoping’ (Haspelmath : ). An example is the derivation of Dutch nouns in ‑igheid. This sequence of the two suffixes ‑ig and ‑heid “has become a suffix of its own, with a specific meaning that is not computable on the basis of the meanings of ‑ig and ‑heid” (Booij c: ). Booij wants to see nouns like gekkigheid ‘foolishness’ and narigheid ‘misery’ as derived directly from gek ‘foolish’ and naar ‘unpleasant’. The corresponding adjectives in ‑ig (gekkig, narig) are described as nonexistent, but possible words of Dutch. However, Booij does not want to see them as intermediate steps in the derivation of the nouns, because of the direct semantic relationship between the bare adjectives and the nouns in ‑igheid. The meaning of [A+ig]A ‘somewhat A’ is, after all, not part of the meaning of the nouns.

In Hüning () I have presented a detailed analysis of the history of Dutch ‑erij. Many of the nouns with this suffix have a direct equivalent in ‑ery in English (bakkerij ‘baker’s shop, bakery’, vleierij ‘flattery’) and the suffix is available in German, too (‑erei) (cf. Eisenberg ). The suffix can be seen as another case of ‘affix-telescoping’ and I have discussed the emergence of Dutch ‑erij and of its allomorphic variants (‑arij, ‑derij) as a result of language-internal and -external factors (like borrowing from French). Many deverbal nouns in ‑erij could and still can be analyzed in different ways: visserij ‘fishery’ can be seen as a denominal derivation (N+ij) from the person noun visser ‘fisherman’, which itself is derived from the verb vissen (‘to fish’), and/or as directly derived from the verbal stem (V+erij). The oldest attestations of the suffix are clearly denominal (with ‑ij), but later on there can be no doubt that ‑er+‑ij has been reanalyzed as one suffix ‑erij.
It occurs in deverbal words like smederij (from smeden ‘to forge’), where it cannot be denominal because of the absence of the person noun *smeder, which is blocked by the existence of smid ‘(black)smith’. The use of the suffix was extended to action nouns like knoeierij (from knoeien ‘to make a mess, to tinker’) or warhoofderij (from warhoofd ‘scatterbrain’), both denoting a certain behavior indicated by the base word (with negative connotations). While the derivative knoeierij could also be related to the person noun knoeier ‘messy, sloppy person’ (which means that we could still assume a suffix ‑ij), we have only one possible analysis in the case of warhoofderij: it has to be derived from the noun warhoofd by means of the suffix ‑erij. Alternative interpretations are not possible because of the absence of a verbal counterpart and of a corresponding person noun in ‑er. The mechanism behind this type of reanalysis is usually described as the interpretation of second-degree derivatives as derivatives of the first degree, and we could try to formalize this kind of reanalysis as follows: ()

[[V+er]N + ij]N → [V + erij]N / [N + erij]N

Such a formalization is, however, an oversimplification. As I have shown in Hüning (), it does not do justice to the complexity of the process and overlooks many factors relevant to the processes of change that are involved. In particular, it abstracts away from the manifold paradigmatic relationships between simplex and complex words that allowed (and still allow) for different interpretations and generalizations.

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi
The formalization in () suggests the replacement of the affix combination by the new affix ‑erij. But very often it is not a question of either/or. Both interpretations remain possible for many of the complex words in ‑erij, even in present-day Dutch. At the same time, both affixes, ‑ij and ‑erij, are used for the formation of new words. A denominal word like scheidsrechterij ‘the business of a scheidsrechter (‘referee’)’ is formed with the suffix ‑ij, while denominal smeerlapperij ‘the behavior of a smeerlap (‘skunk, son-of-a-bitch’)’ is definitely formed with ‑erij. More important than the form of the affix is the form of the resulting noun. It has to end in ‑erij (*smeerlappij is not possible). The properties of the output, the resulting noun, also prevail over the characteristics of the input. The base words can be verbs or nouns, they can be simplex or complex, and even multiword units are frequently used. To give only one example: the Dutch proverb de kat uit de boom kijken (lit. ‘look the cat out of the tree’, meaning ‘wait to see which way the cat jumps’) is the base of the noun de-kat-uit-de-boom-kijkerij, which denotes the behavior described by the proverb. The resulting complex noun, thus, joins the large category of action nouns in ‑erij denoting a (human) behavior, which is one of the patterns ‑erij is used for. Again, the possibilities are determined by the result, that is, the categorial, semantic, and formal characteristics of the output.

Resegmentation

Koefoed and Van Marle (: ) mention another type of reanalysis, which they characterize as a process of ‘resegmentation’ of existing complex words. As an example they discuss the rise of the suffix ‑enaar in Dutch. Dutch has person nouns in ‑aar from base words ending in ‑en: leugen-aar ‘liar’, molen-aar ‘miller’, and Leiden-aar ‘inhabitant of Leiden’. The ending ‑en is not a suffix in these words, but part of the stem. The resulting person nouns, however, share the phonological property that they end in the string ‑enaar. This similarity has, apparently, enabled the formation of new words in which ‑enaar has to be analyzed as an affix, like bult-enaar ‘hunchback’ and Delft-enaar ‘inhabitant of Delft’, based on bult ‘hunch’ and Delft, respectively. This means that a phonological property of the person nouns has been assigned a morphological status.

Koefoed and Van Marle stress that the resegmentation of ‑enaar has not been governed by the syllable structure of the original person nouns. The syllable boundary never lies before the schwa, but inside the relevant string (Lei.de.naar, Delf.te.naar). They discuss another example, the rise of the suffix ‑ling in Dutch, where the syllable structure has probably influenced the resegmentation, but “Dutch ‑enaar makes clear that syllable structure is no necessary factor in the phonologically governed resegmentation of complex words” (Koefoed and Van Marle : ). Phonological similarity alone is sufficient for this process. Similar processes of resegmentation can be found in the emergence of German ‑ler or ‑ner next to the agentive suffix ‑er (cf. Fleischer and Barz ).

Grammaticalization and affixoids

The third process I want to mention here is the development of a lexical item, a free (unbound) morpheme, into a bound morpheme, an affix. Many derivational affixes have developed out of lexical items through their use in compounds. Textbook examples such as


the emergence of English ‑ly and ‑hood (German ‑lich, ‑heit; Dutch ‑lijk, ‑heid) have already been mentioned in §.. The reanalysis of a compound element as an affix has been described as a case of ‘grammaticalization’ in the literature (Munske ; Hopper and Traugott ). It is characterized as a development from a lexical to a grammatical item, with some intermediate steps. The ‘grammaticalization cline’ could be sketched as follows: ()

lexical word → affixoid → derivational affix → inflectional affix

A lexical word can develop a more general and more abstract meaning (‘semantic bleaching’), which is often bound to its use in compounds. It may be used as an ‘affixoid’, which means that the compound element corresponds to the lexical word with respect to its form, but not (or only in part) with respect to its meaning. This notion of affixoid is central to many studies of grammaticalization in word-formation. It indicates the intermediate stage in the ‘grammaticalization cline’, that is, the development from lexical item to affix (Stevens ). Many intensifying elements in adjectival compounds, for example, have been classified as affixoids, like Dutch kei (a noun meaning ‘boulder’) in keigoed ‘very good’, keihard ‘very hard’, etc., or German Stock (a noun meaning ‘stick’) in stockbesoffen ‘very drunk’, stockkonservativ ‘very conservative’, etc.

As an illustration, we can also refer to the use of Dutch hoofd and German Haupt (both originally nouns meaning ‘head’) in compounds. Dutch hoofdpijn ‘headache’ still shows the compound structure with the original noun as its first element: hoofd refers to the head of a body. A more abstract, metaphorical meaning (‘uppermost’) can be found in compounds referring to a hierarchy, like hoofdkantoor ‘head office’ or hoofddirectie lit. ‘head directorate’. Even more abstract is the ‘most important’ meaning: hoofdbezwaar ‘main objection’ or hoofdingang ‘main entrance’. The abstract meanings are only found in compounds (a phrase like *de ingang is (een) hoofd ‘the entrance is (a) main’ is not possible) and it is with these bound meanings that hoofd might be called an affixoid (Booij a: ).

When we put this in a historical dimension (a grammaticalization cline), German Haupt is even one step ahead in the development. While in Dutch hoofd is the common word for ‘head of a body’, this meaning is almost lost for German Haupt, the common word for a head being Kopf.
The use of Haupt is largely restricted to the word-formation pattern with the abstract meaning ‘main X’ in present-day German: Hauptargument ‘main argument’, Hauptattraktion ‘main attraction’, Hauptbahnhof ‘main station’, Haupteingang ‘main entrance’. Because of the obsolescence of Haupt as a common noun, the link between the present-day use and the original noun is gradually being lost. Therefore, some scholars prefer to label Haupt- as a prefix synchronically (rather than an affixoid).

While ‘affixoid’ seems to be a handy notion to indicate the in-between state of compound elements, it remains highly controversial in the relevant literature. Some scholars (e.g. Schmidt ) suggest avoiding it altogether, while others prefer to establish affixoids as a special morphological category. Elsen (), for example, even argues in favor of a new (synchronic) word-formation process “affixoid formation”, which should be distinguished from compounding and derivation (see Booij and Hüning  for a discussion).

Another controversial issue is whether the emergence of derivational affixes should really be considered a case of grammaticalization. Unlike inflectional affixes they are




used for the formation of new lexical items, and therefore, according to Lehmann (: ), the processes under discussion should rather be classified as lexicalization. A more general problem is connected with the ‘element-based view’ of grammaticalization (Himmelmann ). After all, it is not the isolated lexical item that becomes an affix; the relevant units are the complex words in which the item gets new interpretations and new functions. This means that we might be better off with a constructional approach that “suggests not a cline, but a taxonomic network of related constructions” (Trousdale : ) to account for these changes. Hence, Hüning and Booij () suggest analyzing the emergence of derivational affixes out of compound elements as cases of ‘constructionalization’.

Implications for morphological theory

What do such developments and such changes tell us about the nature of word-formation? What are the consequences of such examples of reanalysis for morphological theory? First of all, they can serve as illustrations of the basic insight that language structure is dynamic rather than static. Through the ages, expressive needs can lead to new interpretations and to new morphological possibilities. New words are formed by applying the patterns found in existing words to new material, but the patterns themselves are also subject to change. They are extended to new (classes of) base words and they are used to realize new meanings, slightly different from the ones found in the existing words. The reanalysis of existing forms may eventually lead to the emergence of new word-formation categories. It may lead to new generalizations which subsequently can be used for the formation of new words.

But reanalysis is not an abrupt change. We do not immediately, if at all, get a clear-cut new rule. Variation, ambiguity, and vagueness are very common and allow for different generalizations on the part of the language users. Reanalysis, like most changes, is a gradual process, in which analogy plays an important role. Jackendoff and Audring (Chapter  this volume) are probably right in assuming that a learner acquires a pattern or a schema by “observing some collection of words with similar phonological and semantic structure, and formulating a hypothesis about the general pattern they instantiate” (§..). At first, such a schema is tentative: a hypothesis that can be confirmed and strengthened by further evidence. Rule discovery or schema building (and change), however, is a process active not only in language acquisition, but also in the language use of adult speakers.
It is “clearly wrong to associate all processes of rule discovery or rule creation exclusively with language acquisition by young children” (Koefoed and Van Marle : ). Formal theories tend to account for morphological structure by means of lexical entries for words and affixes and by deterministic symbolic rules. The developments we observe in historical morphology, however, show that those categories are themselves gradual and subject to change. The examples of reanalysis given earlier in §. may serve as an illustration. There is a lot of evidence, especially from psycholinguistics, that even synchronically, “morphological structure is indeed intrinsically graded” (see Hay and Baayen  for an overview and for references). Even categorical distinctions like the one between words and affixes or between simple words and complex words seem to be anything but categorical for language users. Experimental research supports the findings from historical


morphology: for a proper understanding of the dynamic nature of morphological structure and morphological change, we need to allow for probability in grammar and for probabilistic structures in morphology. The more general question is whether language users really apply abstract rules or schemas at all. It might be the case, as Bauer (: ) speculated some thirty years ago, that “language users work by analogy whereas linguists interpret such behavior in terms of rules, so that a linguist’s description is inevitably a fiction”. The question of whether word-formation rules or schemas are part of the internal grammar of language users or figure only in linguistic abstractions is subject to debate. While most formal theories assume some psychological reality for abstract rules, usage-based theories are more skeptical on this point. In exemplar-based approaches, paradigmatic analogy is seen as the major mechanism in morphology (e.g. Bybee ). The advantage of this view is that it probably accommodates the dynamic structures found in language change more easily. The exemplar-based approach has also been used, with considerable success, for analogical modeling of changes in morphology, for example by Chapman and Skousen (). They, too, emphasize its conceptual advantage: “The same mechanism that produces language produces language change. Speakers invoke analogy every time they produce language, so the potential for creating new forms is present with every utterance” (Chapman and Skousen : ).
While many historical changes seem to fit naturally into such an approach to morphology, theoretical models assuming abstract representations as central to the processing of morphologically complex items will have to find ways to ‘gracefully integrate’ probability, dynamics, and the creativity of language users into their models in order to account for (analogically motivated) extensions and changes of existing structures, as in the cases of reanalysis discussed earlier in this section.

Productivity

According to Aronoff (: –) “the simplest task of a morphology, the least we demand of it, is the enumeration of the class of possible words of a language.” This means that every theory of morphology has to find a way to account for the productivity of morphological processes, that is, the possibility of creating new lexical items. Productivity is often seen as a property of symbolic word-formation rules. As Bauer (: ) rightly points out, “we can speak of productivity in synchronic terms, or of changes in productivity in diachronic terms, but not of productivity as such in diachronic terms.” However, although productivity is not a diachronic notion, it is an important notion in diachronic studies of word-formation as well, since, ultimately, all changes in word-formation might be seen as changes in productivity.

Morphological change as change in productivity

Over time, a certain morphological process can become productive, its productivity can change, or it can lose its productivity. With regard to the loss of productivity, one might




think of English ‑th, which must have been used productively in former times as an affix for the formation of abstract nouns. We still have words like warmth, width, or length in which we can identify this suffix, but it cannot be used to form new nouns in present-day English. Other examples are the verbal prefixes te- in Dutch and to- in English, mentioned above in §.. On the other hand, we find many new word-formation patterns that are gaining productivity over time. A recent example is the use of ‑gate as a suffix for the formation of nouns designating a (political) scandal. The process became productive in English1 and it is this productivity that we have to account for in morphology. Through borrowing, other languages adopted this possibility and by now suffixation with ‑gate has become a productive word-formation process in some of these languages, too (discussed later in this section).

Things get complicated methodologically when it comes to changes in the degree of productivity of existing word-formation processes. It has been pointed out time and again that productivity is not a matter of all or nothing, but of more or less (cf. for example Bauer : ). Hence, in the literature on productivity, we find a basic agreement that productivity is not an absolute notion but rather a gradual one. To determine the degree of productivity, we need some notion of probability. How likely is it that an existing word-formation process will lead to new formations? What factors determine its applicability?

Some scholars have introduced the notion of ‘semi-productivity’ for processes that “can to some degree be extended to new forms” (Pinker and Prince : ), but without a further specification of what is meant by ‘some degree’. For Jackendoff (: , : ) semi-productivity seems to be characteristic of word-formation rules in general. They are generalizations about existing complex words that have to be learned and stored individually.
Such ‘lexical redundancy rules’ can be used creatively, but they cannot be relied on for the formation of novel forms since many complex words show unpredictable irregularities. With respect to the synchronic study of productivity in present-day languages, a whole range of sophisticated psycholinguistic and corpus-based methods has been developed to determine the degree of productivity of individual morphological processes, especially by Harald Baayen and his collaborators (see Baayen  for an overview).

With respect to changes in productivity, however, we are still at the beginning, and the methodological problems and challenges are enormous. Since psycholinguistic experiments are not available for diachronic studies and since the size and the quality of historical text corpora are still insufficient for the use of sophisticated statistical means in many languages, historical linguists are often hesitant about the notion of productivity in diachrony. They tend to rely on historical dictionaries to determine the number of words (i.e. word types) of a certain word-formation pattern for different historical stages. Tentative statements on changes in productivity are based on the comparison of these data.2 This means that we resort to a fairly simple notion of productivity, like the one found in Bauer (: ): “A morphological process can be said to be more or less productive according to the number of new words which it is used to form.” By counting types, we confine ourselves to what has been called the ‘realized productivity’ of a certain word-formation process (Baayen ).

1 The English Wikipedia has a long list of scandals with the ‑gate suffix (https://en.wikipedia.org/wiki/List_of_scandals_with_'-gate'_suffix).
2 See Plag () for a discussion of the limitations and the possibilities of the use of dictionaries in studies about morphological productivity.
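The type- and hapax-based measures referred to here (the type count V as ‘realized productivity’, and Baayen’s hapax-conditioned measure P = V(1, N)/N) can be sketched in a few lines. The function, the toy word list, and the plain string match on the suffix below are my own illustration, not part of the studies discussed; a real study would of course verify morphological structure rather than match strings.

```python
from collections import Counter

def productivity_measures(tokens, suffix):
    """Toy corpus-based productivity measures for one suffix.

    Illustrative only: a plain string match cannot check morphological
    structure (it would count 'path' as containing English -th).
    """
    hits = [t for t in tokens if t.endswith(suffix)]
    counts = Counter(hits)
    n = len(hits)            # N: token count of the category
    v = len(counts)          # V: type count ('realized productivity')
    hapaxes = sum(1 for c in counts.values() if c == 1)
    # Baayen's category-conditioned measure P = V(1, N) / N:
    # the share of hapax legomena among the category's tokens.
    p = hapaxes / n if n else 0.0
    return {"tokens": n, "types": v, "hapaxes": hapaxes, "P": p}

# Invented mini-corpus for illustration.
corpus = ("watergate irangate irangate dieselgate donergate "
          "deflategate watergate bridgegate").split()
print(productivity_measures(corpus, "gate"))
```

On this toy corpus, four of the six -gate types occur exactly once, so half of the category’s tokens are hapaxes; large hapax shares are read as a sign of productive use.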


Dictionary data have been used, for example, for the examination of rival suffixes like ‑ness and ‑ity in English (Romaine ; Aronoff and Anshen ). These studies show that ‑ness is older; they show an increasing use of ‑ity from the thirteenth century onwards; and they also show an increase in the number of nouns taking both suffixes. While these are of course important results, the procedure of comparing the number of types for different periods by making use of historical dictionaries “gives only a partial glimpse of productivity because dictionaries list only actual, attested and not possible words” (Romaine : ). This procedure will not tell us anything about the probability or the applicability of a certain word-formation process for the formation of new words. This is central to the notion of productivity as used in synchronic studies, but it is probably out of reach in historical research that relies on dictionary data.

It has been claimed that the best that diachronic studies can arrive at is the description of changes in the restrictions on a word-formation process. These restrictions can be defined in terms of structural characteristics of the input and/or the output. Among other things, Romaine () discusses the morphological form of the base or the phonological properties of input and output as factors restricting productivity. Syntactic and semantic factors as well as lexical factors might limit the productivity of a morphological process, too. Regarding the potential of a word-formation process, Cowie and Dalton-Puffer (: ) write: “one basic assumption is that its productivity is inversely proportional to the number of restrictions which apply to it.” Changes with respect to these restrictions thus influence the productivity of the process. This approach has been successfully applied to the development of German nouns in ‑er by Scherer ().
She describes the changing productivity of ‑er as depending on the (changing) characteristics of the input and the output of the pattern. Another example is Demske’s () study on the diachronic development of German ‑ung. She shows that the verbal bases of ‑ung nouns have been subject to changing semantic restrictions, resulting in decreasing productivity of the [V + ‑ung] pattern. Certain verb classes (e.g. inchoative verbs or durative verbs) that have been used as base verbs in Middle High German and Early New High German can no longer be used as input. In a recent paper about word-formation with ‑ung, Hartmann (a) adds a cognitive-linguistic perspective to the analysis. Taking up the notion of cognitive ‘construal’ or conceptualization (of entities and situations), he argues “that word-formation patterns carry image-schematic conceptual content, which is subject to diachronic change” (Hartmann a: ). These changes are reflected in word-formation constraints and thus in the productivity of the patterns.

However, as Cowie and Dalton-Puffer (: ) point out, the notion of productivity as being inversely proportional to the structural restrictions that apply to the word-formation process has been strongly criticized. First of all, even a not very productive process can be highly productive in a particular, narrowly defined (formal or semantic) niche (Hüning ; Lindsay and Aronoff ). More generally, in many synchronic studies the restriction approach has been enhanced with or even replaced by a quantitative notion of productivity that takes seriously the idea of potentiality and probability. With larger and more balanced historical corpora becoming available for English and some other languages, Cowie and Dalton-Puffer (: ) “believe that the progress made in synchronic quantitative approaches to productivity is ripe for exploitation by diachronic studies”.
The growing body of corpus-based studies on the interrelation between language use and language




change proves that they were right. Hilpert (, ) and Hartmann (b), for example, try to bring together recent developments in corpus linguistics with a usage-based construction grammar approach to word-formation change. This is a promising avenue and it might also shed new light on the interrelation between structural constraints and productivity.

In synchronic studies of productivity, hapaxes play an important role, as they are thought to indicate the productive use of a word-formation process (see Baayen  for different conceptions and measures of morphological productivity). It has, however, been shown that hapax legomena are not the best indicators of productivity in small samples of text. Since historical corpora are typically much smaller than corpora of present-day language use, Cowie and Dalton-Puffer () adopt an approach of Baayen and Renouf () and look for new types of a certain morphological make-up as they appear in a diachronic corpus through time. When does a word first show up in a historical corpus? This way, they adapt the dictionary-based methodology for empirical, corpus-based data, which seems to be a reasonable step forward. It will most probably help us to find indications for the productive use of a word-formation process and for changes in productivity.

Let me return to the ‑gate suffix in order to illustrate this point. When I studied the use of ‑gate, I concluded that the suffix had become a productive means of word-formation in Dutch, but not in German (Hüning ). Fifteen years ago, I could find many formations in Dutch that were not borrowed from English but were formed by speakers of Dutch. German, on the other hand, had almost only loans with ‑gate. At that time it was used at most incidentally as a means of word-formation in German.
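The ‘new types through time’ procedure described above, recording when a word of a given morphological make-up first shows up in a diachronic corpus, can be sketched as follows. The function name, the binning into decades, and the toy data (with partly approximate, purely illustrative dates) are my own, not the implementation used in the studies cited.

```python
from collections import defaultdict

def new_types_per_period(dated_tokens, suffix, period=10):
    """Group first attestations of suffixed types into periods.

    dated_tokens: iterable of (year, word) pairs from a diachronic corpus.
    Returns {period_start_year: [types first attested in that period]}.
    """
    first_seen = {}
    for year, word in sorted(dated_tokens):        # chronological order
        if word.endswith(suffix) and word not in first_seen:
            first_seen[word] = year                # record first attestation
    by_period = defaultdict(list)
    for word, year in first_seen.items():
        by_period[(year // period) * period].append(word)
    return dict(by_period)

# Toy data; dates are partly approximate and purely illustrative.
data = [(1974, "watergate"), (1987, "waterkantgate"), (1987, "waterkantgate"),
        (2011, "dönergate"), (2015, "dieselgate")]
print(new_types_per_period(data, "gate"))
```

A rising count of new types per period is then read as an indication of increasing productive use of the pattern, which is the logic behind the corpus-based findings discussed next.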
An interesting formation has been Waterkantgate, a blend of Watergate and Waterkant (a North German word for coast), referring to a scandal concerning the mysterious circumstances of the death of Uwe Barschel, the former Minister-President of Schleswig-Holstein. Waterkantgate seemed to be an early or even the first German ‑gate formation. In a recent corpus-based study, Flach, Kopf, and Stefanowitsch () showed that there have been a few earlier German formations, but the use of ‑gate for the formation of new words has been languishing for quite some time. After Waterkantgate the suffix was only used incidentally for the coining of new words. This changed in the new millennium. Besides dozens of ‑gates borrowed from English, Flach, Kopf, and Stefanowitsch found an increasing number of new words with ‑gate that were clearly formed in German, especially from  onwards. The increasing use of ‑gate is accompanied by an increasing ‘trivialization’: words in ‑gate are no longer used for political scandals exclusively, but also for ‘trivial tabloidesque incidents’ (like Döner-gate, ‘a footballer throws a doner at someone’). It has become an appropriate means of word-formation in the yellow press and social media. This means that, by now, ‑gate exhibits a certain degree of productivity not only in Dutch, but in German, too. By analyzing the new types as they appeared throughout the last fifteen years, it has been possible to show the increasing productivity and the changing semantic and pragmatic characteristics of the word-formation process.

Since this is, admittedly, not a central category of German morphology, one might ask whether facts like these are of any importance to morphological theory. One might want to defend the view that the theory has to account for the major systematic processes and can ascribe facts like these to performance factors, thereby excluding them from the domain of the theory.
My point is, however, that a development like the one described above is by no means exceptional. On the contrary, it is typical of what is going on in


word-formation. Productivity is always a matter of degree and changes of productivity are always gradual changes. Therefore, in my view, even synchronic morphological theories have to find ways to include the notion of (gradual) productivity, at least if they aim for some sort of descriptive and/or explanatory adequacy. Booij () is well aware of this necessity. In his ‘Construction Morphology’ (see Masini and Audring, Chapter  this volume), word-formation patterns and processes are accounted for by schemas that specify the form and the meaning of complex words. These schemas not only account for the productive processes but also motivate the properties of sets of existing complex words. The same has been claimed for word-formation rules which are used to formalize productive processes, but at the same time function as ‘redundancy rules’ (Jackendoff ). Therefore, Booij () considers the “need to assign a productivity index to schemas”. He acknowledges the gradual character of productivity and wants to account for the degree of productivity of a schema by using such indices. How this has to be done remains, however, unclear. Booij does not go into any detail and I am not aware of any elaborate attempt to introduce such productivity indices into morphological theory.3 In order to do so, it would be necessary to include usage data into the theory. Such a productivity index would have to reflect the ‘realized productivity’ as well as some notion of probability. This is not a trivial task and we would need a further integration of formal and usage-based approaches to word-formation in order to better understand the phenomenon of morphological productivity. This is, in my view, a prerequisite for any meaningful account of morphological change within theoretical frameworks.

Factors and explanations

Theories defining morphological productivity exclusively as a property of the language system “have not led to models with predictive power for degrees of productivity” (Baayen : ). Apparently, the degree of productivity of a certain morphological process is only in part determined by language-internal, structural factors. Bauer (: ) suspects that the extent to which a certain morphological process is exploited in language use “may be subject unpredictably to extra-systemic factors”. Most likely, morphological productivity and changes in morphological productivity are subject to a wide range of interacting structural, psychological, and social factors. These factors may belong to different (sub)disciplines like psycholinguistics, pragmatics, stylistics, sociolinguistics, dialectology, contact linguistics, etc. Therefore, Baayen (: ) is probably right when he concludes that “in order to come to a full understanding of the challenging phenomenon of morphological productivity, a truly interdisciplinary data-driven effort is required.” Let me try to illustrate this point.

External factors like language contact can of course also lead to changes, and Anderson (: ) points out that even “instances of borrowing or other changes in the lexical stock of a language, since they affect the content of the lexicon, are in some sense ‘morphological’ changes”. As an example, Anderson mentions the borrowing of deverbal nouns in ‑ment from French, which led to a productive pattern in English. Speakers of English extended the systematic relations found among borrowed items to words of the native Germanic lexicon. A great many affixes entered our languages this way. One could mention the success of many prefixes and confixes from Latin and Greek in this context (like anti-, ex-, hyper-, pre-, super-, etc.), and the increasingly productive use of the suffix ‑gate in Dutch and German (discussed in §..) is a recent example.

The other way round, English borrowed the prefix über from German. Most probably, Nietzsche’s Übermensch () served as a model and as a leader-word for the incorporation of über into English. In English, the prefix is often spelled without the umlaut, and it has developed a meaning (‘super’) with which it is used productively in nouns like ubermodel (Naomi Campbell) or uberpolitician (Bill Clinton) and with adjectives like ubercool. Most interestingly, this use of ü/uber is now spreading all over the world. We find examples in many languages. In Dutch, adjectives like übergezellig ‘super cozy’ or überleuk ‘super nice’ are most probably formed not on the basis of German but of English loans like übercool. And it looks like even German has ‘reimported’ the prefix with the meaning ‘super’ and the connotation of ‘cool’ and ‘hip’ from English.

The use of ‑gate in German and of uber- in English cannot be explained by systematic, language-internal factors alone. We have to call in language contact and borrowing for a proper description of the developments. Furthermore, the use of such new elements of word-formation often shows stylistic peculiarities. Finding a concise name for a scandal in ‑gate is especially useful for headlines, and many of the words formed with ‑gate are found in newspaper headlines or teasers. The prefix ü/uber-, too, is found mostly in current popular culture, due to its connotations (‘cool, hip’). The productive use of these word-formation elements is, in other words, bound to certain text types.

This is not an exceptional finding. Many word-formation patterns are tightly connected to certain registers and/or text types and their productivity has to be related to some extra-systemic factors. Take, for example, compounds consisting of a noun and a past participle. Adjectives of this type (like computer-controlled or analogy-driven) are very common not only in English but in other Germanic languages, too (Hüning and Schlücker ). The pattern itself is an old one: Koziol (: ) mentions Old English compounds like blōdgemenged ‘blood-mixed’, goldhladen ‘gold-decorated’, or handworht ‘hand-made’. According to Carr () and Koziol () this type was used productively in Old English, but lost its productivity after that. Nevertheless, it never completely vanished from the Germanic languages. The first attestation of book-learned is from Middle English, and Shakespeare used adjectives like bloodstained. Adjectives of this type can be found in Old and Middle Dutch and German, too (like Dutch bloetdronken ‘bloodthirsty’). From the Renaissance onwards we find new formations in English as well as in German and Dutch. The real ‘revival’, however, only began in the nineteenth century (Marchand : ). This holds not only for English but also for German and Dutch. The development shows astonishing parallels in the different languages and it culminates in an ‘explosion’ of this type of word-formation in the twentieth century. By now, its productivity seems almost unlimited, and we also find the same wide range of semantic possibilities relating the meaning of the base verb (in the participle) to the noun (van den Toorn ; Wilss ; Pümpel-Mader et al. ). Compare the following examples from German, Dutch, and English, illustrating only a small part of the (semantic) possibilities of this word-formation process:

3 Jackendoff and Audring (Chapter  this volume) discuss the distinction between productive and nonproductive schemas in their chapter on ‘Relational Morphology in the Parallel Architecture’.
The productive use of these word-formation elements is, in other words, bound to certain text types. This is not an exceptional finding. Many word-formation patterns are tightly connected to certain registers and/or text types, and their productivity has to be related to some extra-systemic factors. Take, for example, compounds consisting of a noun and a past participle. Adjectives of this type (like computer-controlled or analogy-driven) are very common not only in English but in other Germanic languages, too (Hüning and Schlücker ). The pattern itself is an old one: Koziol (: ) mentions Old English compounds like blōdgemenged ‘blood-mixed’, goldhladen ‘gold-decorated’, or handworht ‘hand-made’. According to Carr () and Koziol () this type was used productively in Old English but lost its productivity after that. Nevertheless, it never completely vanished from the Germanic languages. The first attestation of book-learned is from Middle English, and Shakespeare used adjectives like bloodstained. Adjectives of this type can be found in Old and Middle Dutch and German, too (like Dutch bloetdronken ‘bloodthirsty’). From the Renaissance onwards we find new formations in English as well as in German and Dutch. The real ‘revival’, however, only began in the nineteenth century (Marchand : ). This holds not only for English but also for German and Dutch. The development shows astonishing parallels in the different languages, and it culminates in an ‘explosion’ of this type of word-formation in the twentieth century. By now, its productivity seems almost unlimited, and we also find the same wide range of semantic possibilities relating the meaning of the base verb (in the participle) to the noun (van den Toorn ; Wilss ; Pümpel-Mader et al. ). Compare the following examples from German, Dutch, and English, illustrating only a small part of the (semantic) possibilities of this word-formation process:


 ()

 ̈  wassergekühlt, watergekoeld, water-cooled—‘cooled with water’ schneebedeckt, sneeuwbedekt, snow-covered—‘covered by/with snow’ instinktgesteuert, instinctgestuurd, instinct-driven—‘driven by instinct’ sonnengereift, zongerijpt, sun-ripened—‘ripened in the sun’ zukunftsgerichtet, toekomstgericht, future-oriented—‘oriented towards the future’

The massive increase in productivity in different languages almost looks like a conspiracy (we find productive use of this pattern in the Scandinavian languages, too). Speakers of different languages started to exploit this possibility in the same period and in a very parallel way. This convergent development cannot be explained by structural factors alone. Language structure does play a role, but only in a very general way. The pattern was able to become productive in Germanic languages because compounding is a prominent means of word-formation. Romance languages, on the other hand, will probably never use this pattern productively, since compounding is much more restricted in general. If we only look at the structural possibilities, in fact nothing really changed. The word-formation type has been around in Germanic for at least a thousand years, and it has been used (at least incidentally) for the formation of new words for at least five hundred years. But if we confine ourselves to such a systemic view, we miss one of the spectacular changes in the word-formation of Germanic languages, that is, the enormous increase in productivity of this pattern in recent times.

For a proper understanding, we will have to consider extra-systemic factors. First of all, we will have to describe changes in the use of the pattern. In a recent corpus-based study on this word-formation type in English, Hilpert () has shown a substantial increase in token and type frequency and the rise of several highly prolific word families (involving the participles based, related, and oriented). The increasing use of compounds of this type also strengthens the pattern (or construction) as such (a process called ‘upward strengthening’ by Hilpert). Furthermore, it has been observed that the use of the adjectives in question is to a high degree differentiated according to text types (cf. for example van den Toorn ; Fleischer and Barz : ).
A great many of these compounds are characteristic of special languages, in particular technical terminology. Der luftgekühlte Kondensator ‘the air-cooled condenser’, die rechnergestützte Analyse ‘the computer-aided analysis’, ein managementgeführtes Unternehmen ‘a management-led company’, schülerzentrierter Unterricht ‘pupil-centered teaching’—these are all technical terms, typically used in some kind of jargon. Hence, their use has to be related to the development of (technical) jargon from the nineteenth century onwards. As such, it is a phenomenon of special types of written language in the first place. However, through their use in the media, many of these compounds have by now penetrated the common language. Wilss (: ch. IX), in this context, talks about expressions that become common through a “Prozeß der Verwissenschaftlichung unserer Erfahrung” (‘a process of scientification of our experience’), which is, admittedly, very vague but nevertheless an interesting point. Such a development is not language-specific; it influences the use and the structure of all languages written and spoken under comparable cultural and social conditions. Through globalization and increasing language contact, it becomes understandable that speakers of related languages make comparable choices in their language use, which might explain the astonishing parallels in the changes just described.

The interdependence of the development of certain text types and the productive use of certain word-formation patterns has not yet been investigated intensively and is not yet well understood. I am convinced, however, that it is necessary to look into these interrelations more thoroughly, at least if we want to come to a better understanding of what is going on in morphological change. I am aware that this exceeds the scope of most (formal) morphological theories. Nevertheless, I think that a good theory of language structure should be sensitive to such factors, since they determine to a large degree what speakers do with the structural possibilities of their language in actual use.

Conclusions

Up to now, we do not have a dedicated ‘theory of word-formation change’ and I am not sure whether we need one. Scherer () wanted to restrict word-formation change to changes in productivity due to (changes in) structural constraints of the input and the output of word-formation patterns (cf. §.). To me, such an approach seems too restricted, as I have tried to show in this chapter. I also do not share the skepticism of Joseph, who doubts the feasibility of a general theory of morphological change because “all that can be done is to catalogue tendencies” (Joseph : ). His doubt is related to his conception of a ‘general’ theory as being necessarily a predictive theory, which, in my view, is a misconception. This brings me back to Haspelmath’s () critical remarks on theoretical frameworks, mentioned in the introduction. The idea that every theory has to be a predictive theory is an essential ingredient of linguistic theorizing. Descriptive and explanatory adequacy in the sense of the strict standards of (formal) theoretical models do not seem within reach in historical linguistics. Criteria like predictability or falsifiability, central to most formal models, are not applicable, or only usable in a limited way, for the description and explanation of language change. Accounts of morphological change “are generally retrospective only, looking back over a change that has occurred and attempting to make sense of it” (Joseph : ). More often than not, the best we can achieve is a ‘conjectural history’ in the sense of Keller (). Since there is no such thing as causality (in the strict sense) in language change, there simply is no right or wrong. The best we can do is to tell a plausible story that explains how a certain change might have happened.
Every description or explanation of change has to be probabilistic by nature, and even that is difficult since our data are scarce (the speakers are dead, we do not have spoken data, the corpora are relatively small and unbalanced, etc.). Nevertheless, during the past decades there has been considerable progress with regard to this ‘story-telling’. In a wide range of subfields of linguistics, models of language change have been developed as well as new ways of testing these models against data. Improved methods have become available to use historical text corpora, also with regard to the study of word-formation. As stated in a programmatic article, written by a number of renowned scholars from different subfields of linguistics, “diverse approaches have begun to converge on a general framework that models language change as a dynamic population-based process, whereby speakers choose variants from a pool of linguistic variation in a way that is governed by


both social and cognitive constraints” (Hruschka et al. : ). Developing such a general model of language change is seen as an ambitious but achievable goal. Of course, the critical thoughts of Haspelmath also apply to the development of such a framework, but because of the interdisciplinary character of this endeavor, the risks seem to be lower. It is the methodological progress especially that makes such models of language change as a dynamic population-based process attractive and promising. This also applies to the study of morphological change. I have tried to single out some of the challenges that confront us when studying word-formation changes and changes in productivity, in particular. Interdisciplinary data-driven research promises a better understanding of what is going on in these changes. Thinking about language change in such a way will, of course, have consequences for grammatical theory. As Hruschka et al. (: ) put it: “Another challenge is to develop models of language structure that account for variability in use and are suitably dynamic to enable learning and change over time.” Theoretical models of language structure are usually concerned with the structural results of language change, but change is change in use. So, in order to ‘gracefully integrate’ the dynamics of language and language change, theoretical models will need to become more attentive to usage-based perspectives.

A I want to thank Jenny Audring and the two anonymous reviewers for many valuable comments on an earlier version of this chapter.


Morphological variation

. V:   

Variation is a property of living languages and a fundamental notion in linguistics. It reflects the fact that languages do not appear to be structurally homogeneous, at least superficially, and insights can already be found in Sapir (: ), where it is pointed out that variation characterizes languages not only cross-dialectally but also as far as idiolects, that is, varieties spoken by individual speakers, are concerned. According to Weinreich, Labov, and Herzog () variation is the norm rather than the exception cross-linguistically, and Labov’s () pioneering work has demonstrated that patterns of variation enrich our understanding of the way change occurs. The study of variation is closely related to historical linguistics (see e.g. Hüning, Chapter  this volume), where research is interested in the genesis of variation and tries to identify the processes that have triggered innovations, such as for instance analogy. Nevertheless, variation should be distinguished from change, since an innovative form may remain side by side with a previously existing form for a very long period. Research on synchronic variation, by contrast, is centered on discovering the varying forms and structures of a specific linguistic system in a particular period and aims to deliver hypotheses on the factors that determine and constrain the observable phenomena. As a corollary, the researcher could test the strategies that underlie grammatical structures and thus improve the theoretical model he/she works on.

Variation is due to various causes, both language-internal and -external. It is often manifested as sociolinguistic variation, or as geographic variation represented by dialectal diversity.
It may also be register variation in the spoken/written dimension, as pointed out by Bisang (: ), which partially overlaps with social factors, in the sense that writing often favors certain linguistic structures to the disadvantage of others. Social factors such as
social status, age, gender, educational background, etc. can affect the linguistic behavior of speakers, but their examination goes beyond the scope of this work which proposes to investigate morphological variation in its synchronic dimension as a phenomenon of a principally language-internal nature. Nevertheless, it does not neglect the perspective of morphological variation triggered by language-external factors, as, for instance, the crucial role that language contact may play in linguistic change. More specifically, in this chapter, morphological variation is conceived of as referring to variable word forms or competing word-formation structures/patterns of a particular linguistic system, whose creation, retention, and distribution are governed by language-internal factors and constraints or by contact with another language. These variable forms and patterns involve morphologically complex words and manifest themselves through a variety of roots/stems, affixes, or other kinds of word structure. Language-internal factors often reflect mechanisms of change such as analogy, grammaticalization, reanalysis, etc. As will be seen in this chapter, it is difficult to trace a separation line between these diachronic factors and the synchronic, language-internal ones, such as differences in productivity, selectional properties, phonological conditions imposed by the base, etc. The chapter also focuses on the dimension of language contact, since intense contact may create innovative structures or enhance variation that has started for language-internal reasons. In a language-contact situation, borrowed material from a dominant language may co-occur, side by side, with native material of an affected language, depending on the distribution of power and linguistic prestige in a given situation. As argued by Léglise and Chamoreau (: ), the exact role and interplay of the notions of ‘variation’ and ‘contact’ have not yet been fully explored. 
Therefore, among other things, the chapter proposes to provide insights into filling this gap. It is worth noting that morphological variation relates to the complexification of morphology and grammar in general, since it creates redundancy, in the sense that more than one unit or more than one structure are utilized to express the same notion.1 Nonetheless, variation could also be seen as an intermediate stage in a simplification process, since the addition of innovative, transparent forms or structures may lead to a more regular rearrangement of grammatical structures, and the prevalence of innovations may lead to the ultimate replacement of irregular or opaque forms. Space limitations do not allow me to provide a detailed analysis of the relation between variation and the issues of complexification and simplification. However, hints will be given in §., with illustrative examples from dialectal variation. To provide the reader with a clear picture of morphological variation and a better understanding of the argumentation, I will use examples from Modern Greek (hereafter Greek), a fusional language which is sufficiently described and analyzed, both diachronically and synchronically. It is rich in morphological structure, displays variation across the entire range of morphological processes, shows a particularly developed dialectal variety, and has also been in contact with well-described but typologically different languages, such as the semi-fusional Romance and the agglutinative Turkish. Data are drawn not only from the standard language but also from dialects, which diverge in significant and interesting ways and offer a rich and fertile territory where morphological variation can be profitably studied. For instance, dialectal variation allows us to draw conclusions on what lies behind the differences in morphological systems or helps us to determine which phenomena are

See Trudgill () for the notions of complexification and simplification in grammar.

correlated with particular options and how these options are mapped onto the morphology of the language the dialects belong to. The chapter is structured as follows: In §., I introduce the concept of morphological variation. In §., I discuss in detail several types of morphological variation in the major morphological processes, that is, inflection, derivation, and compounding, and I demonstrate how language-internal tendencies alone, or assisted by language-external factors, can create variation in inflectional paradigms, derivatives, and compound structures. The chapter concludes in §. with a summary of claims and proposals, and hints for future research.

Variation in morphological processes

Variation in all its facets manifests itself both in morphological processes and components. There is variation in the form of units participating in morphological formations, but also variation in structures, in that a morphologically complex word may display competing internal structures with no difference in meaning. In this section, I argue that variation is not accidental and show that it can be due to various factors, mainly language-internal ones, though, to a certain extent, to language-external factors as well.

Variation in inflection

As already noted, competition of different forms and structures is widespread in natural languages. It may occur even in the most productive morphological process, such as inflection, where specific nouns or verbs can inflect in more than one way (see Rainer  and Plag ). For instance, Paster (: –) provides evidence for overabundance (in the sense of Thornton ) and optional multiple exponence in the nominal morphology of Maay, a Cushitic language spoken in southern Somalia, where consonant-final nouns display three different realizations: one form ending in ‑o, one ending in ‑yal, and a third bearing both ‑o and ‑yal (e.g. yahas ‘crocodile’ > yahas-o / yahas-yal / yahas-o-yal ‘crocodiles’). Similarly, in Lesbian, the Greek dialect of the northern Aegean island of Lesbos, a considerable degree of overabundance is manifested in the verbal paradigm of the imperfect tense of deponent verbs (e.g. káθ-u-mi2 ‘I sit’ > káθ-u-mna / kaθ-ó-mna / káθ-u-mdan / káθ-u-mdun / kaθ-ó-mdan / kaθ-ó-mdun ‘I was sitting’, where kaθ- is the stem, ‑o-/‑u-3 is the thematic vowel, and ‑mna/‑mdan/‑mdun the varying endings), as illustrated on the dialectal map of the Laboratory of Modern Greek Dialects of the University of Patras (http://lesvos.lmgd.philology.upatras.gr).

2 Greek forms are given a broad phonological transcription, and stress is noted when needed for argumentation purposes.
3 The form variation of the thematic vowel is only phonological because in this dialect unstressed /o/ becomes /u/ (see also footnote ). Thus, there is no real morphological variation; what varies is the stress position, which may trigger a change of /o/ into /u/. By contrast, ‑mna, ‑mdan, and ‑mdun are morphological variants. See §... for details concerning the ‑dan forms.


Variation in inflection seems to run against the Paradigm Economy Principle proposed by Carstairs (), and generally the Language Economy Principle, which is often translated as Biuniqueness (one form/one meaning, Stork ).4 However, according to Rainer et al. (: ) the idea that morphological pleonasm should disappear is too simplistic, and profound research is needed in order to determine which conditions apply to the operation of economy. Interestingly, Stolz (: ) affirms that pleonastic morphology ‘dies hard’ and proposes that it can be characterized as natural if it corresponds to the overall structure of the examined language.5 In what follows, I show that explanatory factors such as a tendency for paradigm leveling, the increased productivity of certain operations, or even heavy contact with another language may cause or enhance variation in inflection, resulting in complexification, reduction, or reformulation of paradigms. Resistance to innovative tendencies in favor of inherited forms and patterns leads to the creation of long-standing variation in certain specific contexts.

Paradigmatic leveling

Consider the inflectional paradigm of the mediopassive imperfect tense (past imperfective) of inflection-class II verbs (ICII)6 in Aivaliot, a Greek dialect of western Asia Minor—today western Turkey.7 It exhibits variation in the 1st and 2nd person singular, where forms in ‑dan alternate with older forms in ‑na.

()       vastiémi ‘to be held’
    1SG  vasti-ó-mna/vasti-ó-mdan ‘I was being held’
    2SG  vasti-ó-sna/vasti-ó-stan ‘you were being held’
    3SG  vasti-ó-dan etc.
    1PL  vasti-ó-mastan
    2PL  vasti-ó-sastan
    3PL  vasti-ó-dan

Here, vasti- is the stem allomorph in the mediopassive context, ‑o- is the thematic vowel, and ‑mna/‑mdan, ‑sna/‑stan, ‑dan, ‑mastan, ‑sastan are the portmanteau mediopassive-past-person-number endings. The innovative 1SG and 2SG forms, vastiómdan and vastióstan, have resulted from a tendency towards intra-paradigmatic leveling, triggered by the diffusion of ‑Dan (with

4 For deviations from Biuniqueness, see also Arkadiev and Klamer (Chapter  this volume).
5 Stolz (: ) reports a remarkable case of double case-number-gender marking on Lithuanian definite adjectives, where the definiteness marker appears between two identical portmanteau morphemes (e.g. nauj-ą-j-ą lit. new-..--.. ‘the new’).
6 Synchronically, the Greek-based Aivaliot has two main verbal inflection classes depending on the presence or absence of a systematic X ~ Xi stem allomorphy (e.g. tim ~ timi of the verb timó ‘to honor’). ICII verbs originate from the old “contract” verbs (see Ralli  for more details on Greek inflection classes).
7 Aivaliot was spoken in the Asia Minor area of Kydonies (Aivali), before . Today, it is confined to dialectal enclaves of the Aegean island of Lesbos, inhabited by Aivaliot refugees. For information about this dialect, see http://lesvos.lmgd.philology.upatras.gr and Ralli (a, , ).


initial /d/ or /t/, depending on the phonological context)8 to all paradigmatic cells. Historically, ‑Dan originated from the 3rd person plural ending (‑nto in Ancient Greek), which marked the features of past, mediopassive, third person, and plural number. Its extension to the entire plural paradigm was probably prompted by a tendency to formally mark the distinction between the present and the imperfect mediopassive, according to which the first ends in /i/ while the second ends in /a/, as exemplified in ():

()       vastié/ómi ‘to be held’
         a. Present                          b. Imperfect
    1SG  vasti-é/ó-mi ‘I am held’            vasti-ó-mna/-mdan ‘I was being held’
    2SG  vasti-é/ó-si ‘you are held’         vasti-ó-sna/-stan ‘you were being held’
    3SG  vasti-é/ó-ti etc.                   vasti-ó-dan etc.
    1PL  vasti-ó-mastin                      vasti-ó-mastan
    2PL  vasti-é/ó-stin/-ósastin             vasti-ó-sastan
    3PL  vasti-ó-din                         vasti-ó-dan

Vasti- is the stem allomorph of the mediopassive context, ‑e-/‑o- are thematic vowels, ‑mi, ‑si, ‑ti, ‑mastin, ‑stin/‑sastin/‑din the mediopassive-person-number endings, and ‑mna/‑mdan, ‑san/‑stan, ‑dan, ‑mastan, ‑sastan, ‑dan the mediopassive-past-person-number endings. Propagation of ‑Dan to all cells caused a reduction of the range of its features. For instance, its spread to the singular number prompted the loss of the association with the plural. However, the phenomenon as a whole cannot be considered as a simplification of the paradigm because of the long-term coexistence of alternating forms in the first and second person of the singular number. Note that another type of variation exists in the present tense, this time with respect to the thematic vowel and the second person plural form. First, there is a proliferation of ‑o- in the singular number, where it tends to become the prevalent form (b). This may be due to an intra-dialectal tendency to enhance the formal distinction between the two inflection classes in the mediopassive voice, ICI and ICII, ICII being characterized by an ‑o- thematic vowel throughout the paradigm of the present tense. Compare the following examples:

()       a. ICI sózumi ‘to be saved’9        b. ICII vastié/ómi ‘to be held’
    1SG  sóz-u-mi ‘I am saved’               vasti-é/ó-mi ‘I am held’
    2SG  sóz-i-si ‘you are saved’            vasti-é/ó-si ‘you are held’
    3SG  sóz-i-ti etc.                       vasti-é/ó-ti etc.
    1PL  suz-ó-mastin                        vasti-ó-mastin
    2PL  suz-ó-stin/-sastin                  vasti-é/ó-stin/-ósastin
    3PL  suz-ó-din                           vasti-ó-din

8 -Dan is subject to voice assimilation, according to which /t/ appears after non-voiced /s/ and /d/ after voiced /n/ and /m/.
9 In Aivaliot, as in most Northern Greek dialects, /e/ and /o/ are raised to /i/ and /u/, respectively, because of a general phonological law raising mid-vowels in unstressed position. Moreover, another law deletes unstressed /i/ and /u/, unless they originate from unstressed /e/ and /o/. Thus, the underlying forms of sózumi, sózisi, sóziti are sózome, sózese, sózete. For vowel deletion, see also footnote 3.


Second, there seems to be a preference for pairing the 1PL and 2PL cells, as depicted by the present plural forms vastiómastin and vastiósastin, where the innovative form in ‑sastin tends to expel the older one in ‑stin. This pairing, also exhibited in the singular number of the imperfect tense as illustrated in (), seems to contradict Joseph’s (: –) claim that speakers tend to provide a generalization over the subjective second and third person forms by grouping together their paradigmatic cells in the formation of verbal forms.10

Heteroclisis

Inflectional variation closely relates to heteroclisis, that is, to the property of a lexeme to inflect according to more than one inflection class (Stump b). For Stump, heteroclisis should not be seen as arbitrary and lexically stipulated, but as structured and systematic. Nevertheless, although for Stump the phenomenon is mainly aligned with some morphosyntactic feature distinction, for Maiden () it may have an original trigger other than morphology (e.g. phonology), but it is closely associated with the purely morphological properties of a language, namely patterns of stem allomorphy and other major organizational characteristics of paradigms. More particularly, Maiden investigates the heteroclitic paradigms of two verbs, a coase ‘to sew’ and a țese ‘to weave’, drawn from certain Romanian dialects (mostly from Oltenian of south-western Romania), which, in a number of cells, have lost their third conjugation inflectional characteristics in favor of those of the first conjugation. He shows that intra-paradigmatic diffusion of a conjugation-class change has been prevented from spreading to the entire inflection of these verbs, because of an idiosyncratic morphomic pattern of distribution reflecting a formal opposition between the stem allomorphs used in preterite, past participle, old conditional, and pluperfect, jointly, and those in the other paradigms (present, imperfect, subjunctive, imperative, infinitive, gerund). He concludes that the prediction that heteroclisis follows stem alternation, without morpho-syntactic conditioning, is worth exploring (Maiden : ). Along these lines, I intend to show that heteroclisis in the Greek varieties is morphologically motivated and correlates with stem allomorphy.
Consider, for instance, in both Lesbian and Aivaliot, the two dialects mentioned above, the masculine nouns in ‑is11 (/i/ is phonologically deleted in unstressed position, see footnote ) which display inflectional variation in the plural number, where there are two alternating forms, one inflected according to the original ICII and an innovative form inflected according to ICI. For an illustration, take the inflectional paradigms of the nouns karv(u)ɲár(i)s ‘coal man’ and dzubáɲ(i)s ‘shepherd’:12

10 This claim has been formulated on the basis of a feature scheme proposed by Benveniste (), regarding person oppositions in terms of the features +/personal and +/subjective.
11 As shown by Ralli, Melissaropoulou, and Tsiamas (), with the exception of nouns ending in the latinate suffix ‑ari(s), nouns which transparently bear a derivational suffix are exempted from heteroclisis.
12 The underlying stem allomorphs (before the application of the dialectal phonological law of /i/ and /u/ deletion in unstressed position, see footnote ) for both nouns are karvuniari ~ karvuniariδ- and dzubaɲi ~ dzubaɲiδ-. The fact that the stem allomorphs contain a final /i/ which is phonologically deleted in unstressed position is, among other things, proved by the palatalization of the nasal /n/, which always occurs in front of /i/.


     ()



a. karv(u)ɲár(i)s b. dzubáɲ(i)s ICII ! ICI ICII ! ICI //. karvɲár-s dzubáɲ-s . karvɲár-ø dzubáɲ-ø //. karvɲárδ-is karvɲar-í dzubáɲδ-is dzubaɲ-í .13 —— ——

where karvɲar- and dzubaɲ- are the surface stem allomorphs in the singular number and karvɲarδ- and dzubaɲδ- those used in the plural. As () shows, heteroclisis exists in the most marked number, that is, in the plural, where the extended allomorphic form, that is, the stem in ‑δ-, tends to be replaced by the shorter allomorph, that is, the stem without ‑δ-. Allomorphic reduction triggers a shift from ICII to ICI, that is, a transfer to the inflection class of nouns characterized by the absence of allomorphic variation. A change of inflection class also instigates a stress shift from the penultimate to the final syllable, driven by a need to have distinct forms in singular and plural. In fact, if stress had remained on the penultimate syllable (e.g. *karvɲár-i, *dzubáɲ-i), then /i/ would have been deleted in unstressed position (footnote ) and the two output forms would overlap, that is, the existing . forms karvɲár/dzubáɲ and the hypothetical //. form (also karvɲár/dzubáɲ). Crucially, the presence of heteroclisis in the plural of Aivaliot nouns conforms to Stump’s (b: ) claim that when two inflection classes are involved in heteroclisis, the ‘intrusive’ class is generally expected to occupy the most marked set of cells (in our case, the plural number). Nevertheless, it also confirms Maiden’s assertion that heteroclisis is morphologically conditioned and related to stem allomorphy, since reduction of stem allomorphy in the Lesbian/Aivaliot masculine nouns in ‑is may determine an inflection class (in our case ICI) as more privileged over the other and define it as the intruder.
Stem allomorphy is particularly heavy in Greek and its dialects, due to the long history of the language, and it is one of the decisive factors for assigning both nouns and verbs to inflection classes.14 As noted by Ralli (), the property of many Greek nouns of having distinct allomorphs in the singular and the plural may also act as a factor blocking innovation, probably because allomorphy preserves traces of the old inflection. Thus, in the examples depicted in (), it may explain the resistance to an overwhelming prevalence of the ICI form and the long coexistence, in the plural number, of variants (the first attestation is from the seventeenth century), that is, the older ICII forms and the innovative ICI ones.

... Inflectional variation and language contact

In this section, Cappadocian, another Asia Minor dialect, exemplifies how a language-contact situation may trigger or enhance inflectional variation. Cappadocian came under Turkish influence following the Seljuk invasion in the eleventh century, and the subsequent

13 The overt form of the genitive plural has been lost from most dialectal nouns.
14 According to Ralli (, ), Greek nouns are distributed over eight inflection classes on the basis of their gender values and the presence or absence of stem allomorphy. For verbs, see footnote .

conquest of Asia Minor by the Ottoman Turks in the fourteenth century (see, among others, Dawkins  and Janse forthcoming). Consider the following sample of Southeast Cappadocian nominal inflection (b), compared to the corresponding Standard Modern Greek (SMG) forms (a), taken from Janse (forthcoming) and Ralli ():

()                   a. SMG                      b. Southeast Cappadocian15
                     Singular    Plural          Singular                   Plural

    fito ‘plant’
    NOM/ACC/VOC      fit-o       fit-a           fito-ø                     fit-a / fit-ja / fito-ja
    GEN              fit-u       fit-on          fit-u / fit-ju / fito-ju   fit-u / fit-ju / fito-ju

    jineka ‘woman’
    NOM/ACC/VOC      jineka-ø    jinek-es        neka-ø                     neka-ja / nek-es
    GEN              jineka-s    jinek-on        neka-ju                    neka-ju / nek-ez-ju

    anθropos ‘man’
    NOM              anθrop-os   anθrop-i        atropos-ø                  atropoz-ja
    ACC              anθrop-o    anθrop-us       atropos-ø                  atropoz-ja
    GEN              anθróp-u    anθrop-on       atropoz-ju                 atropoz-ja-ju

As depicted in (b), Cappadocian nouns display a high degree of variation. More particularly, the features of plural and genitive are not fused together under the usual portmanteau morpheme ‑on, as in SMG (a) but are realized by distinct markers, which, in some cases, are added to the base, one after the other, as for instance in nek-ez-ju ‘woman--, women’ and atropoz-ja-ju ‘man--, men’. To be more specific, in (a) the plurals of ‘plant’, ‘woman’, and ‘man’ are obviously built on the combination of a stem ( fit-, jinek-, and anθrop-) and a portmanteau inflectional suffix expressing the features of case and number. By contrast, in (b), there is a variety of forms ranging from the original fit-a ‘plants’ and nek-es ‘women’ to the innovative fit-ja, fito-ja, fit-ju, fito-ju, neka-ja, neka-ju, nekez-ju, atropoz-ja, atropoz-ja-ju, depending on the case. There seem to be three major points of interest. First, in Cappadocian inflection, grammatical gender has lost its formal distinction in masculine, feminine, and neuter values—at least in most nominal paradigms—in favor of the neuter form. This change has facilitated the spread of a plural marker ‑ja, originating from a reanalysis of Greek neuter nouns in ‑i (e.g. mat(i)16 ‘eye.’, matj-a ‘eye.’, reanalyzed as mat-ja), to nouns such as atropos ‘man’ and neka ‘woman’ (for this change see also Janse  and Ralli ). Although the original gender distinctions have not completely faded in Cappadocian (see Dawkins  and Karatsareas , ), the demise of morphologically discrete gender values brings Cappadocian closer to Turkish, where gender is not distinguished grammatically. Second, the Greek strategy to build inflected forms on the basis of stems (Ralli ) seems to have lost its pervasive application, as revealed by the existence of plural forms such as fito-ja, neka-ja, and 15 Southeast Cappadocia (the towns of Semendere and Ulağaç) is the area with the most significant linguistic changes. 
16 As in Northern Greek Dialects, the Cappadocian unstressed /i/ surfaces in stressed position.

atropoz-ja, which are based on the entire word form, as used in the nominative singular. This change is also reminiscent of Turkish, as exhibited in (), where the inflection of the word ‘man’ is given in both Turkish and Southeast Cappadocian.

()           a. Turkish      b. Southeast Cappadocian
    NOM.SG   adam            atropos
    GEN.SG   adam-ın         atropoz-ju
    NOM.PL   adam-lar        atropoz-ja
    GEN.PL   adam-lar-ın     atropoz-ja-ju

Third, there are traces of an innovative agglutination pattern [stem-PL-GEN], as shown by the plural forms atropoz-ja-ju and nek-ez-ju, where the original fusional character of inflection (portmanteau endings in ‑i and ‑es, respectively) has been replaced by an agglutinative one, typical of Turkish (see (a) above), in which the plural marker (‑ja- or ‑ez-) is followed by a distinct genitive case marker ‑ju. This change suggests a restructuring of the inflectional endings: ‑ju must have been deprived of its original singular number value, since it appears to be preceded by a plural marker (‑ja), and ‑ja itself could no longer depict its nominative/accusative/vocative syncretic case values (see (a)), because it is followed by the genitive marker ‑ju. One cannot conclude, however, that the entire Cappadocian nominal system has turned agglutinative, since there are many nouns which do not show any agglutination. What (b) illustrates instead is a hint of a possible ongoing change in the late nineteenth and twentieth centuries, where many innovative forms coexist with previously existing ones. Crucially, fusion still appears in some Cappadocian nouns, as shown by the inflected form fita (b), which combines the bound stem form fit- with the portmanteau ending ‑a. On the basis of the above observations, Cappadocian could thus be considered a typical example of a language where variation may be interpreted by appealing to a language-external cause, that is, contact. Nevertheless, according to Karatsareas (), the change in gender and in Cappadocian nominal inflection in general goes back to a common Greek-based linguistic ancestor of most Asia Minor varieties.
He claims that it had started for language-internal reasons but was accelerated by heavy contact with Turkish.17 Regardless of the assumed extent of Turkish influence on Cappadocian inflection, that is, whether it instigated or merely enhanced change, one should not neglect the important role of language contact as a factor in the existence of variation.

.. Variation in derivation

It is often the case that a derivational process for creating words of a specific category may exploit more than one formative, for various reasons. For instance, Eggert () investigates the high degree of variation exhibited in the construction of French inhabitant

17 Karatsareas does not consider contact with Turkish as the initiating trigger of changes, but sees it as a catalyst that pushed ahead developments already under way. He claims that the earliest manifestations of these developments predate the Turkish invasion and go back to the Medieval Asia Minor Koiné, the common ancestor of the Asia Minor dialects (among which are Pontic, Pharasiot, and Cappadocian).

names, where about ten different suffixes are productively used. He concludes that the selection of a particular suffix is not made at random but is principally related to the formal shape of the base/stem. He convincingly shows that, although not all possible cases can be predicted, one can delineate the use of certain suffixes to the exclusion of others. In §..., I examine a similar case in Greek regarding the formation of feminine nouns denoting professions, where different suffixes may be employed, the specific choice depending on the type of base they combine with. Obviously, this is a different type of variation from the one discussed in §... In inflection, the same stem may take different inflectional suffixes, resulting in overabundance in Thornton’s () terms, while in derivation there is variation in the strategy itself.

... Producing feminine professional nouns

As reported by Triantaphyllides (: ), the Greek suffixes for creating feminine professional nouns are chiefly the following: the agentive ‑tria (xoref-tria ‘female dancer’) and its low-register form ‑tra, which often bears a pejorative connotation (e.g. xartopex-tra ‘female card player’); ‑ina18 (e.g. δikast-ina ‘female judge’); ‑isa (majir-isa ‘female cook’); and ‑u (e.g. taksidz-u ‘female taxi driver’). Competing with the use of an overt derivational suffix, the language has two more operations for building the feminine form of professional nouns: (a) conversion, according to which masculine stems become feminine (e.g. δaskal-os ‘teacher’ vs. δaskala19 ‘female teacher’); most often, converted nouns do not display a different ending from the original masculine forms, but they are assigned the feminine gender with the help of a feminine article (e.g. o. iθopios ‘the actor’ vs. i. iθopios ‘the actress’); (b) phrasal-compound formation20 by pre-posing the feminine word jineka ‘woman’ to masculine nouns (e.g. jineka vuleftis ‘woman deputy’). Conversion, with or without the assistance of the feminine article, is a native strategy, as shown by a plethora of examples throughout the long history of Greek (Triantaphyllides : –). The second operation, entailing the application of phrasal compounding, implies the combination of two independent words, as opposed to the combination of stems, or of a stem and a word, as is the case with typical Greek compounds (Ralli a). This last strategy is relatively recent: it was imported at the beginning of the last century from languages Greek has been in contact with, that is, French, English, or Italian (e.g. French femme auteur ‘woman author’, English woman deputy, Italian donna poliziotto ‘woman policeman’) (see Anastasiadis-Symeonidis  for details).
The diversity of derivational operations makes the construction of Greek feminine professional nouns a very interesting topic, since the selection of a particular operation is largely determined by linguistic, but also extralinguistic, constraints. Crucially, the appearance of a specific suffix depends, to a large extent, on the properties of the base, and very few cases can be characterized as unpredictable. More specifically, the grammatical

18 ‑ina is of Italian origin. It is not exclusively used for building feminine professional nouns, but also appears in the formation of nouns denoting female animals (e.g. provatina ‘female sheep’ < provato ‘sheep’).
19 The ‑a of δaskala is not a derivational suffix but part of the feminine stem (Ralli ).
20 For detailed information about phrasal compounds in Greek, see Ralli (a, b).

category of the stem divides the above-mentioned suffixes into two major categories: suffixes attached to nouns and suffixes combined with verbs. The suffixes ‑ina, ‑isa, and ‑u belong to the first category (),21 while ‑tria and ‑tra are part of the second ().

()  a. efoplisti(s)22 ‘shipowner’        >  efoplist-ina ‘woman shipowner’
    b. astinomik(os) ‘policeman’         >  astinomik-ina ‘policewoman’
    c. mavraγoriti(s) ‘black marketeer’  >  mavraγorit-isa ‘woman black marketeer’
    d. taverniari(s) ‘tavern owner’      >  taverniar-isa ‘woman tavern owner’
    e. taksidzi(s) ‘taxi driver’         >  taksidz-u ‘woman taxi driver’
    f. peripter(as) ‘kiosk owner’        >  peripter-u ‘woman kiosk owner’

()  a. δiorθon(o) ‘to correct’     >  δiorθο-ti(s) ‘corrector’      δiorθo-tria ‘woman corrector’
    b. erminev(o) ‘to perform’     >  erminef-ti(s) ‘performer’     erminef-tria ‘woman performer’
    c. xartopez(o) ‘to play cards’ >  xartopex-ti(s) ‘card player’  xartopex-tra ‘woman card player’
    d. rav(o) ‘to sew’             >  raf-ti(s) ‘tailor’            raf-tra ‘woman tailor’

In fact, as recently shown by Koutsoukos and Pavlakou (: –), ‑tria is directly attached to verbal bases in order to produce feminine agentive nouns, without the intermediary derivation of masculine nouns (nouns ending in the masculine suffix ‑ti(s)). Their main argument is that feminine nouns do not always have the same semantic properties as their masculine counterparts (e.g. enisxi-tis [−human] ‘amplifier’ < enisxi(o) ‘to boost’ vs. enisxi-tria [+human] ‘woman who boosts’) and that there are feminine nouns in ‑tria without a masculine counterpart (e.g. plektria ‘woman who knits’ vs. *plektis). Note that a subdivision within the group of feminine nouns with an overt suffix is also possible, this time on structural criteria, since ‑ina, ‑isa, and ‑u seem to be sensitive to the word-internal structure of masculine professional nouns. Systematically, ‑u selects nouns bearing the native professional suffix ‑a(s) in their masculine form (e.g. milon‑a(s) ‘miller’ vs. milon‑u ‘woman miller’) or the Turkish-based suffixes ‑dzi(s) (e.g. gafa-dzi(s) ‘blunderer’ vs. gafadz-u ‘woman blunderer’) and ‑li(s) (e.g. bela‑li(s) ‘troublemaker’

21 There are some extremely rare exceptions, where a nominal base is not presupposed:
(i) nixtoperpat(o) ‘to walk at night’ > nixtoperpat-u ‘woman night walker’ (cf. the masculine noun nixtoperpatiti(s))
22 To aid clarity, overt inflectional endings are included in parentheses and hyphens divide the stem from the derivational suffix.

vs. belal‑u ‘woman troublemaker’). Similarly, ‑isa shows a preference for bases exhibiting the native agentive ‑ti(s) (e.g. man‑ti(s) ‘clairvoyant’ vs. the feminine mant‑isa) or the latinate ‑ari(s) (e.g. ajelaδ-ari(s) ‘cowboy’ vs. ajelaδar‑isa ‘cowgirl’). As for ‑ina, it shows no particular preference for suffixed bases, but in some cases, formations in ‑ina are interchangeable with those in ‑isa (e.g. nomarxi(s) ‘prefect’ vs. nomarx-ina/nomarx-isa ‘woman prefect’). It is worth noting that in Greek derivation, morphological variation may persist because of sociolinguistic factors. It is often the case that innovative forms prevail in informal linguistic situations, while a predilection for an ancient-like style of language favors older forms as more prestigious than the commonly used informal ones. For instance, the conversion strategy with the use of the feminine article (e.g. i. vuleftís ‘the woman deputy’) often appears in formal registers, while the common form with the ‑ina suffix (vuleftina) is rather associated with an informal language style. As for the relatively recent phrasal compound jineka vuleftis ‘woman deputy’, it competes with the converted form in the same contexts (i. vuleftís), but it tends to become prevalent due to the increasing influence of English. As already mentioned in §., space limitations do not allow me to give a detailed account of the sociolinguistic triggers of morphological variation and their impact on the use of variable forms.

... Derivational variation and language contact

In what follows, I offer evidence that language-internal tendencies may create or constrain variation among innovative derivative forms introduced by language contact. To this end, I investigate the adoption of Turkish verbs in Aivaliot, which are integrated into Aivaliot morphology as loanblends (Haugen ), in that they contain a part copied from Turkish and a Greek part consisting of, optionally, an integrating element ‑iz- and the person/number inflectional ending. Interestingly, the integrating element is nothing but the Greek verbalizer ‑iz-, which productively derives verbs from native nouns. Its use or non-use distributes verbal loans into two groups, each belonging to a distinct inflection class (IC): those with the verbalizer ‑iz- inflect according to ICI (), while those without the verbalizer () belong to ICII. Consider the following examples:

()      Aivaliot                                Turkish                                     Greek/Aivaliot verbalizer   Greek/Aivaliot inflection
    a.  burdiz(u) ‘to twist’                    bur(mak)23 ‘to twist’                       -iz-                        -u24
    b.  kudurdiz(u) ‘to be particularly active’ kudur(mak) ‘to go mad’
    c.  daldiz(u) ‘to be absent-minded’         dal(mak) ‘to dive/plunge/be absent-minded’

23 -mAk is the Turkish infinitival marker.
24 In Modern Greek, there are no overtly expressed infinitives. Citation forms are given in the first person singular of the present tense.

     ()



a. Aivaliot katsird(o) to escape

Turkish Greek/Aivaliot verbalizer Greek/Aivaliot inflection kaçır(mak) ø -o. to take away/kidnap

b. savurd(o) to throw

savur(mak) to throw

c. axtard(o) aktar(mak) to overturn to transfer/mix Interestingly, there are also several alternating types, suggesting a random selection between the two integration strategies, as () illustrates: () Aivaliot ICI ICII axtardiz(u) / axtard(o) to overturn sakindiz(u) / sakind(o) to move aside, avoid psxurdiz(u) / psxurd(o) to sprinkle, spray

Turkish aktar(mak) to transfer/mix sakın(mak) to avoid püskürt(mek)

Along the lines of Ralli (a), I suggest that variation in the integration strategies, that is, with or without the use of an integrating element, is due to the interference of some basic morphological properties of the recipient language, that is, Greek/Aivaliot, which govern and constrain the shape of the borrowed forms. As commonly accepted, lexical borrowing with respect to verbs is based on the third person singular (Matras : ). Thus, the borrowed Turkish form in Aivaliot is that in ‑DI, as depicted in (): ()

Turkish past tense of burmak ‘to twist’   →   Aivaliot formation burd(i)-iz-u
    1SG  bur-dı-m
    2SG  bur-dı-n
    3SG  bur-dı-ø
    1PL  bur-dı-k
    2PL  bur-dı-nız
    3PL  bur-dı-lar

where bur- is the root, DI marks the past tense, and the suffixes following DI are the personal endings. The choice of the third person singular is not surprising: it is widely observed in borrowing across languages, and, in this particular case, it is favored by the absence in Aivaliot (and generally in Greek) of an overtly expressed infinitival form. What is interesting, though, is the reanalysis of the fully inflected third person singular type, which has turned into a non-inflected and non-tensed stem in order to be combined with the Aivaliot inflectional endings, as well as the exclusion of the present tense form and the adoption of the past tense (aorist) one. As proposed by Ralli (a), the particular reanalysis has occurred

because of the requirements of Greek morphology to form words by combining stems, that is, bound elements, with inflectional endings. As for the choice of the particular stem, it is due to the fact that deverbal derivation in Greek is usually based on the so-called ‘aorist stem’, that is, on the stem allomorph which is used in the context of past tense and perfective aspect. Loan-verb formation does not escape this property, since it also belongs to derivation. An important question which arises now is why the two integration strategies, that is, with or without the integrating element, occur side by side, and often alternate with respect to the same verb. Again, the explanation is found in the morphological properties of the recipient language. As already mentioned, the examples in () differ from those in () in that they lack the verbalizer ‑iz- and inflect according to ICII (the verbs in () belong to ICI). Crucially, Greek verbal inflection classes are distinguished only in the present and imperfect tenses, while there is no formal difference in the past perfective context (aorist tense). In the same way, verbs bearing the verbalizer ‑iz- (ICI) and those without it (ICII) display the same stem-final vowel /i/ in the aorist tense, as revealed by the comparison of (a) and (b):

()  a. ICI   xoriz-i ‘(s)he dances’       Aorist (past perfective): xori-se ‘(s)he danced’
    b. ICII  ektim-a ‘(s)he estimates’    Aorist (past perfective): ektimi-se ‘(s)he estimated’

Since verb-borrowing from Turkish is based on the past tense, where the Greek stem allomorphs are formally identical as far as the stem-final vowel is concerned (at least with respect to ICI verbs in ‑iz- and those of ICII), the loanblend is ambiguous for inflection class. Therefore, both strategies for marking the other tenses—with or without an integrating element—are equally compatible and can be used by Aivaliot speakers.25

.. Variation in compounding

Compounding is a particularly interesting domain because it exhibits structural variation, that is, variation in the way the two constituents are combined. In what follows, I show two cases of compound variation, triggered by language-internal reasons and enhanced by contact. Both cases are related to the notion of headedness, which plays a crucial role in compounding, since the head transfers its category and other morphosyntactic and semantic properties to the compound as a whole (Scalise and Fábregas ), and the presence or absence of a head constitutes the distinction between endocentric and exocentric compounds.

... Fluctuating between endocentric and exocentric compounds

The Greek language is particularly rich in compounds, endocentric and exocentric, root and synthetic, determinative and coordinative (Ralli a). Compounds are morphologically built. The main characteristics are single stress (compounds are phonological words),

25 In cases where only one strategy is adopted, there is a slight preference for the use of the integrating element. This may also be due to language-internal reasons, since verbs of ICI are more productively built than those of ICII.

stem constituency,26 right-headedness, and the compulsory presence of an internal compound marker ‑o- linking the two constituents. Compounds containing a dependency relation (modification, attribution, or complementation) between their constituents display a particularly rigid order, according to which the head follows the non-head. However, a handful of examples seem to contradict this inflexible order, as () illustrates:

()

    a. kefaloponos   lit. head ache   ‘headache’      versus   b. pon-okefalos   lit. ache head
       lemoponos     lit. neck ache   ‘neck ache’              ponolemos        lit. ache neck
       oδondoponos   lit. tooth ache  ‘toothache’              ponoδondos       lit. ache tooth
       karδioxtipi   lit. heart beat  ‘heartbeat’              xtipokarδi       lit. beat heart

In fact, the examples of the first column (a) freely alternate with those of the second (b), and there is no semantic difference between the two. In an effort to explain this rare variation, Ralli (a) proposes that the examples in (a) are typical endocentric formations, created according to the common N N pattern, where the first noun modifies the second. For example, kefaloponos is the ache (noun pon-os ‘ache’) of the head (the noun stem kefal-), and the interpolated ‑o- between the two constituents is the compound marker. By contrast, the examples of (b) are exocentric formations, built analogically to an archaic V N compound pattern which still subsists today, although not productively used, according to which a first verbal constituent (e.g. the verbal stem pon- ‘to ache’) is followed by its complement (the noun stem kefal-).27 Being exocentric, the formations of (b) are combined with a zero derivational suffix which transforms the structure into a noun (e.g. [[[V-o-N]V-ø]N-os]N)—the final ‑os (similarly to (a)) being the case (nominative)-number (singular) inflectional ending.28 The hypothesis about the exocentric V N structure of (b) is supported by the fact that pon- and xtip- are two of the rare Greek cases which share a common stem in both their noun and verb realizations. Compare, for instance, the inflected types of the first person singular of the present tense xtip-o ‘I beat’

26 The main morphological patterns of Greek compounds are [stem stem] (e.g. [[nixt-o-luluδ]-o] ‘night flower’) and [stem word] (e.g. [domat-o-salata] ‘tomato salad’). See Ralli (a) for more details.
27 V N was particularly productive in Ancient Greek, while today, V N exocentric formations are either relics or analogically produced. For example, [[[misV-o-jinN]-i]N-s] ‘misogynist’ is an ancient formation, while [[[xasV-o-merN]-i]N-s] ‘one who loses (xas-) his day (mer-), loafer’ is a modern creation.
28 According to Ralli (a) and Ralli and Andreou (), exocentric formations are also headed, but they differ from endocentric ones in that their head is a derivational suffix located outside the combination of the main compound constituents. This suffix may be overtly realized, as in [[[misV-o-jinN]-i]N-s] ‘misogynist’, or non-realized, as in [[[ponV-o-kefalN]-ø]N-os] lit. ache head ‘headache’.


and pon-o ‘I ache’ with the corresponding nouns in their nominative singular form, pon-os ‘ache’ and xtip-os ‘beat’. The morphological variation described above is due to language-internal causes: it results from the categorial ambiguity of stems such as pon- and xtip-, and from the availability of two compound structures, the very productive N N pattern and the less productive but still available V N pattern. For this type of variation, speakers show no particular preference for either of the two structures. However, the scarcity of cases where verb stems coincide with nominal ones in Greek, and the rarity of exocentric compounds based on V N combinations, make alternations such as those of () particularly unusual.

... Compound variation and language contact

Compared to inflection and derivation, compounding is the domain least affected in language-contact situations. Dialectal evidence shows that the Greek property of right-headedness strongly prevails even in those dialects affected by languages with left-headed compounding, as, for instance, in Heptanesian, the dialect of the Ionian islands, which has been under Italo-Romance influence due to Venetian domination of the islands for about two to four centuries, depending on the island. However, there are some sporadic instances of left-headed N N formations, most of which are found in Greko, the Greek dialectal variety spoken in Southern Italy (Grekanico) in the area of Calabria. Consider the following examples from Greko, SMG, and Aivaliot:

()  a. Greko                          b. SMG                            c. Aivaliot
    fiḍḍámbelo lit. leaf vine         ambelόfilo lit. vine leaf         abilόflu lit. vine leaf
    klonόsparto lit. twig crop        spartόklono lit. crop twig        spartόklunu lit. crop twig
    ššilopόtamo lit. wood river       potamόksilo lit. river wood       putamόkslu lit. river wood
    sporomáratho lit. seed fennel     maraθόsporos lit. fennel seed     maraθόspurus lit. fennel seed

The left-headed types of (a) are particularly striking, since Greko, being a dialect of Greek origin, is not expected to exhibit left-headed compounds. In fact, the corresponding formations in SMG and the other dialects, for example Aivaliot, are right-headed. A plausible hypothesis would be to assume that Greko has been influenced by Romance, that is, Italian and the local Romance varieties, where N N compounds are mainly left-headed, as depicted by the examples of (), taken from Scalise ():

()  N N Italian compounds
    scuola guida ‘driver school’         <  scuola ‘school’  +  guida ‘driver’
    capostazione ‘station master’        <  capo ‘master’    +  stazione ‘station’
    pescecane (lit. fish dog) ‘shark’    <  pesce ‘fish’     +  cane ‘dog’

Crucially though, left-headedness does not appear in Griko, the Salento variety of Grekanico, and it has not completely replaced right-headedness in Greko compounding, since there are still occurrences with the head on the right. In some scarce examples, the same noun may be located on the left or on the right, depending on the example one deals with. For instance, skordófiḍḍo ‘garlic leaf’ and fiḍḍámbelo ‘vine leaf’ have the same head, fíḍḍo, but in the case of skordófiḍḍo, fíḍḍo is located on the right, whereas fiḍḍámbelo exhibits left-headedness. Instead of postulating a change introduced by contact with Romance, I side with Andreou () in assuming that in Greko, left-headedness has been inherited from Ancient Greek, where it was a structural possibility, although less productive than right-headedness. For instance, examples of left-headed N N compounds can be found in Ancient Greek writers such as Aeschylus (e.g. θεοινος /theoinos/ Fr.  ‘God of wine’ < the- ‘God’ + oinos ‘wine’). On the basis of this hypothesis, one can assume that in Greko, the phenomenon has been enhanced through heavy contact with Romance, contrary to what happened in SMG and Aivaliot, where Romance had only a small influence. In other words, intense language contact has fostered and strengthened compound variation which was created for language-internal reasons. It is significant that rare examples of left-headed compounds can also be detected in some other Modern Greek dialects which still keep a number of archaic features and have been under Romance influence, as, for instance, in Cypriot (e.g. rizafti lit. ear’s (afti) root (riz-)), a dialect affected by both French and Venetian.29 The claim that Romance did not cause the introduction of left-headedness in these dialects is further supported by the fact that the change in the position of the head constitutes only a partial structural change in compounding, since the general structure of Greko compounds remains Greek: as opposed to Italian N N compounds, which are composed of two independent words (), Greko compounding conforms to Greek compounding patterns in that it combines stems and shows a compound marker between the compound constituents. Additional proof for the claim that left-headed N N compounding may originate from Ancient Greek comes from Griko, where there are no left-headed compounds, although Griko has also been heavily affected by Romance. As shown by Rohlfs () and Karanastasis (), Greko has been more isolated than Griko, and hence it has kept archaic Greek features, among which, I suppose, are the left-headed N N structures.

. Summary

In this chapter, I have offered a panorama of case studies dealing with variation in inflection, derivation, and compounding. I have concentrated on monolingual and dialectal data, and suggested that language-internal factors prevail in producing and constraining morphological variation. To this end, I have drawn evidence from Greek, a language that displays a considerable degree of variation, particularly dialectal, and has been sufficiently

29 The examples of this section are drawn from Andreou (), where more evidence is provided about Greko, Cypriot, and Ancient Greek.


studied on both diachronic and synchronic grounds. I have further investigated the emergence of variation in language-contact settings, namely, in those Greek dialects that have been affected by typologically different languages, such as agglutinative Turkish and semi-fusional Romance. I have argued that ongoing variation may pre-date contact, and thus it is hard to tell whether linguistic variation in contact settings is due to contact, to internal linguistic processes, or to both. Integrating the study of linguistic variation with work in theoretical morphology, dialectology, and contact-induced change is an ambitious goal. I hope to have provided hints for future research aiming to advance the study of morphological variation.

A The content of this chapter is the result of research that has been co-financed by the European Union (European Social Fund—ESF) and the Greek national funds through the Operational Program “Education and Lifelong Learning” of the National Strategic Reference Framework (NSRF)— Research Funding Program ARISTEIA (Excellence). Investing in knowledge society through the European Social Fund (–, Program MORILAN Morphology in language contact situations: Greek dialects in contact with Turkish and Italian, URL: www.morilan.upatras.gr).

......................................................................................................................

Morphological theory and first language acquisition
......................................................................................................................

Elma Blom

. Introduction

.................................................................................................................................. This chapter describes how observations on child language development can inform morphological theory. Child data can be revealing about language structure (Chomsky , ), but at the same time, these data “can at best provide supporting evidence only” (Sinclair ). The reason is that a myriad of factors constrain language development (Bates ).

The focus of this chapter is on complex, inflected words. Among the different morphological processes, the development of inflection is by far the most researched. Because our understanding of the factors that influence the development of inflection is reasonably advanced, child data on inflection can inform morphological theory. More specifically, child data on inflection have been used to address the question of whether or not morphologically complex words are stored in full. This theoretical debate—famously known as the Past Tense Debate—revolves around the question of whether multimorphemic words are stored as units, similar to monomorphemic words (single-route processing), or are computed from their parts (dual-route processing). It addresses a fundamental issue that is at the heart of morphological theory.

In §., I will show that data on child language often cannot be taken at face value and should be interpreted with care. Although caution is required in interpreting children’s errors with inflection, their errors and developmental paths have been a rich source of information for morphological theory, as will be demonstrated in the remainder of this chapter. In §., the two opposing positions in the Past Tense Debate—the two views on the processing of morphologically complex forms—will be explained, as well as the relevance of this debate for morphological theory.
Sections . and . focus on two different developmental errors that featured in the Past Tense Debate: the failure to realize a morpheme in obligatory contexts (omission) and the use of a morpheme where it should not be used (commission). The Past Tense Debate has


mainly revolved around the English past tense and noun plurals in typically developing children. English is a language with relatively little inflection, and it is therefore important to consider findings from languages with richer inflectional systems. Findings from crosslinguistic research are discussed in §.. Atypically developing populations are discussed in §.. The chapter closes with some concluding remarks and avenues for further research in §..

. Learning to use inflection: a multifactorial process

.................................................................................................................................. Inflection is an interface phenomenon that requires the interaction of morphology, phonology, syntax, and semantics. As a consequence, the development of inflection is influenced by development in these other domains. An omission error that has received much attention is the omission of tense-marking inflection in early child English, for example, He play football (Cazden ; Brown ). What may cause a child to use a bare verb stem instead of the correct inflected form? Suppose that He play football refers to an event in the past and that the child who uttered this sentence dropped the past tense ‑ed (and not the third person singular ‑s). Possibly, he does not know the rule for past tense inflection in English. In order to test this hypothesis, we conduct a small experiment with nonsense verbs. We show the child a picture of a man swinging an object and say “This is a man who knows how to rick (/rik/). He is ricking. He did the same thing yesterday. What did he do yesterday? Yesterday he ____.” (Berko ).1 The child responds with rick and we conclude that he indeed does not know the rule for regular past tense inflection.

The above situation seems a straightforward example of incomplete morphological development. However, child language research has indicated that there is a lot more to correctly using English tense inflection than acquiring morphological rules. For instance, children tend to drop verb inflection more frequently when utterances are longer and more complex (Song, Sundara, and Demuth ; Owen ). Thus, in a taxing task, a young child may drop inflection, even though he has some knowledge of the inflectional rule. This effect may be strongest for young children who are inexperienced and have limited cognitive resources.
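The elicitation procedure just quoted follows a fixed frame around the nonce verb. As a purely illustrative sketch, the frame can be turned into a prompt generator; the function and its assembly are mine, built from the example quoted above, and do not reproduce Berko's actual materials:

```python
def berko_prompt(verb):
    """Build a Berko-style past tense elicitation prompt for a nonce verb.

    The frame follows the example quoted in the text; the function itself
    is only an illustration of the elicitation procedure, not a research tool.
    """
    return (f"This is a man who knows how to {verb}. "
            f"He is {verb}ing. He did the same thing yesterday. "
            f"What did he do yesterday? Yesterday he ____.")

print(berko_prompt("rick"))
```

Any novel verb can be slotted into the frame, which is what makes the task a test of productive rule knowledge rather than of rote memory.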
In addition to this, specific phonological, syntactic, and semantic effects have been reported to have an impact on inflection development, as will be discussed below. Omission of inflection happens more frequently when the final consonant cluster is complex as in walked or jumps (Marshall and Van der Lely ; Song, Sundara, and Demuth ) and when the final sound of the verb stem resembles the past tense suffix as

1 Note that the children who participated in Berko’s study with novel verbs were between the ages of four and seven. More recent research indicates that novel-verb experiments to test the productivity of inflection can be conducted with children as young as two years of age (Theakston, Lieven, and Tomasello ).

in ended (Berko ), showing that phonological factors are important. Also, children drop verb inflection more often when the verb is in medial position compared to sentence-final position (Hsieh, Leonard, and Swanson ; Dalal and Loeb ; Song, Sundara, and Demuth ), which may reflect the effects of limited time to plan and perform the articulatory gestures for verbs in medial position. In medial position, the time to perform the gestures involved in articulating inflection is constrained because other words follow the verb, in contrast to final position. Also, fewer words precede the verb in medial position, hence there is less time to prepare the articulation of inflection.

Other findings reveal contingencies between the verb form and the sentence subject. Children’s sentences with an uninflected verb often lack a subject, for example Play football (Sano and Hyams ), or contain subject case-marking errors, e.g. Him play football (Schütze and Wexler ). These properties—amongst others—have led researchers to the conclusion that children’s uninflected forms are infinitives instead of incorrect bare verbs with a dropped inflectional suffix (Wexler ; Blom ). Children’s frequent use of infinitives, in turn, has been attributed to syntactic maturation (Rizzi ; Wexler ) rather than a lack of morphological knowledge (Wexler ).2

A third set of factors is semantic in nature. In early child language, aspectual and modal distinctions determine the use of tense and agreement inflection (Hoekstra and Hyams ). These constraints could reflect input distributions (Freudenthal, Pine, and Gobet ), but semantic patterns in children’s use of inflection could also be modulated by conceptual development. Along these lines, Weist () argued that the acquisition of past tense markers parallels the development of the concept of time. Brown () pointed to the role of conceptual complexity.
For example, the third person singular ‑s in English may be acquired late because third person, singularity, and present tense are expressed simultaneously. Also, the occurrence of ‑s in other contexts—as a noun plural (books) or possessive (John’s book)—may add to the difficulties in mapping form and function.

In conclusion, understanding why a child fails to use inflectional morphology requires a multifactorial approach. First, in a developing system hardly anything can be assumed to be constant, and it is important to take this into account. Second, the correct use of morphology requires sufficient processing abilities, phonological abilities, articulatory planning, a matured syntactic system, and well-developed semantics. Difficulties in pinpointing morphological development complicate the use of child data to inform linguistic theory. However, this should not prevent us from confronting morphological theory with child language data, as I hope to show in the remainder of this chapter.3

2 But see Blom (), where it is argued that at least some of the incorrect bare verbs in early child English are due to either morphological or phonological development; Blom and Wijnen () make a similar claim for Dutch child language on the basis of longitudinal data.
3 Valuable research has also been done in the field of neural network modeling (Rumelhart and McClelland ), where language development is simulated. This line of research has played a major role in the Past Tense Debate (Pinker and Ullman ). See Elman et al. () for further information on neural network modeling and connectionism, also in relation to the dual–single route debate. An insightful overview of both neural network and language acquisition research is provided by Ambridge and Lieven ().


. Morphological processing: single or dual route

.................................................................................................................................. Probably the most famous debate in the field of morphology is the Past Tense Debate (Pinker and Ullman ). Although the English past tense has been the arena, the question that underlies the debate—are multimorphemic words stored as units or computed from their parts?—is relevant across languages and across morphological phenomena. Important for the purpose of this chapter is that child language data have featured prominently in the Past Tense Debate.

In the first part of this section, two opposing theoretical views will be described: single- and dual-route processing (see, for a more elaborate overview of theories on morphological processing, Gagné and Spalding, Chapter  this volume). In the second part of the section, it will be explained why single- versus dual-route processing is relevant to morphological theory. In terms of morphological processes, the focus of this and the following sections is on inflection—reflecting the bulk of relevant research. It has been argued that the same mechanisms that underlie inflection also underlie derivation, both from a single mechanism (Seidenberg and Gonnerman ) and from a dual mechanism perspective (Alegre and Gordon a; Clahsen, Sonnenstuhl, and Blevins ). However, to my knowledge, far less research on child first language acquisition has addressed the development of derivational morphology in light of the single–dual mechanism debate (see Clark  for an overview of derivational morphology in acquisition, and Mos  for a more recent example addressing the single and dual mechanism debate).

A common distinction in morphology is that between regular and irregular forms. Regular forms can easily be described by a rule, while this is much more difficult for irregular forms (though not impossible, see Chomsky and Halle ).
Take the English past tense. Regular past tense forms are forms where the verb stem is followed by the suffix ‑ed, as in look-looked, dance-danced, or watch-watched, while irregular past tense takes all kinds of forms, for example take-took, have-had, or catch-caught. Irregulars do display many small pockets of subregularity, though (Bybee and Slobin ; Carstairs-McCarthy a).

The regular–irregular distinction has led to the idea of two different parallel processing routes: regular forms are generated through grammar, while irregular forms are memorized and stored in our mental dictionary, the lexicon (Pinker and Prince ; Clahsen ; Pinker ). Dual-route processing is thought to work as follows. When a language user wants to use a past tense verb, (s)he first searches the lexicon for appropriate lexical entries. If the targeted form is not found, grammar steps in and leads to application of the proposed default rule. Grammar is seen as the exception module that provides an ‘Elsewhere rule’ (Pinker and Ullman : ). In the case of irregulars, the language user does not need to resort to this rule, unless the system breaks down or is still developing.

The contrasting single-route view holds that both regular and irregular forms are memorized in the lexicon (Bybee and Moder ; Rumelhart and McClelland ). All forms in the lexicon form associative networks (called neighborhoods or families).

The behavior of those forms is predicted from frequency, phonological neighborhoods, and semantic properties, because associations are based on form and/or meaning and become more ingrained as an effect of frequency of use. The single-route model assumes that both regular and irregular forms are subject to these effects and that the regular–irregular distinction is a continuum. Regular inflection typically applies to a large number (= high type frequency) of diverse verbs (= much schematicity). This contrasts with irregulars, which are usually organized in smaller and phonologically more constrained families than regulars (Bybee , ).

In terms of morphological theory, Pinker and Ullman () state that the dual mechanism approach is descended from generative, ‘lexicalist’ theories of morphology (see ten Hacken, Chapter  this volume, and Montermini, Chapter  this volume). Early lexicalist theories of morphology assume that there should be as little as possible in the lexicon (Halle ; Bresnan ; Lieber ; Di Sciullo and Williams ; Lieber ). This approach assumes that lexical entries are idiosyncratic, unpredictable, and cannot be captured by any rules. It is furthermore assumed that many morphological phenomena are neither arbitrary lists nor fully systematic and productive. Both assumptions are present in the dual mechanism model, where irregulars are memorized (lexical entries) and are part of an associative network that leads to subregularities, which account for the absence of complete systematicity and productivity regardless of semantic or phonological properties. Regulars, in contrast, are not stored in full because they can be generated or parsed by productive, combinatorial rules (grammar). The central argument for impoverished entry models is ‘conceptual simplicity’ (see, for a critical discussion of the impoverished entry models, Jackendoff , b).
Jackendoff () contrasted the impoverished entry model, where the lexicon lists as little as possible, with full entry models. Full entry models assume a maximally rich lexicon in the sense that all existing and acquired words are listed (Bybee and Moder ; Rumelhart and McClelland ; Bybee ; Booij a; Masini and Audring, Chapter  this volume; Jackendoff and Audring, Chapter  this volume), similar to the assumption of the single-route model of processing morphologically complex forms. Full entry models attribute a large role to the lexicon and do not assume a fundamental distinction between lexicon and grammar: complex and predictable regular words are stored in full, in addition to idiosyncratic words and root forms. Psycholinguistic arguments seem to be in favor of full entry models (Baayen ). Note that many such models (e.g. Booij a) also have a generative component for new forms, that is, schemas with variables (that replace the traditional rules). So anything can be stored, but not everything needs to be stored. In sum, a basic distinction is made between two views of processing morphologically complex, inflected forms. Dual-route processing is based on the theoretical assumption that lexical entries are impoverished. Inflected forms are generated with rules that are not part of the lexicon, but part of grammar, and lexicon and grammar are viewed as fundamentally different processing mechanisms. Single-route processing departs from the full lexical entry assumption and abandons the fundamental split between lexicon and grammar. It is relevant to note that more recent single-route models embrace a parallel dual processing route with simultaneous lookup and computation (see Gagné and Spalding, Chapter  this volume). However, to our knowledge there are no observations from first language acquisition that shed light on these more subtle theoretical distinctions.


. Omission of regular inflection in early child English

.................................................................................................................................. The question that now arises is: Do child data favor a single or a dual mechanism approach, and hence a model of language that prioritizes lookup or computation? A first set of findings has to do with the gradualness of development. According to single mechanism models, different acquisition levels may be found for different words or different allomorphs, because acquisition is largely determined by frequency distributions in the input. A dual-route model would predict a much less gradual development and, moreover, limited frequency effects: after a short learning period the rule is acquired and can be applied to any verb regardless of its lexical properties.

Regarding the English regular past tense, there is evidence for a gradual development. English children show a different developmental pattern for each of the three regular past tense allomorphs, for instance. Berko’s () famous Wug experiment revealed that English children more often omit the regular past tense allomorph when it has the form /Id/ rather than /t/ or /d/. The use of the three allomorphs is modulated by phonological factors: /Id/ is used after alveolar stops, /t/ after voiceless phonemes, while /d/ attaches to all other verbs. Comparable developmental asymmetries are found for the three English noun plural allomorphs /Iz/, /s/, and /z/. This variation between allomorphs points to a gradual development, and one of the factors at play could be input frequency.4 For instance, the ‘delay’ of /Id/ and /Iz/ could be an effect of the low frequency—both in terms of types and tokens—of /Id/ and /Iz/ compared to /t/, /d/, /s/, and /z/, because /Id/ and /Iz/ attach to a very restricted set of verbs and nouns and do not appear as often in the input the learner receives.
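The allomorph distribution just described (/Id/ after alveolar stops, /t/ after other voiceless phonemes, /d/ elsewhere) amounts to a small decision rule over the stem-final phoneme. A toy sketch, with a deliberately partial ASCII phoneme inventory of my own choosing:

```python
# Toy allomorph selector for the English regular past tense, following
# the distribution described in the text. The phoneme sets below are
# partial ASCII stand-ins (S = /ʃ/, tS = /tʃ/, T = /θ/), for illustration only.
ALVEOLAR_STOPS = {"t", "d"}
VOICELESS = {"p", "k", "f", "s", "S", "tS", "T"}

def past_allomorph(final_phoneme):
    if final_phoneme in ALVEOLAR_STOPS:
        return "Id"   # vowel epenthesis + /d/, as in 'ended'
    if final_phoneme in VOICELESS:
        return "t"    # as in 'looked'
    return "d"        # voiced default, as in 'played'

print(past_allomorph("d"))  # "Id", as in 'ended'
print(past_allomorph("k"))  # "t", as in 'looked'
print(past_allomorph("m"))  # "d", as in 'hummed'
```

The extra branch for /Id/ makes the epenthesis step visible: it is the only case in which material beyond the bare suffix consonant is inserted, which is one candidate explanation for its later acquisition.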
However, there could be another reason why children have more difficulties with these two allomorphs compared to the others: using /Id/ and /Iz/ requires children to apply the extra operation of vowel epenthesis (Berko ). This effect of phonological complexity would be compatible with both a single and a dual mechanism approach.

Other research reports effects of word (token) frequency on regular tense inflection (Oetting and Horohov ; Van der Lely and Ullman ; but see Song, Sundara, and Demuth , who did not find token frequency effects for third person singular ‑s). That is, children perform better with those inflected words that they hear often in their input. However, although the dual-route hypothesis assumes that regularly inflected forms are generated on the fly, token frequency effects are not necessarily incompatible with this hypothesis, in particular when they concern highly frequent forms and/or are found in an early stage of development: within the dual-route model it has been proposed that highly frequent regular forms are stored (Alegre and Gordon b). Furthermore, Pinker () suggested that children’s early paradigms are stored, until a child has had sufficient exposure to contrasting inflectional forms to decompose the inflected form into stem and

4 Note that type frequency is just one of many input-related factors that could predict developmental patterns of grammatical morphemes. In this respect, Köpcke’s () work is relevant: different measures (salience, type frequency, cue validity, iconicity) are included in the notion of cue strength, which he uses to account for patterns in the acquisition of English and German noun plurals.

affix (Booij a). Such early storage effects typically apply to frequent forms, which are picked up early by children. This is compatible with findings in the study conducted by Van der Lely and Ullman (), who found that token frequency only affected regular past tense use in the youngest age group in their study. However, given the vastness of human memory, one could ask why a child who has discovered the rule would throw away the stored information on individual complex words (Jackendoff ; Booij b; Masini and Audring, Chapter  this volume).

While token frequency effects do not necessarily rule out the dual-route hypothesis, it is also not the case that the absence of any frequency effects necessarily penalizes the single-route hypothesis. Frequency measures differ in their level of granularity, and it is possible that different frequency measures are relevant at younger ages than at older ages (Song, Sundara, and Demuth ; Lieven ). For instance, in my own research, effects of token frequency on tense-marking morphology turned out to be relevant in child second language learners of English, who started to learn English at the age of . on average (Blom, Paradis, and Sorenson Duncan ; Blom and Paradis ). Age may affect the availability of cognitive resources, which, in turn, could influence frequency effects. Older learners will have more information in their long-term memory than younger learners; more specifically, a larger vocabulary could have an impact on how children chunk the input, and this may co-determine which pieces of input information will be retained in working memory. Working memory capacity itself, and related functions such as attention, may also be subject to developmental changes. As a consequence, different kinds of frequency information could have an impact on long-term memory at different ages.
In this respect, it is relevant to mention that, apart from token frequency and possibly also type frequency (indexed by the allomorph effect discussed at the beginning of this section), studies have reported effects of phonotactic probability on children’s omission of regular past tense inflection. Phonotactic probability is a frequency measure below the word level, and thus more fine-grained. Effects of phonotactic probability will be discussed in §., because such effects are particularly prominent in children with atypical language development (Marshall and Van der Lely ; Leonard, Davis, and Deevy ).5
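Phonotactic probability in the bigram sense used here (the likelihood of one phoneme given the preceding one) can be estimated directly from counts over a phonemically transcribed corpus. A minimal sketch; the tiny corpus and its rough ASCII transcriptions are invented for illustration:

```python
from collections import Counter

# Toy phonemically transcribed corpus (each word is a tuple of phonemes).
# Transcriptions are rough ASCII stand-ins, invented for illustration only.
corpus = [("l", "U", "k", "t"), ("dZ", "V", "m", "p", "t"),
          ("p", "l", "eI", "d"), ("k", "O", "l", "d"), ("l", "U", "k", "s")]

# Count phoneme bigrams and the contexts (first members) they occur in.
bigrams = Counter()
contexts = Counter()
for word in corpus:
    for a, b in zip(word, word[1:]):
        bigrams[(a, b)] += 1
        contexts[a] += 1

def phonotactic_probability(prev, nxt):
    """Estimate P(nxt | prev) from the toy corpus counts."""
    if contexts[prev] == 0:
        return 0.0
    return bigrams[(prev, nxt)] / contexts[prev]

print(phonotactic_probability("k", "t"))  # probability of /t/ after /k/
```

A final cluster such as /kt/ in looked then has low probability whenever /t/ rarely follows /k/ in the corpus, which is how a sub-word frequency measure of this kind can single out exactly the clusters that children tend to omit.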

. Overregularization in early child English

.................................................................................................................................. Overregularization is the incorrect use of a regular inflectional affix, such as the past tense suffix, for example goed instead of went. As such, overregularization exemplifies a commission error. Overregularization demarcates the acquisition of regular inflection as a productive rule and has played an important role in explanations of the famous U-shaped

5 Phonotactic probability is the likelihood of phoneme X given phoneme Y; because this measure is highly dependent on phonological constraints, effects of frequency and phonological development might be difficult to tease apart. For instance, the final cluster /kt/ as in looked probably has low phonotactic probability in English, but the cluster is also marked in terms of the Sonority Sequencing Principle.


development of the production of English past tense forms (cf. Berko ; Brown ; Rumelhart and McClelland ; Marcus et al. ). Over the years, it turned out that there is not one single U-shaped development; instead, each verb has its own learning curve, with verbs showing higher or lower overregularization rates (Marcus et al. ; Maslen et al. ). Roughly, the trajectory is: (1) bare verb in the present tense (come); (2) first attempts to mark past tense with high-frequency irregulars (came); (3) a period of overgeneralization in which incorrect (comed) and correct forms (played) coexist and errors in the opposite direction (*pack instead of picked) are rare;6 (4) phasing out of overregularization errors. This developmental path has been explained in terms of two developing mechanisms: rote memory and rule-extraction (Brown ). Within the dual-route model these two mechanisms are instantiated by the lexicon versus the grammar, while within the single-route model both mechanisms are part of the lexicon.

Overregularization in the dual-route model is characterized as the “occasional breakdown of the system built to prevent the error” (Pinker : ). Earlier research on past tense (Marcus et al. ) and noun plurals (Marcus ) showing low error rates supports this idea. For instance, Marcus et al. found an error rate of  percent; the low percentage of errors might indicate that performance factors, rather than grammatical incompetence, influence the child’s behavior.

More recent findings from a study by Maratsos () suggest that the overall rate statistics presented in Marcus et al. () misrepresent the situation. It turns out that error rates vary considerably with age, and the low overall error rate is due in large part to the fact that the data in the study conducted by Marcus et al. are collapsed over age.
Also, the overall rates are dominated by a few high-frequency verbs; analyses that correct for this bias reveal higher error rates and a longer recovery period (Maratsos ). Maratsos’ conclusion is supported by the study of Maslen et al. (), based on dense longitudinal data. These studies furthermore indicate fairly high overregularization rates for individual verbs in spite of frequent exposure to the correct form. The gradual development observed in children’s overregularization errors may be more in line with a single-route model, where the U-shaped development reflects the competition between irregulars and regulars and their relative lexical strength. Lexical strength is the amount of processing effort it takes to retrieve a (verb) form; forms with strong lexical representations (e.g. frequent forms) require little processing effort, while forms with weak lexical representations (e.g. infrequent forms) require much processing effort. A highly frequent irregular verb with great lexical strength will be more resistant to the pressure of the regular past tense form than a lower-frequency irregular verb, but over time the strength of lower-frequency irregulars will increase, and so will their ability to resist.

Not only have the frequency and recovery of overregularization been debated, but also its onset. Proponents of a dual-route view have argued that the onset of overregularization is not related to changes in parental input such as an increase of regularly inflected forms (Marcus et al. ; Marcus ). Single-route proponents, on the other hand, have pointed to relationships between accumulated information in a child’s lexicon and the onset of overregularization (Plunkett and Marchman ; Marchman and Bates ; Dixon and Marchman ).
Lexicon–grammar correlations suggest that lexical and grammatical development are continuous and influence each other, in line with the single-route hypothesis and theories of morphology that do not assume a fundamental split between lexicon and grammar (Bybee ; Booij a).

6 Note that in period (3), some children also produce forms such as camed, which suggests that the form came itself is picked up as a stem (Clark ). This, in turn, may raise questions regarding the status of children’s first attempts to mark past tense with came.
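The notion of lexical strength invoked in this section lends itself to a simple numerical illustration. In the toy sketch below, a stored irregular resists the pressure of the regular pattern whenever its frequency-driven strength exceeds a threshold; the numbers, names, and the threshold rule are invented for illustration and have no empirical status:

```python
# Toy single-route-style competition between a stored irregular form and
# the regular pattern. "Strength" here is just token frequency; the
# regular pattern's pressure stands in for its type frequency.
# All values below are invented for illustration.
REGULAR_PRESSURE = 50.0

def produced_form(stem, irregular, irregular_tokens):
    """Return the past tense form a learner is predicted to produce."""
    lexical_strength = float(irregular_tokens)
    if lexical_strength >= REGULAR_PRESSURE:
        return irregular       # strong entry wins the competition
    return stem + "ed"         # weak entry loses: overregularization

print(produced_form("come", "came", irregular_tokens=500))  # frequent: 'came'
print(produced_form("creep", "crept", irregular_tokens=5))  # rare: 'creeped'
```

On this view the U-shaped curve falls out verb by verb: low-frequency irregulars succumb to the regular pattern early and recover only as their stored strength accumulates with exposure.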

. What about other languages?

.................................................................................................................................. Clahsen () points out that the assumed default forms for the English past tense and noun plurals are also the most frequent forms (= a majority default system) and that, in order to show that default rules are independent of frequency, it is important to investigate languages other than English to inform the single–dual mechanism debate. The relevant test case would be a minority-default system: in a dual mechanism approach, rule-like behavior is not contingent on frequency patterns, and a default can be defined even in terms of the least frequent pattern (Clahsen ). Conversely, a connectionist network was predicted to be unable to simulate people’s regularization of novel forms in a minority-default system. In this section, German and Dutch plurals are discussed, as well as Polish case and gender inflection.

The German noun plural system has six major plural allomorphs (Köpcke ). As children (like adults) prefer ‑s for unusual and noncanonical words (e.g. proper names, acronyms), it has been concluded that ‑s is the default even though it is the least frequent plural marker (Marcus et al. ; Clahsen et al. ; Clahsen ). However, Hahn and Nakisa () point out that ‑s does not apply equally to unusual and noncanonical words; for instance, for foreign borrowings the outcomes were less clear than for native words, which would be unexpected if ‑s were the default. Köpcke () highlights that German two-year-olds seem to overgeneralize zero marking and ‑en more often than ‑s (for similar observations, see Gawlitzek-Maiwald ), which is also not in harmony with the idea that ‑s is the default.

Other challenges come from systems where there is no clear default. For instance, Dutch has two plural suffixes and is rather ambivalent regarding which of the two would be the default (Pinker ; Keuleers et al. ).
Van Wijk () tested Dutch first language acquirers with a task modeled after the Wug test. She found different overgeneralization patterns that partly followed phonological patterns in Dutch, and no evidence for a clear unique default. Dąbrowska () investigated the acquisition of Polish genitive inflection and concludes that the three different markers (masculine, feminine, neuter) are overgeneralized by children at similar rates. Furthermore, Dąbrowska and Szczerbínski () found support for a frequency-driven error pattern in the acquisition of Polish noun inflection. They tested children with novel nouns in genitive, dative, and accusative case and found that in the dative case, children used masculine instead of feminine, while in the accusative case, they used masculine instead of neuter. The directionality of the errors reflected overuse of high-frequency forms where lower-frequency forms are required. In this section some findings from languages other than English are reviewed. These studies indicate that investigations into minority-default systems challenge the dual mechanism approach. In addition, error patterns in languages with inflectional systems that are richer than the English system suggest that errors are determined by lexical properties, which is difficult to reconcile with the dual mechanism assumption of a unique default.

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi



 

Only a limited number of languages have been discussed. For further information on crosslinguistic patterns in morphological development, see Slobin (, , ) and Levy (). For acquisition patterns in verb inflection and noun inflection across a wide range of different languages (Croatian, Dutch, Estonian, Finnish, French, German, Greek, Italian, Lithuanian, Palestinian Arabic, Russian, Spanish, Turkish, Yucateco Maya), the volumes by Bittner et al. () and Stephany and Voeikova () are relevant. Information on the acquisition of inflection in Italian and its theoretical consequences can be found in the study by Pizzuto and Caselli (). MacWhinney () discusses the acquisition of Hungarian morphology in light of the distinction between rote-memorization versus rule-extraction. Nakipoglu and Ketrez () explicitly address the single versus dual mechanism debate in their study on Turkish inflection.

. Atypical development

..................................................................................................................................
There is a group of children—about  to  percent of the population—whose language development does not fall within the normal range, and for which there is no clearly detectable cause. These children have no hearing impairment, no obvious neurological deficit or developmental delays, and there is no diagnosis of autism that could explain their language problems. These children are often labeled as having Developmental Language Disorder, previously referred to as Specific Language Impairment (SLI; see Bishop  for a discussion of the terminology). One area of language that seems particularly affected by the impairment is the production and comprehension of morphology, including regular inflection (Leonard ).

A handful of studies have investigated SLI in light of the single–dual-route debate. Clahsen and Almazán (, ) compared children with SLI and children with Williams syndrome (WS) on their use of English past tense and noun plurals. In contrast to children with SLI, children with WS are generally verbally well developed, but their nonverbal IQ scores are below average. It was found that the WS group scored high on English regular past tense and noun plural inflection and low on irregulars, while the SLI group showed the exact opposite profile. Clahsen and Almazán interpreted this double dissociation as support for the dual-route assumption that grammar and lexicon are two separate systems, because this allows grammar to be selectively spared in WS and lexical knowledge to be selectively spared in SLI.7 A similar conclusion is reached in two other studies on the basis of frequency effects (Van der Lely and Ullman ; Marshall and Van der Lely ). Van der Lely and Ullman found that the SLI group in their study showed token frequency effects for regular past tense forms, in contrast to typically developing age controls.
Marshall and Van der Lely report the same outcome for phonotactic probability and, additionally, observe that in children with WS regular past tense inflection is not influenced by phonotactic probability. Both studies conclude that in children with SLI the rule module, grammar, is deficient and, therefore, children with SLI show a single-route profile whereas normally developing children use dual routes.

7 For both SLI and WS, the existence of selective language impairments has been challenged. For an overview of SLI, see Leonard (). For further discussion on WS, see Thomas and Karmiloff-Smith ().




The outcomes of studies on children with SLI seem to present compelling arguments in favor of the dual-route hypothesis. However, it is relevant to note that studies have challenged the existence of selective language impairments for both SLI (for an overview, see Leonard ) and WS (Thomas and Karmiloff-Smith ). Like Marshall and Van der Lely, Leonard, Davis, and Deevy () found that phonotactic probability had a different effect on the use of regular past tense in children with SLI and children with typical development. They conclude that children with SLI have difficulties generalizing regular past tense to more atypical instances. This interpretation would also be compatible with a single-route approach.

. Conclusions

..................................................................................................................................
In , Pinker and Ullman speculated on the future of the Past Tense Debate. The fundamental question addressed in this debate is whether morphologically complex forms are processed using one mechanism or two. In other words, is regular, predictable morphology a matter of lexicon or grammar? This question is at the heart of morphological theory: does the lexicon contain as little as possible, with regular, predictable words being generated by grammar, or do we assume a maximally rich lexicon? More than ten years later, the Past Tense Debate is still unresolved. In addition, the traditionally assumed distinction between lexicon and grammar has been challenged.

The child language data in this chapter provide by no means definitive answers to the above questions. There are indications that the omission of regular tense inflection is predicted by input frequency, but the interpretation of these findings is still open to different explanations. In order to understand why some studies find frequency effects and other studies do not, it is important to compare different frequency measures across ages, developmental phases, and language learning populations. Findings in studies that looked more closely at the developmental trajectory of overregularization, and in studies that investigated languages other than English, seem to favor the single-route hypothesis. However, more research is needed, in particular on highly inflecting languages. On the other hand, it is not entirely clear how single-route models would deal with the language profiles of children with language impairment. Taken together, avenues for future research concern, first, the influence of target language properties on developmental patterns and, second, individual differences between children.
Having come to the end of this chapter, I would like to return to the issue with which I started. Language acquisition data can inform morphological theories, but not falsify them because of the multi-faceted character of both language development and (inflectional) morphology. However, morphological theories with a broader empirical coverage may be preferred over theories with a smaller coverage. From this perspective, morphological theories should be able to explain not only cross-linguistic variation and individual variation in language acquisition data, but also interactions between morphology, phonology, and syntax as well as the influence of processing demands and cognitive growth. Theoretical progress can be achieved by the successful integration of all these aspects into morphological theory (Jackendoff b). This, in turn, will require close collaboration between theoretical linguists, experimental psychologists, and language acquisition researchers.


  ......................................................................................................................

      ......................................................................................................................

    

A a second language in adulthood can be a challenging undertaking. But the rewards are often great. For the adult learner, it can open up entirely new domains of experience and opportunity. Our goal in this chapter is to highlight the fact that the rewards of conducting research on second language acquisition and with second language learners are also great. The study of second language learning can lead to insights concerning the key properties of morphological knowledge and ability and how both of these may change over time. In this chapter, we focus on the acquisition of representations for inflection, derivational morphemes, and compounds, as well as on structural and representational issues in the bilingual lexicon. We will present different views of second language variability and accuracy as well as views on how to best understand the path of development among second language acquirers. We will also discuss methods and techniques used to gain understanding of morphological acquisition and processing. We use the term second language acquisition to refer (primarily) to the acquisition of additional languages by adults and, for space reasons, do not address variables such as whether the learners have been instructed in classrooms or are uninstructed (see Lightbown and Spada  for a discussion of this issue). We hope to show how linguistic theory and experimentation can be used to advance the understanding of morphological representation and processing. Furthermore, we probe how this understanding may enable fundamental insights into the dynamic nature of lexical representation and processing in the mind. Three main themes—representations, interfaces, processing—run through all the sections. We have chosen not to separate these strands by section to highlight the interwoven nature of L morphology.




. Introduction: W    ?

..................................................................................................................................
From the outset of research on L2 morphology, it has seemed that the most obvious empirical fact that needed to be documented and explained was that, anecdotally at least, second language learners often omitted inflectional morphemes in their L2 production. It has also been observed that some morphemes are more prone to omission than others. From this simple starting point, we can begin to probe competing accounts of what might underlie these phenomena. In this section, we survey some of the differing theoretical accounts (couched within a generative framework) which have been proposed to account for the differential difficulty of morphemes. We also survey some of the details which probe the empirical accuracy of the claim that surface omission means lack of acquisition, or at least reveal the complexity underlying omission in production. Morphology is no different from other domains in that the following points hold:

1. Not all forms are acquired at the same time. It would be trivial to note that if we made a list of all the morphemes (or even all the inflectional morphemes) of a particular language, there is a timeline to their development in an individual. The question of interest is what explains this developmental path. Is it something specifically linguistic (e.g. features, syntactic projections, phonological structures, etc.) or something associated with more general cognitive factors (e.g. frequency, acoustic salience, etc.)?

2. Plato's problem. As Chomsky () pointed out, we acquire knowledge for which the environmental evidence may be severely impoverished, in that it is insufficient to account for the final state of the grammar achieved. Learners may end up representing abstract linguistic elements even though there are no direct acoustic cues that signal the need for these elements.
In phonology, for example, it has been argued that while both French and English have a phonemic /p, b/ contrast, French represents the phonological feature [voice] while English represents the feature [spread glottis] (Iverson and Salmons ). Somehow, L1 children acquiring those languages are exposed to the relevant input data and acquire the target-like representation. L2 learners would face the same task, and the cue to this feature may well be subtle. The same would be true of the morphological acquisition problem facing someone who has L1 [aspect]1 and needs to acquire L2 [tense]. The relevant cues in the input may be subtle (see Slabakova ). We will explore this issue later.

3. Orwell's problem. Chomsky () additionally notes that we are also resistant to the acquisition of knowledge which is quite frequent in the environment. For example, there may be lots of [h] sounds in the input L2 learners are exposed to and yet they

1 By convention, we will insert both phonological and morphosyntactic features in square brackets. We are presenting features such as [aspect] and [tense] at an abstract level as differing theories represent these features differently; the details are not relevant to the argument here.

may be resistant to the acquisition of /h/ (Mah ). There may be many tokens of plural in the input, and yet plural marking is frequently omitted.

So, let us turn to the objects of investigation and the explanations which underlie them.

.. Early studies of morpheme acquisition

In the 1970s there was much interest in the order of acquisition of morphemes in L2 learners. Much attention was paid to the question of whether adult L2 learners acquired grammatical morphemes (Dulay and Burt ) in the same order as child L1 learners (Brown ). These studies looked at the acquisition of English morphemes such as: plural, past tense, progressive aspect, perfective aspect, 3rd person agreement, possessive, irregular verb morphology, copula verbs, articles, etc. There were many reports which focused on the commonalities of the orders (Bailey, Madden, and Krashen ; Larsen-Freeman ; Krashen ; Andersen ). Even if such commonalities were accepted (Larsen-Freeman and Long ), the question of explanation for the common order, given the diversity of morphemes in question—free, bound, nominal, and verbal—needed to be tackled. The locus of the explanation here was viewed to be an autonomous morphological domain. Furthermore, input frequency, according to Brown, was not correlated with order of acquisition. Goldschneider and DeKeyser () provide a valuable meta-analysis of many of these studies; see also Hulstijn, Ellis, and Eskildsen ().

.. Generative approaches to morpheme acquisition

In the 1990s, there was a flurry of inquiry which sought to explain the absence of certain inflectional morphemes, and the developmental path taken by L2 learners, in terms of their representations of functional categories in the syntax. Zobl and Liceras () were among the first to propose a syntactic explanation of the observed orders of acquisition. One of their goals was to try to tease apart representational effects from task effects in order to determine if the omission of morphology was a result of competence or performance. Drawing on the influential work of Meisel, Clahsen, and Pienemann (), who noted common orders of acquisition of German morphology in non-native speakers, Vainikka and Young-Scholten () proposed a maturational account for second language acquisition based on the proposals for first language acquisition (seen in Radford ; Guilfoyle and Noonan ). They suggested that L2 learners would become accurate on Verb Phrase (VP) morphology (like tense) before Inflectional Phrase (IP) morphology (like modal auxiliaries), and on IP morphology before Complementizer Phrase (CP) morphology (like WH question complementizers), because the higher representational projections were actually absent from the mental representations in early stages of second language development. This proposal may be represented as the three-stage process shown in Table ... For Vainikka and Young-Scholten (fleshed out in their more recent Organic Grammar work, ), morphology may be omitted in production because the necessary syntactic structure has not yet been built by the learner. This approach would be contra Schwartz and Sprouse () and White (), who assume Full Transfer (from the L1 to the L2) of, and Full Access to, all syntactic functional categories from the earliest stages of acquisition. It is not our purpose here to outline their




Table .. Three stages in the Vainikka and Young-Scholten () maturational account of second language acquisition Stage 1

Stage 2

Stage 3

VP morphology

IP morphology

CP morphology

VP

VP & IP

VP & IP & CP

proposals in detail, but we would like to draw attention to the differing accounts of variability in production. For models which assume an impoverished representation, the burden is to explain why the correct forms are produced at all. For models which assume complete representations, the burden is to explain why the correct forms are not always produced.

... Representational deficit approaches

Hawkins and Chan () propose a representational account for the lack of certain L2 morphemes in the speech of adult second language acquirers. The logic of their position goes something like this: if a representational feature (e.g. [past]) is lacking from your L1, then you will be unable to acquire this feature in your L2. This certainly seems to account for the omission of, say, tense morphology by Chinese learners of English under the assumption that Chinese marks [aspect] rather than [tense]. Hawkins and Chan () claim that tense markers would be omitted from verbs by adult learners of certain languages because the [tense] node, which is absent from the L1 representational system, cannot be acquired by an adult in the L2. Thus, Hawkins and Chan assume that the acquisition problem is a maturational problem. Hawkins and Chan's () proposal was revised by Tsimpli and Dimitrakopoulou () to account for the presence of some morphemes in the grammars of second language learners (those relying on interpretable features) paired with the absence of others (those relying on uninterpretable features). Chomsky (), in the Minimalist Program, makes a distinction between features which are semantically interpretable and those which are not (and, hence, have only a syntactic function). Interpretable features would be such things as [finite], [plural], and [past]. Uninterpretable features would be such things as [3rd person] and other agreement features.2 This position would be supported if learners were more accurate on, or more likely to acquire, features like [plural] compared with features like [3rd person ‑s]. In both Hawkins and Chan () and Tsimpli and Dimitrakopoulou (), there is a claim that the malfunction in the L2 grammar is in a core linguistic module (be it syntactic or lexical).
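The two deficit accounts just described make different predictions about which features an adult learner can acquire. The following toy sketch is our own illustration, not the authors' formalism; the feature names and the membership of the interpretable and uninterpretable sets are simplified assumptions for exposition.

```python
# Toy sketch (our illustration) of two representational-deficit accounts.
# Feature inventories are simplified stand-ins, not a real feature theory.

INTERPRETABLE = {"finite", "plural", "past"}          # semantically interpretable
UNINTERPRETABLE = {"3rd-person-agr", "gender-agr"}    # purely syntactic

def acquirable_hawkins_chan(feature, in_l1):
    """Hawkins & Chan: a feature is acquirable in the L2 only if it is
    already instantiated in the L1."""
    return in_l1

def acquirable_tsimpli_dimitrakopoulou(feature, in_l1):
    """Tsimpli & Dimitrakopoulou's revision: interpretable features are
    acquirable even when absent from the L1; uninterpretable ones are not."""
    return in_l1 or feature in INTERPRETABLE

print(acquirable_hawkins_chan("past", in_l1=False))              # False
print(acquirable_tsimpli_dimitrakopoulou("past", in_l1=False))   # True
print(acquirable_tsimpli_dimitrakopoulou("3rd-person-agr", in_l1=False))  # False
```

On this sketch, a Chinese-speaking learner of English (no L1 [tense]) is predicted to fail on [past] under the first account but to succeed under the second, while both accounts predict persistent trouble with agreement.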

... The locus of the deficit revisited

There are others (Lardiere , ; Prévost and White ), though, who suggest that the malfunctioning is in a more peripheral component of the language system. The representational feature may be nativelike (or targetlike) but the mapping of that feature

2 See Hornstein, Nunes, and Grohmann () for more discussion of interpretable and uninterpretable features.


onto a morphological form is non-nativelike. Lardiere provides data from a subject whose L1 lacks [tense] but who, she argues, demonstrates an awareness of the tense distinction, which suggests that she has, in fact, acquired the representation in her L2 even though the L1 is lacking such a feature. There are two arguments we would like to refer to briefly which indicate that the underlying feature has, in fact, been acquired: (1) nominative case, and (2) regular versus irregular forms.

In many syntactic theories there is a connection made between case marking and tense marking. Consider the sentence 'I believe him to be a liar.' For many syntacticians, the subject of the non-finite (or tenseless) clause 'to be a liar' receives accusative case (i.e. him). For a traditional grammarian, it is surprising to have an accusative subject (subjects are usually nominative). But note that 'him' cannot be the object of 'believe', as you do not, in fact, believe him at all; you believe him to be a liar. A consequence of having a finite/non-finite distinction in the grammar, then, is to accurately mark nominative versus accusative case. Lardiere ( and elsewhere) showed that her subject, who often omitted past tense morphology, had certainly acquired appropriate case marking in infinitival clauses. She, therefore, argues that accurate case could not have resulted without the proper Tense feature being in place.

The second argument is that if the past tense feature is absent, then we should expect deficient production in both regular and irregular verbal forms. Lardiere notes that while the regular forms are often missing their inflectional morphology, irregular forms may be produced correctly. These data make it problematic to say that the past tense morpheme is absent completely. While some would make a distinction between the storage of irregulars as opposed to the computation of regular forms (e.g.
Ullman ; see also Blom, Chapter  this volume), for Lardiere both types of lexical forms are inserted under the Tense node of a syntactic tree given that Tense is an obligatory element in all sentences. See also Fruchter, Stockall, and Marantz () for arguments against the storage of irregulars.

.. Variation at the interfaces

Franceschina () categorizes this kind of approach as one which localizes the problem at the interface of morphology and syntax. She presents some data from the second language acquisition of Spanish gender by an English speaker. Spanish has syntactic gender agreement (i.e. an uninterpretable gender feature which appears on determiners and adjectives, in addition to an interpretable gender feature which appears on nouns). She reports on the production data of a near-native subject who started learning Spanish at the age of  but who lived in a Spanish immersion environment for a total of  years ( of which were uninterrupted). Franceschina's subject (Martin) did not make any errors on the gender assignment of nouns, very few errors on number agreement, but significant errors on gender agreement on determiners and adjectives. She argues that this is not just a morphological problem, then, as there is no reason to expect certain grammatical categories (e.g. nouns versus adjectives) to be subject to more errors than others. She argues that these results are more in line with a syntactic feature deficit account when it comes to agreement.

Sorace and Filiaci () suggest that the locus of explanation for variability is the interface of syntax and other cognitive domains. The variability in production would arise from what they refer to as 'incomplete acquisition' (see also Montrul ). The term, while




somewhat controversial, is invoked to capture non-nativelike performance (high error rates, optionality, indeterminate knowledge). Features which are entirely syntactic (or at the syntax–semantics interface) would be produced more accurately than features which mark information found at the syntax–pragmatics interface. Sorace and Filiaci () report that second language learners of Italian were nativelike in their interpretation of null subject pronouns but non-nativelike in their interpretation of Italian overt subject pronouns. The null pronouns are governed by strictly syntactic principles, while the use of overt pronouns is governed by contextual or pragmatic factors.

There is also a rich vein of work, summarized by Clahsen et al. (), which argues that variability in the production of affixes (indeed, suggesting a fundamental difference between first and second language acquisition) can be ascribed to processing differences. These may be connected to the dual mechanism approach of Ullman (), in which adult second language learners rely more heavily on declarative than procedural processing. Such processing accounts may, of course, also refer to differing representational levels as the source of variation (Morgan-Short et al. ).

.. Language tags on morphemes

In this section, we introduce the notion of language tagging to look at questions of developmental path and variability in another way. Following Grosjean's () observation that multilingual brains (i.e. brains with representations of more than one language) are more common on the planet than monolingual ones, we can posit that language tags must be built in as a design property of lexical entries. What is a language tag? It has long been noted (Giegerich ) that monolinguals may have tags as part of a lexical entry to mark such things as different lexical classes (such as Latinate vocabulary in English) or lexical strata (Chinese vocabulary in Japanese; Ito and Mester ). By extension, we suggest that all lexical roots and affixes in a given language would, in fact, be tagged (as English, or German, or Danish) even in monolingual speakers. Green () noted how we can suppress the language items which, while activated, do not belong to the language of the conversation or context. Which items get suppressed? The items of a particular language tag. So, we assume that the lexical entries in the bilingual brain are tagged for the appropriate language. It is easy to see how this can function in the speech of proficient bilinguals; we speak French in a French context, and Swedish in a Swedish context. But it is less obvious how these tags might be acquired.

We would like to suggest that there are parallels in morphology to what we have seen in phonology. We are familiar with people having to learn new sounds which are not found in their L1. You might have to learn a [θ] or a [ɹ] or a [y]. In the initial state of your second language learning (and perhaps even at advanced proficiency) you may make either perceptual or production errors. You might perceive [y] as [u]—if you are an English speaker—or as [i]—if you are a Portuguese speaker (Rochet ).
You might produce a [θ] as a [t] (if you are a Quebec French speaker learning English), and you might produce an ejective [p'] as a [p] (if you are a Spanish speaker learning Yucatec Mayan). The point is that you have to learn something new in the L2 which is absent from your L1, and there may be false equivalencies assigned between the two languages. This process of transfer results, in this case, in a foreign accent. Learners may, for example, equate the realization of their English /p/ with the realization of


the French /p/. In both languages, they hear a [p] and a [b] but have to acquire the feature [voice] in one case and the feature [spread glottis] in the other (Iverson and Salmons ).

Acquisition of L2 morphology is analogous. Subjects have to learn some new morphemes which are not found in their L1, and which may have false equivalencies assigned along the developmental path. Sometimes this kind of morphological transfer may be subtle. Let us return to the examples from Lardiere and Hawkins. They both looked at the question of how speakers of an [aspect] language like Chinese could acquire tense morphology in a language like English. We would argue that the learner would have aspectual morphemes (e.g. perfect versus imperfect) in their L1 stored with Chinese language tags but would have to acquire representations of L2 tense morphemes (e.g. past versus non-past) with English language tags. False equivalencies may be set up as the learner perhaps initially re-lexifies and stores an English phonetic string (say past tense) to link to a Chinese morphological feature (for perfect aspect). Thus, it is important to note that acquiring L2 morphology is a learning problem and not merely a noticing problem (Schmidt ). The phonetic string can be noticed in the input stream but the underlying morphological feature (e.g. [tense]) must be learned and represented. Perhaps initially, learners would equate Chinese [aspect] marking and English [tense] marking. Indeed, translating sentences would often suggest equivalency (tones are not marked in this example):

()  a) Mandarin: Ta chi fan le.
                 he/she eat meal PERF
                 'He completed the action of eating.'
    b) English:  He ate.

But the Mandarin 'le' does not indicate past tense on the verb (i.e. 'eat' versus 'ate'); rather, it indicates a completed action (more like 'eats' versus 'has eaten'). As in English, the completed action could be in the past, present, or future ('had eaten', 'has eaten', 'will have eaten').
The data to tell a learner that English is a language which marks tense (with features like [past]) while Chinese is a language which marks aspect instead (with features like [perfective]) may be subtle. Our point is that whatever evidence there is, learning must take place in order for the correct feature to be represented and associated with the appropriate language tag. Lardiere's analysis of these data would be viewed as supporting the claim that bilinguals acquire new morphemes which are language-tagged appropriately even though their production is sometimes non-nativelike. In our view, this fits with the Homogeneity Hypothesis (Libben ) in that, while tasks affect accuracy and lexical entries must be tagged appropriately, the essential architecture and processing mechanisms of monolinguals and bilinguals are the same (Libben and Goral ). The Homogeneity Hypothesis would also lead us to expect that there is no encapsulation of morphology or of a language-specific lexicon. The alleged monotonic developmental path described by the order of acquisition studies (in §..) is actually the result of the interaction of phonology, morphology, syntax, and semantics, and is subject to task effects (Lin ). To summarize, the knowledge and behaviour of bilinguals and second language learners in the realm of morphology is just like any other component of the grammar we may choose to look at.
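The tagging-and-suppression idea developed in this section can be illustrated with a toy data structure. This is our own sketch, not a model proposed in the chapter: the entry format, the feature notation, and the two example morphemes are hypothetical, chosen to mirror the Chinese–English discussion above.

```python
# Toy sketch (our illustration) of language-tagged lexical entries and
# tag-based suppression in a bilingual lexicon.

from dataclasses import dataclass

@dataclass
class LexEntry:
    form: str      # phonetic string, e.g. an affix
    feature: str   # morphosyntactic feature it spells out
    tag: str       # language tag

lexicon = [
    LexEntry("-ed", "[tense:past]", "English"),
    LexEntry("le", "[aspect:perf]", "Mandarin"),
]

def select(lexicon, context_language):
    """All entries are activated; entries whose tag does not match the
    language of the conversation are suppressed."""
    return [e for e in lexicon if e.tag == context_language]

# A re-lexifying learner may build a false equivalency: the English
# string '-ed' mis-linked to the L1 feature [aspect:perf].
false_equivalency = LexEntry("-ed", "[aspect:perf]", "English")

print([e.form for e in select(lexicon, "English")])   # only English-tagged items
print([e.form for e in select(lexicon, "Mandarin")])  # only Mandarin-tagged items
```

On this picture, acquiring L2 morphology means both learning the new feature and tagging its exponent correctly; the `false_equivalency` entry stands for the intermediate stage described in the text.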




.. The phonological interface

Following the argument, then, that an underlying grammatical feature may be present in the representation but not mapped onto the correct target language morpheme, we are led directly to investigate the role phonology plays in accounting for the omission of certain morphemes in production. The Prosodic Transfer Hypothesis (PTH) assumes (Goad, White, and Steele ) that what transfers to the initial state of the second language is the full grammar of the L1 (in this case, the focus is on the phonology; see Schwartz and Sprouse  for more detail on the Full Transfer/Full Access model). A model of prosodic phonology (based on Nespor and Vogel ) would assume the levels in the prosodic hierarchy shown in Figure ... The details of each level of representation are not critical to our arguments here. Suffice it to say that there are phonological patterns cross-linguistically (e.g. where we find phenomena such as aspiration, or devoicing, etc.) that can be explained with reference to these phonological domains. So, English may aspirate voiceless stops at the beginning of a foot; German may devoice obstruents at the end of a syllable; French may place stress at the end of a phonological phrase. Crucial to our discussion, though, is the notion that these constituent phrasal structures (e.g. foot, syllable, mora) are what transfers to the L2 grammar and what will explain the production data (particularly the omission of certain morphemes). Goad and White () and Goad, White, and Steele () propose that if a particular morpheme is not produced, maybe it is because it is composed of sound sequences which are difficult to pronounce, and not because the morpheme itself has not been acquired (though some recent work by Lieberman () suggests that the PTH may, in fact, apply to perceptual data as well).
Learners would omit morphemes when the L1 prosodic structure (which is transferred into the L2) does

Intonational Phrase > Phonological Phrase > Clitic Group > Phonological Word > Foot > Syllable > Mora > Segment

Figure: Levels in the prosodic hierarchy


not license the production of the spell-out of the morpheme. English inflectional morphology is assumed to adjoin to the Prosodic Word (PWd), as shown below for the word helped:

[PWd[PWd help] t]

In other words, we begin with the Prosodic Word help, add to it the past tense affix [t], and derive a complex Prosodic Word helped. Mandarin, by contrast, adjoins inflectional morphology (such as aspect) within the Prosodic Word, at the Foot level, as we can see in the structure for the perfective form of the verb 'buy':

[PWd [Ft [σ mai] [σ lə]]]

Once again, the details of the argument for how we can tell that the perfective affix is attached to a Foot and not to a Prosodic Word are not critical; we ask the reader to accept the differing structural analyses of the two languages. Note, then, that Mandarin speakers must not only acquire the English [tense] feature but also learn to adjoin morphology at the Prosodic Word level. If they are unable to prosodify the structure, the surface inflection will be missing for phonological reasons. This is exactly parallel to a perhaps more familiar data set: an L2 learner may omit consonants if the L1 grammar does not allow coda consonants. In a word like kick, the [k] is in the syllable coda. When we add a past tense morpheme, we derive the complex coda sequence [kt]. So, if the learner does not produce the [t] in a past tense word, it may be for phonological, not morphological, reasons. Let us explore this in more detail.
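The contrast between the two adjunction sites can be made concrete with a small sketch. Here the two structures above are encoded as nested (label, children) tuples, and a helper reports the node to which the affix is adjoined. This is our own toy illustration, not an analysis tool from the literature.

```python
# English past tense: affix adjoined to the Prosodic Word (PWd).
helped = ("PWd", [("PWd", ["help"]), "t"])

# Mandarin perfective: affix syllable adjoined inside the PWd, at the Foot.
mai_le = ("PWd", [("Ft", [("syl", ["mai"]), ("syl", ["lə"])])])

def adjunction_site(structure, affix):
    """Return the label of the node whose child list contains `affix`
    (either directly or wrapped in a syllable node)."""
    label, children = structure
    for child in children:
        if child == affix:
            return label
        if isinstance(child, tuple):
            if child[1] == [affix]:       # syllable wrapping the affix
                return label
            found = adjunction_site(child, affix)
            if found:
                return found
    return None
```

The helper returns "PWd" for the English [t] and "Ft" for the Mandarin perfective syllable, mirroring the structural difference in the text.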

Right-edge clusters

The morphology–phonology interface, then, is important to understanding the nature of morphology in developing systems. In English, the production of inflectional morphology is confounded with the presence of the right-edge consonant clusters that mark this morphology. The basic question here is whether what is interpreted as the lack of, say, past tense morphology is in fact an inability to produce consonant clusters. So, if someone says, 'Yesterday, I [wɑk] to the store', are they producing a non-past verb or a [+past]-marked verb with the final consonant deleted because of L1 transfer? The answer to this question speaks to the explanation of the phenomenon.

Abrahamsson looked at a number of aspects of the acquisition of Swedish consonant clusters by speakers of Chinese. As we will see, the production of these clusters is influenced by the morphology of the language. Abrahamsson draws on a fact noted at least since Weinberger: even when clusters are produced in a non-nativelike way, they can be repaired in more than one way. Broadly speaking, we see epenthesis strategies and deletion strategies. A deletion strategy would be to produce a word like 'went' as [wε], while an epenthesis strategy would be to produce it as [wεntə]. A number of factors influence whether an L2 subject will prefer an epenthesis or a deletion strategy. One is the L1. Italian learners of English may epenthesize at the end of a closed syllable in order to produce a CVCV pattern, so 'bed' might be produced as [bεdə]. Chinese speakers, on the other hand, tend to maintain the L1 preference for a CV syllable by deleting coda consonants, so that 'wet' would be pronounced as [wε] or [wεʔ]. But
Abrahamsson notes that even the Chinese L2 subjects produce more epenthetic forms as their L2 proficiency rises. There may well be a communicative reason for this, as epenthetic forms seem to allow the listener to recover the intended lexical item more easily: 'wet', 'when', and 'went' might all be produced as [wε] under a deletion strategy, which could lead to confusion, while epenthesis would allow disambiguation to [wεtə], [wεnə], and [wεntə]. All of this reveals that inflectional morphemes are not simplistically 'left out' or 'put in'; as with other aspects of L2 performance, their production is governed by more complex interactions of factors.

Lin probed an interesting methodological point about the data to which we apply our explanations. So far, we have been asking the somewhat simplistic question which begins 'if morphology is omitted in production . . .'. But, of course, we need to ask under what circumstances the data are collected. What are the production tasks, and do different tasks result in different patterns? Lin gathered production data via four different tasks: word-list reading, sentence reading, structured interview, and free conversation. She also notes the variation between epenthesis and deletion strategies but, in her Chinese-L1/English-L2 population, finds that epenthesis is favoured in the more formal tasks and deletion in the less formal tasks. Space prohibits us from addressing the details, but the reminder that task effects exist when we use data to infer the nature of the underlying grammar is welcome.
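The two repair strategies and the ambiguity problem they create can be sketched as follows. The transcriptions and rules are our own toy assumptions, not Abrahamsson's or Lin's materials.

```python
VOWELS = "aeiouɛɑ"

def delete_coda(word):
    """Deletion strategy: drop everything after the first vowel."""
    for i, ch in enumerate(word):
        if ch in VOWELS:
            return word[: i + 1]
    return word

def epenthesize(word):
    """Epenthesis strategy: break up a final consonant (cluster) with schwa."""
    return word + "ə" if word[-1] not in VOWELS else word

targets = ["wɛt", "wɛn", "wɛnt"]          # 'wet', 'when', 'went'
deleted = {delete_coda(w) for w in targets}       # all collapse together
epenthesized = {epenthesize(w) for w in targets}  # remain distinct
```

Under deletion all three targets collapse to a single form, while epenthesis keeps them distinct, which is the communicative advantage noted above.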

Production versus comprehension

The question of whether we are looking at production or comprehension data is important for two reasons. The first is that different accounts make different empirical predictions. If the representations of the grammars are impoverished (under a Representational Deficit model such as that of Hawkins and Chan), then we would expect across-the-board production and comprehension deficits. If the syntactic representations are not impoverished, we might expect intact comprehension but variable production, given other external factors (task complexity, etc.). The Prosodic Transfer Hypothesis argues (pace Lieberman) that omission should occur in production tasks but not comprehension tasks.

The second reason is that this brings us to the question of what data we are, in fact, looking at in making claims about developmental sequence: production or comprehension? In many areas of second language acquisition research, it has been argued that comprehension (or perception) data provide a more direct window onto grammatical knowledge, as they are not subject to the performance variables that affect a variety of production tasks. For example, as limited-capacity processors, we may be able to produce a form (say, past tense) in a context-reduced environment, but if we are trying to insert a low-frequency verb into a passive sentence while making a question out of an embedded phrase, then we may well make a mistake on the past tense morphology. Comprehension, however, is much more automatic. As others have noted, we cannot suppress our comprehension: if we have the relevant lexical items, we cannot fail to understand when we hear a word like 'cat' or 'walked'. It may well be the case, then, that the original interest in a so-called order of acquisition may be explained better by more sophisticated production
models than by grammatical theories. Therefore, the more recent literature (e.g. Lardiere; Clahsen et al.) attempts to synthesize our current knowledge of representation, production, and perception. It is also fair to say that the original list of morphemes investigated in the so-called Order of Acquisition studies (e.g. Dulay and Burt) was not grammatically homogeneous (consisting of verbal and nominal affixes, contractions, and free morphemes), and hence it is not surprising that we cannot find a single explanation to account for their acquisition, production, and perception.

From inflection to derivation

As we have discussed above, the study of inflectional morphology among second language speakers started out as a story of individual morphological forms, their order of acquisition, and their relative difficulty. Subsequent explanations of data patterns have focused not only on the individual morphological forms, but have also sought to draw conclusions regarding what a second language speaker acquires in the way of a system, what knowledge system underlies second language performance, and whether or not that knowledge differs in kind from the knowledge possessed by a native speaker. In our discussion above, we have also noted that the perspectives of language production and comprehension can offer quite different vantage points from which to understand morphological representation and processing in a second language. Furthermore, within each of these domains, different investigative methods have the capacity to reveal very specific aspects of knowledge and performance. Thus, the methodology employed is a key factor. We will return to this issue in the last section of this chapter.

To date, there are relatively few studies that have focused on derivational morphology in a second language. In the literature that does exist, there are points of similarity between inflectional morphology and derivational morphology. But there are also very substantial differences. Chief among these is that, in contrast to research on inflectional morphology, research on derivational morphology has not focused on the order of acquisition of individual morphemes, nor on their relative difficulty in production.
The reason for this is very likely that, in contrast to the omission of inflectional morphemes, which is often salient in the speech of second language speakers, the derivational abilities of second language users are not as easily observed. Indeed, there is some need to reflect on what might constitute the ability to do morphology. Libben claimed that morphological ability is the ability possessed by an individual speaker of a language to make use of the meaningful substrings within a word in a manner that enables the processing of new forms while maintaining properties of established words. He claims that this ability can be broken down into the following four abilities:

(a) The ability to repeat, comprehend, and produce multimorphemic words with appropriate semantic and syntactic properties. This comprises, for example, the ability to produce and understand the word formal as an adjective, but formality as a noun. It is clear that this ability is variable across a
population of first and second language users. For example, to a native speaker of English, a word such as finality may have only the constituents final and ‑ity. A second language user of English whose first language is French, however, may have a somewhat richer mental representation, one which includes a semantic representation for the root fin in finality, corresponding to the French word fin, meaning 'end'.

(b) The ability to understand novel multimorphemic constructions. This refers to the ability to interpret a novel construction such as keyprint through access to the meanings of its constituents and to know that, because English is a language in which the final constituent of a compound word is the head, keyprint must be a type of print, but printkey must be a type of key.

(c) The ability to produce novel multimorphemic constructions. This ability is at the core of an individual's capacity to use morphological knowledge to create new words such as borealize or screenless. Although this might, at first blush, seem not to be particularly important for second language speakers, because they might be inclined to leave the expansion of a language's vocabulary to native speakers, it is in fact extremely relevant: second language speakers need the ability to use morphological patterns in order to fill gaps in their own stored vocabulary during real-time communication.

(d) The ability to employ morphological patterns within the language in order to organize vocabulary knowledge. This ability is the least overt of the four, but is perhaps the most important. It is most often revealed through psycholinguistic experimentation in which facilitation and interference among multimorphemic words are investigated.
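Ability (b), interpreting a novel compound via English right-headedness, can be sketched in a couple of lines. The keyprint/printkey example comes from the text; the function itself is our own toy illustration.

```python
# Toy illustration of ability (b): English compounds are right-headed,
# so a novel compound X+Y denotes a kind of Y.
def interpret_novel_compound(modifier: str, head: str) -> str:
    return f"{modifier}{head} is a type of {head}"
```

Swapping the order of the constituents swaps the head, which is exactly what distinguishes keyprint from printkey.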
Libben () notes that morphological patterns bring systematicity and cross-referencing ability to an individual’s vocabulary system so that formation, formlessness, and uniformity can be linked in the mind despite the diversity of their whole-word meanings. Similarly, the links among words such as flashlight, penlight, searchlight, daylight, and nightlight create associations in a mental network into which new forms such as desklight can be incorporated. An interesting study of morphological ability and the means by which it can be developed was reported in the PhD dissertation of Friedline (). In this study, a variety of tasks were used to capture second language derivational ability. These included a lexical decision task, a word relatedness task, and a word analysis task. The results showed that the native speakers of English outperformed non-native speakers on these tasks. Non-native speakers in the study were adult students ranging in proficiency from low intermediate to advanced and who were enrolled in an intensive English programme at a large public university. These learners had relatively little difficulty identifying derived real words. However, they performed less well on non-existing words such as *arrivable. They had difficulty judging that although thoughtfulness is an acceptable English word, changes in the ordering of the suffixes to form *thoughtnessful results in a string that is not an acceptable word of English. Second language users also showed difficulty in the identification of the constituent morphemes of derived words, particularly when affixation was accompanied by changes to the stem. Thus, for example, because the noun form of profound is profundity (with an associated change in the vowel of the stem), it was less likely to be recognized as profound+ity.

Taken together, the results of Friedline suggest that, of the four skills identified by Libben, the non-native participants showed relatively stronger performance in skill (a), which in this case was simply the ability to correctly identify and process existing derived words. Friedline's account of this is in accord with the points raised by Clahsen et al., who in turn draw on the proposals of Ullman. The basic claim of Ullman is that language processing can draw on two cognitive systems: the declarative system and the procedural system. The declarative system is the memory system that we would usually associate with the storage of words in the mind. The procedural system is the system that we most naturally associate with grammatical ability, which in this case corresponds to morphological ability. Ullman claims that there is a maturational progression toward greater reliance on the declarative system. Thus, in most cases, L2 acquisition will tend to rely more on the declarative system than L1 acquisition does. Clahsen et al. note that this leads to the prediction that L2 morphological processing will rely less on morphological decomposition than L1 morphological processing does. Thus, it might be said that the results reported by Friedline arise from second language speakers being what we might term 'less morphological'.

The view that second language users rely less on morphological processes in their comprehension of derived words receives partial support from the results reported by Silva and Clahsen. They used a morphological stem-priming paradigm with lexical decision latency as the dependent variable. Participants were native speakers of German, Japanese, and Chinese who spoke English as a second language and were, at the time, students at the University of Essex.
Thus, they could be considered high-functioning second language students who, as university students, routinely encounter multimorphemic English words that are novel to them. In the priming paradigm used by Silva and Clahsen, participants saw a prime word (e.g. kindness) presented briefly on a computer screen, followed immediately by a target word (e.g. kind), which the participants had to judge as either a word or a nonword. Response times for correctly responding 'yes' to the target word were compared to response times for cases in which the target (e.g. kind) was preceded by an identity prime (e.g. kind) or an unrelated prime (e.g. raw). Silva and Clahsen found that, like native speakers of English, second language speakers show faster response times for targets preceded by multimorphemic words that contain them than for targets preceded by unrelated words. However, unlike native speakers, who show no significant difference between latencies in the morphological condition (e.g. kindness → kind) and the identity condition (e.g. kind → kind), the second language users showed slower response times in the morphological condition. Results such as these led Silva and Clahsen to suggest that the procedural/declarative distinction of Ullman may explain the attenuated priming effects (as well as the absence of priming effects in their study of inflected forms using the same paradigm).

It seems to us that the identification of two ways to process derived words is of substantial importance in explaining the processing of derivational morphology among second language users. Indeed, the history of psycholinguistic research on the representation and processing of such words is very much a history of the interplay between these two approaches. This interplay is inherent in the nature of derived words.
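The design of the priming paradigm described above can be summarized schematically. The stimulus words come from the text; the presentation durations reported in the original were lost in our source, so none are encoded here, and the relatedness check is a deliberately crude stand-in.

```python
# The three prime conditions of the stem-priming paradigm (target: 'kind').
CONDITIONS = {
    "morphological": ("kindness", "kind"),
    "identity":      ("kind", "kind"),
    "unrelated":     ("raw", "kind"),
}

def is_related(prime, target):
    """Crude relatedness check: does the prime contain the target stem?"""
    return target in prime

related = {name for name, (prime, target) in CONDITIONS.items()
           if is_related(prime, target)}
```

Both the morphological and the identity conditions count as related; the empirical question is whether response times in the two related conditions differ, which is where native and non-native speakers diverged.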
By definition, derivational morphology creates new words. So, the prefix re- and the bound root ceive may be said to combine to create receive. The prefix re- and the free morpheme fill combine
to create the form refill, and the prefix re- and the free form search combine to form the prefixed word research. Similarly, on the suffix side, the free form teach and the suffix ‑er combine to create the word teacher. And then there are forms such as butcher. In one sense, butcher is not morphologically decomposable; and yet it seems intuitively reasonable to expect that full morphological command of English requires that some links (or at least parallels) exist between forms such as teacher and butcher, and perhaps even refill and research.

Indeed, the seminal studies of Taft and Forster, which can be said to have begun psycholinguistic research on morphological processing, focused precisely on stimuli such as refill and research. The position advanced by Taft and Forster was that all prefixed words are subject to automatic and obligatory morphological decomposition, a position extended and refined in Taft's later work. The logic behind the hypothesis incorporates the insight that prefix stripping creates storage efficiency by reducing the number of elements that need to be represented in the mental lexicon, and it also provides a natural way to account for how words such as fill and refill are linked in the mental lexicon.

The initial proposal of Taft and Forster was a bold hypothesis concerning how derived words are processed. But in the research that followed it became clear that, even if morphological decomposition is performed automatically and obligatorily (given the language user's state of knowledge), it does not follow that derived forms have no full-form representations in the mental lexicon. This need to develop models that capture relations among whole-word and constituent representations has been addressed in a number of models, including the Supralexical Model of Giraudo and Grainger and parallel-route models in which whole-word access and decomposition compete in a type of horse race (e.g.
Bertram, Schreuder, and Baayen; Schreuder and Baayen; see also Gagné and Spalding, this volume). Which route turns out to be faster, and thus the one employed for a particular word, will depend on a large number of factors. The lexical frequencies of the stem and of the derived form would certainly be among them. And this has important consequences for the interpretation of experiments such as those reported by Silva and Clahsen, in which a typically lower-frequency affixed form (e.g. kindness) served as a prime for a typically higher-frequency target root word (e.g. kind).

These methodological details also have consequences for another issue that we raised in the section on inflectional morphology: the issue of how different methods may target different levels of representation. It is quite possible that, among native speakers of English, the words refill and research undergo automatic and obligatory morphological decomposition. In other words, both are perceived to be prefixed with re-. But it does not follow from this that the two words will have the same types of representations in the mind and be linked in the same manner to other words containing the prefix re-. For most native speakers of English, the word refill is semantically transparent. The word research, despite the fact that, diachronically, it does contain the intensifying prefix re-, is not perceived to be semantically transparent. Thus, in one case morphological decomposition is semantically helpful; in the other it is not.

Issues of semantic transparency in stimuli such as refill and research bring to the foreground an important and complicating factor in understanding the relationship between morphological processing in a second language and in a first language. It seems very likely that if a speaker of English does not have a mental link between teacher
and teach, some aspect of morphological ability is not present. But what about butcher and butch, or corner and corn? There is considerable evidence that native speakers of English show access to the word corn as a result of seeing corner (e.g. Rastle, Davis, and New; Lehtonen, Monahan, and Poeppel; Lavric, Elchlepp, and Rastle). In other words, using a priming task very similar to the one we described for Silva and Clahsen, we find that, for native speakers of English, corner primes corn in the way that kindness primes kind. But does it follow that second language speakers should be seen as less developed in terms of English morphology if they do not show this effect? What seems likely to stand behind the effect itself is that globally adaptive processes (i.e. being able to automatically see inside any string that can be analysed as containing an existing stem and suffix) may have locally non-adaptive consequences. The word corner is not composed of corn+er, and evidence from long-term morphological priming suggests that the corner–corn effect may reflect peripheral access processes (Rueckl and Aicher), not central ones.

Derived words can be processed both as morphological structures and as whole words. Research on morphological processing has not yet established what the range of processing 'styles' may be for native speakers of a language (but see recent work by Kuperman and Van Dyke), so we also do not yet have a framework within which to understand what it might mean for a second language user to be 'less morphological' than a native speaker. But the prospects for finding out promise to be extremely revealing of the dynamics of morphological processing.
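A minimal affix-stripping sketch makes the corner-corn point concrete: a parser that blindly splits any string into a known stem plus a real affix will decompose corner just as readily as refill, whatever the semantics. The affix lists and mini-lexicon below are our own toy assumptions, in the spirit of (but not identical to) Taft and Forster's proposal.

```python
PREFIXES = ["re", "un"]
SUFFIXES = ["er", "ness"]
STEMS = {"fill", "search", "ceive", "corn", "kind", "teach"}

def strip_affix(word):
    """Return (stem, affix) if the word parses as prefix + known stem or
    known stem + suffix; the parse is blind to semantic transparency."""
    for p in PREFIXES:
        if word.startswith(p) and word[len(p):] in STEMS:
            return word[len(p):], p
    for s in SUFFIXES:
        if word.endswith(s) and word[:-len(s)] in STEMS:
            return word[:-len(s)], s
    return None
```

Note that the parser decomposes the transparent refill, the opaque research, and the pseudo-complex corner alike, which is exactly the globally adaptive but locally non-adaptive behaviour discussed above.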
Because associations among words linked by derivational morphology are related to the size of a person's multimorphemic vocabulary, which is most often smaller for non-native speakers than for native speakers, there are reasons to expect that experiments on the processing of derived words among second language users will be capturing a system in progress. Notwithstanding the comments above regarding Ullman's claim and the possibility that second language speakers may adopt a more declarative approach to the processing of derived words, it is also possible that second language speakers may turn out to be more morphologically oriented than native speakers. The reason is that they know fewer multimorphemic words and thus need to rely on morphological processing to retrieve the stem forms that aid comprehension. What may follow from this is that their need to employ morphological patterns to enhance comprehension and meaning creation will also create pressure toward a less constrained morphological system, which would explain the findings of decreased ability to detect and reject non-existent morphological patterns observed by Friedline. Thus, second language users could be found to 'do morphology' more than native speakers do and, at the same time, be less good at it.

Compounds

As has been the case in research on derivational morphology, research to date on compound processing in a second language has tended to link second language performance to the dominant questions in the psycholinguistic literature on compound processing among native speakers (Gagné and Spalding, this volume). These questions have included the following:

(a) How are the morphological constituents of compounds identified?

(b) How and under what conditions are the constituent morphemes of compound words activated as constituent morphemes?

(c) How and to what extent is there spread within the mental lexicon to morphologically related words or compounds? And, relatedly, to what extent does the semantic transparency of the compound make a difference?

Question (a), the question of how the morphological constituents of compounds are identified, brings to the foreground a very important difference between compound words and both inflectionally and derivationally affixed words: the major morphological constituents of a compound word are typically drawn from the open-class set of lexical items in a language. The number of roots in a language is always going to be larger than the number of affixes. So, whereas a language user can (and likely must) develop an efficient and effective way to identify prefixes and suffixes in a language, this strategy will not work well for identifying the morphological elements of a compound word. This makes the challenge of compound processing very different from that of prefixed- and suffixed-word processing. The automatic prefix-stripping hypothesis of Taft and Forster was based on the supposition that prefixes such as re- in English could be easily and reliably identified and separated from stems such as fill in the word refill. Similarly, the research noted above on automatic suffix processing and the corner–corn effect was based on the supposition that suffixes, as perceptually salient final lexical substrings, trigger morphological decomposition. In contrast, the identification of the constituents of a compound cannot depend on such a special store of identifiable morphological elements. Thus, grouping morphemes into compound structures can constitute a relatively complex operation.
Some of this complexity is evident in reports on projects that have attempted automatic compound parsing across languages (see the papers in Verhoeven et al.). Question (a) was addressed in a study of compound processing by Lemhöfer, Koester, and Schreuder, using a lexical decision task with Dutch compounds. Because Dutch compounds are typically written without spaces between the constituents, finding the constituent boundaries can be a challenge for the language user. The authors reasoned that second language users would make use of orthotactic parsing cues for morphological decomposition. To use an example from English, there is a sense in which a compound such as football is much easier to parse than compounds such as footrest, footstool, or foothold. In the case of football, the fact that 'tb' cannot be a morpheme onset or offset in English functions as an orthotactic cue that may make parsing somewhat easier, just as the presence of stool and tool as possible second constituents of footstool may complicate parsing, and the digraph 'th' in foothold may make access to the morpheme boundary more difficult. Lemhöfer, Koester, and Schreuder found that both native speakers of Dutch and second language speakers benefited from the presence of orthotactic cues, as compared to compounds without special orthotactic cues (equivalent to footrest in English). For native speakers, the orthotactic cue was helpful for long compounds (those containing more than ten letters), but not for short ones (those containing fewer than ten letters). For non-native speakers, all compounds benefited from the orthotactic cues. The authors concluded that native speakers decomposed only the long compounds, but that non-native speakers
adopted a morphological parsing strategy for all compounds. They saw this finding as counter-evidence to the claims of Clahsen et al. regarding the preference for declarative processing in the L2. Thus, although the study was designed to address Question (a), the question of how morphological constituents are identified, Lemhöfer, Koester, and Schreuder also claimed that it addressed Question (b), the question of how and under what conditions the constituent morphemes of compound words are activated as constituent morphemes.

Question (c) above, concerning links in the mental lexicon among compound constituents, is the most complex of all. Priming evidence has shown that links are formed in the mind among compounds that share morphological elements. So, the words ladybug and bedbug would be expected to be organized as a morphological family. But then what about words such as litterbug and humbug? These are cases of semantic opacity, but of a type that is distinct from the opacity associated with corner. The reason is that corner is not made up of corn+er, just as brother is not made up of the morphological elements broth+er. But there is a sense in which litterbug, clutterbug, and even jitterbug (referring to the American dance of the first half of the twentieth century) are compounds with the morphological head bug, despite the fact that none of these compounds refers to a type of bug. The phenomenon of semantic transparency in compound processing has been investigated in many studies. Sandra found effects of decomposition among transparent Dutch compounds but not among opaque ones. Libben et al. found decomposition effects for both transparent and opaque compounds in English; opaque words, however, showed greater processing difficulty (see Libben and Weber for a review).
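The orthotactic-cue idea discussed above, where a letter sequence that cannot occur inside a single morpheme signals a constituent boundary (the 'tb' in football), can be sketched as follows. The bigram set is our own illustrative assumption, not Lemhöfer, Koester, and Schreuder's materials.

```python
# Bigrams assumed (for illustration) to be impossible within a single
# English morpheme, so that they can only span a constituent boundary.
IMPOSSIBLE_WITHIN_MORPHEME = {"tb", "bt", "dk", "pk"}

def orthotactic_boundaries(compound):
    """Positions where an impossible within-morpheme bigram forces a split."""
    return [i for i in range(1, len(compound))
            if compound[i - 1:i + 1] in IMPOSSIBLE_WITHIN_MORPHEME]
```

For football the 'tb' bigram pinpoints the boundary, whereas footrest offers no such cue and must be parsed by lexical lookup alone, which is the contrast the study exploited.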
In a study of L processing among children who were native speakers of Chinese and who spoke English as a second language, Cheng, Wang, and Perfetti () reported an auditory lexical decision task that contrasted semantically transparent and opaque English compounds. In this task, participants heard both real-word and nonword stimuli. For each stimulus, they were asked to decide as quickly as possible whether the stimulus was a real word of English (i.e. perform a lexical decision). They found that transparent compounds (e.g. bedroom) showed greater lexical decision accuracy, over and above the effects of familiarity. They also found an effect of compound lexicality in L. So, compound words in English that were also compound words in Chinese were responded to more accurately. However, there was no interaction between L lexicality (i.e. whether it is a real word of the language) and semantic transparency in L. This led the authors to conclude that young bilingual speakers were sensitive to differences in transparency, but that there was constituent-based access for both compound types. This second conclusion draws on the fact that Chinese compounds accentuate the morphological component. If there were a difference in the extent to which processing of English transparent and opaque compounds were constituent-based, one would expect to see an interaction between the two variables. A very interesting study that also addressed Question (c) is reported by Mulder et al. (). Indeed, the article begins with the observation that is most relevant to Question (c): Reading a word is not just looking up this word in a dictionary. If it were that simple, word processing would be affected only by the number of words that share their beginnings and not by the word’s more complex relationships to other words in the lexicon on dimensions such as orthographic or semantic relatedness. It turns out that during reading a word activates not

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi
only its own representation in the mental lexicon, but many other lexical representations as well, via a system of relationships that are not necessarily strictly word-form related. Words are not isolated units, but parts of larger networks. (Mulder et al. : )

The study involved an English visual lexical decision task conducted with native speakers of English and Dutch speakers of English as a second language. The authors report that for both monolingual and L2 participants, there were effects associated with the frequency of the stimulus word, the number of words that share elements with the stimulus word (referred to as its primary family size), and even the number of words that share elements with words in the primary family of the stimulus word (referred to as its secondary family size). Mulder et al. () argue that this demonstrates that bilingual language users are sensitive to the broad morphological network in their second language. They also claim that the approach of Naive Discriminative Learning (Baayen et al. ) offers a framework within which to understand secondary family size effects and why they are linked to identical cognates for the Dutch-English participants.

. N 

..................................................................................................................................
The domain of L2 morphology functions as a microcosm of other linguistic domains. In order to understand and explain the data which have been reported over the past thirty-five years, we need to understand that we cannot invoke a privileged ‘morphological’ solution; we must consider storage mechanisms, representational properties, and processing algorithms. The interfaces with other representational domains (phonology, syntax, semantics, pragmatics) are critical, and the factors which influence performance are also critical. Similarly, when we look at the processing of language in non-native speakers, we need to bring to bear the full arsenal of data elicitation techniques and data analysis to get a picture of why people perform the way they do. As we noted at the outset of this chapter, the study of second language morphology had its beginnings in the domain of inflection with the observation that second language learners sometimes left morphemes out of their L2 production. This initiated the development of research that sought to account for the patterns observed and then, in a deeper and more comprehensive way, to describe and explain the patterns underlying second language morphological knowledge and performance. This is reminiscent of Chomsky’s () anecdote about Galileo, in which Chomsky notes that science began when Galileo stopped accepting that an apple fell because it was returning to its natural place. Why should it fall down and not up? Once one starts asking such questions, the world becomes full of complexities. Which second language learners are we talking about, omitting what sorts of things, in what types of activities? As our discussion above has shown, many of these questions have been addressed. Many more await the expansion of research on morphological processing in a second language. 
We are still very much at the beginning, particularly in the study of the representation and processing of derived words and compounds in a second language. But, in our view, the prospects are excellent for rapid advancement of knowledge.

Certainly, methodological diversity will play a large role in that advancement. In the experimental studies surveyed in this chapter, the dominance of the lexical decision task as a probe of lexical knowledge is very apparent. In many ways this is motivated by the advantages of using a simple methodology that has been well benchmarked in the psycholinguistic study of lexical processing in a native language. In recent years, however, there has been a very noticeable tendency in the field of lexical and morphological processing to ‘embrace complexity’ (Libben, Westbury, and Jarema ). Moving to the domain of statistical analysis, it is our expectation that the growing popularity of statistical techniques such as mixed effects modelling (Baayen ), which can consider both stimulus and participant factors within the same analysis, will enable the development of models that consider the complex ways in which factors associated with learners, stimulus factors, and situational factors interact. Developments such as these will serve to move the study of morphological processing in a second language to the core of the scientific study of lexical representation and processing. In our view, this is where it should be. Most of the world’s population speaks a second language. Most of the words in a language are multimorphemic. Thus the lexical systems of most humans need to be able to accommodate, within one brain and one processing system, the morphological differences that exist between languages. By understanding how this is accomplished and maintained, by understanding how it changes as a result of second language acquisition and ongoing development, we can gain access to the dynamic nature of human language representation and processing.


  ......................................................................................................................

    ......................................................................................................................

 . ´   . 

. O

..................................................................................................................................
Words show sound–meaning regularities, and this fact has generated huge amounts of research within the field of linguistics and the field of psycholinguistics. Within the field of linguistics, morphology gained prominence in the late 1960s and 1970s (see, for example, Marchand ; Adams ; Halle ; Aronoff ; Bauer ) as linguists sought to understand the interface between the lexicon, phonology, and syntax as well as the systems that allow for word formation (or, more precisely, lexeme formation). Since that time, questions about the nature of morphological units and the way in which morphological information interacts with other linguistic levels have provided rich sources of inspiration for research within a variety of disciplines. Research in linguistics concerning morphological structure has carried over to psycholinguistic investigations into the way in which language is represented and processed in the human mind. In this chapter, we focus on the various approaches that psycholinguistic theories have adopted with respect to morphological representation. In particular, we focus on theories that are relevant for visual word identification (for a review of production in general and acquisition in particular, see Blom, Chapter  this volume). After discussing the various theoretical approaches within psycholinguistics, we provide an overview of the empirical evidence that has been used to evaluate these competing theoretical approaches.

. B   

..................................................................................................................................
Psycholinguistic research focuses on the cognitive mechanisms and psychological representations that underlie human language. The scope of psycholinguistics is extremely broad in that it attempts to explain how language is acquired, represented, and processed (for overviews, see Altmann ; Traxler and Gernsbacher ). The area within psycholinguistics that is most central to the topic of morphological theory is the work on the mental lexicon. The mental lexicon refers to the mental representations of words including information about meaning, pronunciation, orthography, and syntactic characteristics. Questions concerning morphology are gaining interest among researchers studying the mental lexicon. In particular, there is ongoing and vigorous debate concerning the role of sub-word units in the processing of morphologically complex words (i.e. multi-morphemic words). That is, are such words processed as whole units or are they decomposed into smaller units? If they are decomposed into smaller units, what is the nature of these units—are the units morphological or are they only orthographic or phonological? To determine whether and how morphological information is represented and used in the mind, researchers have used a variety of research methods drawn from cognitive psychology and neuropsychology. In a typical experiment, the researcher manipulates a factor (or factors) that is hypothesized to be involved in language processing and then measures whether this manipulation does, indeed, affect ease of processing. The underlying assumption of psycholinguistic research is that mental processes take time and that one can draw inferences about the structure and processing of language by observing which variables affect the time it takes to perform a given task involving a particular word. 
For example, if morphological structure is explicitly represented within the mental lexicon, then experimental manipulations that affect the availability of constituents of that structure (i.e. morphemes) should yield differences in the time required to process a multi-morphemic word in a particular task, such as reading, naming, or lexical decision. In §., we provide an overview of some of the empirical findings from these various tasks that have been used to test competing theories.

. P    

..................................................................................................................................
Several theoretical approaches have been used to explain how morphologically complex words are represented and processed. Key differences among theories of word processing arise out of a more general debate concerning the balance of lexical storage versus computation (see also Blom, Chapter  this volume). Storage refers to the direct use of a single stored representation, whereas computation refers to the use of multiple pre-existing (stored) representations, to determine, for example, the meaning of a word. To illustrate,
the meaning of the word walker might be stored (i.e. have its own representation) or the meaning might be computed using the morphemes walk and er. A wide array of articles discuss this fundamental problem (e.g. Pinker ; Sandra ; Bertram et al. ; Bertram, Schreuder, and Baayen ; Libben ; Baayen ; Kuperman et al. ; Kuperman, Bertram, and Baayen ; Ji, Gagné, and Spalding ). Due to the different emphasis placed on storage and computation, the theoretical approaches differ substantially in terms of the extent to which morphological structure plays a role in the processing of morphologically complex words. Five general approaches have emerged. The first approach attempts to minimize computation and posits that all words are stored and recognized as whole units. In this approach, the morphemes of a complex word are not used to access the word nor do they play a large role in word processing (Manelis and Tharp ; Butterworth ; Lukatela, Carello, and Turvey ; Janssen, Bi, and Caramazza ). Thus, in this view, multi-morphemic words are represented and processed as though they were monomorphemic. A second approach posits that morphemes are represented in the mental lexicon and that complex words are decomposed into their constituent morphemes (Taft and Forster , ; Laudanna and Burani , ; Dell ; Frauenfelder and Schreuder ; Chialant and Caramazza ). This approach, however, still assumes that whole-word representations of complex words can be stored in the mental lexicon and, consequently, there is no psycholinguistic theory that is the direct equivalent of linguistic approaches, such as Distributed Morphology (e.g. Halle and Marantz ), that posit the use of morphemes but not of lexical units. 
Researchers have debated whether decomposition occurs before or after the access of the complex word and, consequently, decomposition theories vary in terms of when the representations of the morphemes become available (see Kuperman, Bertram, and Baayen , for a review). Sublexical theories posit that a complex word is decomposed into its morphological units very early in processing and that these representations are then used to access the representation of the whole word (e.g. Taft and Forster , ; Rastle, Davis, and New ; Taft ; Longtin and Meunier ; Fiorentino and Poeppel ; Rastle and Davis ). Supralexical theories also postulate the existence of a morphological decomposition mechanism. However, unlike the sublexical theories, supralexical theories posit that morphemes are accessed at a later stage, after the representation for the complex word has been accessed (Grainger, Colé, and Segui ; Giraudo and Grainger ; Diependaele, Sandra, and Grainger ). By the supralexical view, morphemes are used to capture the correspondence of form and meaning within sets of morphologically related words, but are not used as a basis for word access. Instead, the representations of morphemes result from multiple interactions between word forms and word meanings. Consequently, sublexical and supralexical theories differ in terms of their view of the nature of morphemic representations. In the sublexical view, the morphemic units are closely connected to the surface form of the constituents comprising a complex word, whereas in the supralexical view, morphemic units are more abstract and correspond more closely to base-lexemes rather than to surface forms. 
Another variation of decomposition theory, such as the one proposed by Diependaele, Sandra, and Grainger (), posits that two systems are in operation; one system supports sublexical activation in the early processing stages and the other system supports supralexical activation in later stages.

A third theoretical approach hypothesizes that complex words are accessed via the whole word representation or via the constituent morpheme representations and that those two routes operate in parallel (e.g. Caramazza, Laudanna, and Romani ; Frauenfelder and Schreuder ; Schreuder and Baayen ; Baayen, Dijkstra, and Schreuder ; Baayen and Schreuder , ; Diependaele, Sandra, and Grainger ). According to this approach, the time required to process a word is determined by which route finishes first. The success of a particular route depends on many factors including the whole-word frequency and constituent frequencies. A fourth theoretical approach hypothesizes that information about the whole word and about the constituents is activated and that these two sources of information interact. This multiple-route approach (e.g. Kuperman et al. ; Kuperman, Bertram, and Baayen ; see also Libben ) is a relatively recent development, and represents the culmination of a trend in which theories have been increasing both the number of representations and the amount of computation performed over those representations, and in this sense may be a rejection of the initial ‘storage versus computation’ framing of the problem. Finally, some theories have argued that morphemes do not exist as a separate level of representation and, instead, that there are direct mappings between the co-activation of formal (orthographic/phonological) and semantic units. A naive discriminative learning model, which is an incremental learning process based on Rescorla–Wagner equilibrium equations (Rescorla and Wagner ), is an example of this theoretical position (see Baayen et al.  for the application of this model to Serbian inflectional paradigmatic effects). In such a view, all morphemes are epiphenomenal and have no separate representations. That is, they have no real existence in the mental lexicon or in the language system, but are ‘analytic fictions’. 
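To make the discriminative-learning idea concrete, the following toy sketch (our own illustration, not the published Naive Discriminative Learning implementation; the cue and outcome names are hypothetical) applies the Rescorla–Wagner update rule to associations between form cues and meaning outcomes:

```python
# Toy Rescorla-Wagner learner: association weights between form cues
# (e.g. letter substrings) and meaning outcomes (e.g. lexemes).
# Illustrative only; all names and parameter values are hypothetical.

def rescorla_wagner(events, cues_all, outcomes_all, alpha=0.1, lam=1.0):
    """Train cue-outcome weights over a sequence of learning events.

    events: iterable of (cues, outcomes) pairs, each a set of strings.
    Returns a dict mapping (cue, outcome) -> association weight.
    """
    w = {(c, o): 0.0 for c in cues_all for o in outcomes_all}
    for cues, outcomes in events:
        for o in outcomes_all:
            # Total prediction for this outcome from all cues present.
            v_total = sum(w[(c, o)] for c in cues)
            # Maximum associative strength if the outcome occurred, else 0.
            target = lam if o in outcomes else 0.0
            # Every present cue is adjusted toward the prediction error.
            for c in cues:
                w[(c, o)] += alpha * (target - v_total)
    return w

# Toy experience: the cue 'corn' alone predicts CORN, but in corn+er it
# co-occurs with the unrelated meaning CORNER.
events = [({'corn'}, {'CORN'}), ({'corn', 'er'}, {'CORNER'})] * 200
weights = rescorla_wagner(events, {'corn', 'er'}, {'CORN', 'CORNER'})
```

On this picture no morpheme is ever stored: the cue–outcome weight matrix simply comes to encode which form patterns predict which meanings, which is the sense in which morphemes are ‘analytic fictions’.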
Naive discriminative learning models fit well with information-theoretic approaches to morphological complexity (such as Juola  and Nichols ) as well as work by Moscoso del Prado Martin, Kostić, and Baayen (), which views the complexity of a word as a function of the amount of information contained in the word and in its morphological paradigm. Thus far, there is no strong agreement about which particular theory is most viable. However, the bulk of the empirical evidence supports theories that posit morphological decomposition, in that several studies have demonstrated an influence of morphological structure on the processing of complex words (see §. for details). However, there is still much controversy surrounding the nature of decomposition. Whether decomposition occurs before or after the access of the complex word is still hotly contested (e.g. Diependaele, Sandra, and Grainger ; Longtin and Meunier ). Researchers have also debated whether decomposition occurs for all words or whether it occurs only for semantically transparent words, that is, words whose meaning can be derived from the meaning of the constituent morphemes (Giraudo and Grainger ). The question of whether morphological decomposition is sensitive to semantic transparency provides insight into the point at which decomposition occurs as well as into whether decomposition is affected only by the morphological surface form of a word. Researchers have also disagreed about whether decomposition is purely morpho-orthographic or whether semantic information is also used (Rastle, Davis, and New ; Beyersmann, Castles, and Coltheart ).

. R   

..................................................................................................................................

.. Overview
Although early work by Murrell and Morton () suggested that suffixes and free morphemes become available during the recognition of morphologically complex words, one of the main challenges for psycholinguists has been to isolate morphological effects (see Feldman  and Rastle and Davis  for discussions of this problem). This difficulty arises because words that overlap in terms of morphemes also tend to share similar phonology, orthography, and meaning. Even so, it has been possible to identify effects of morphology that are distinct from semantic, orthographic, and phonological effects (see, e.g., Bentin and Feldman ; Roelofs and Baayen ; Zwitserlood, Bölte, and Dohmes ; Assink and Sandra ; Baayen and Schreuder ; Frost et al. ; Gumnior, Bölte, and Zwitserlood ; Rastle and Davis ). In this section, we provide an overview of some of the relevant data that have been used to examine the nature of morphological processing.

.. Whole-word frequency and morpheme frequency effects
Word frequency refers to how often a word has been encountered in a given language. Various text corpora (e.g. CELEX, Baayen, Piepenbrock, and van Rijn ; SUBTLEX, Brysbaert and New ) have been used to obtain measures of word frequency, under the assumption that a word’s frequency in the corpora is proportional to its frequency in an average language user’s experience. Many studies have demonstrated that high frequency words are processed more quickly than low frequency words (Rubenstein and Pollack ; Scarborough, Cortese, and Scarborough ; Gernsbacher ). If the morphological constituents of a word are involved in processing, then one would expect to see some influence of the frequency of those constituents, as well. Indeed, there are several lines of research demonstrating that the processing of multi-morphemic words is influenced by morpheme frequency (Taft ; Bradley ; Burani and Caramazza ; Colé, Beauvillain, and Segui ; Baayen, Dijkstra, and Schreuder ; Alegre and Gordon b; Meunier and Segui ; Niswander, Pollatsek, and Rayner ). However, the influence of morpheme frequency is rather complex in that it appears to depend on word type (i.e. derived, inflected, or compound) as well as on whole-word frequency. Derived and inflected words, in addition to showing the usual effect of word frequency, show effects of stem frequency, in that responses to words with higher frequency stems are faster than responses to words with lower frequency stems. The influence of stem frequency has been found in a number of languages including English (Taft ), French (Beauvillain ), Italian (Burani and Caramazza ; Burani, Salmaso, and Caramazza ), Dutch (Baayen, Dijkstra, and Schreuder ), and Finnish (Lehtonen et al. ). 
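Corpus-based frequency norms of the kind just described are, at base, normalized token counts. As a toy illustration (ours, not drawn from the studies cited; real norms such as CELEX and SUBTLEX are computed over corpora of millions of tokens), a frequency-per-million measure can be derived as follows:

```python
# Toy illustration of per-million word frequency norms; real norms
# (CELEX, SUBTLEX) are computed over much larger corpora.
from collections import Counter

def per_million(tokens):
    """Map each word to its frequency per million tokens of the corpus."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {word: n * 1_000_000 / total for word, n in counts.items()}

corpus = ["the", "walker", "walks", "the", "dog"]
freqs = per_million(corpus)
```

Here the occurs in 2 of 5 tokens, so its per-million frequency is 400,000; normalizing by corpus size is what makes counts from corpora of different sizes comparable.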
The effect of stem frequency on the processing of derived words has been found not only for stems that are semantically transparent, but also for stems that are semantically opaque (Holmes and
O’Regan ; Schreuder, Burani, and Baayen ), suggesting that the stem frequency effects are morphological rather than semantic. More recent work has suggested that the way in which stem frequency influences processing time depends on the whole-word frequency. For low frequency words, the influence of stem frequency is facilitatory (i.e. words with higher stem frequency are processed more quickly). In contrast, for high frequency words, the influence of stem frequency is inhibitory (i.e. words with higher stem frequency are processed more slowly; see Baayen, Wurm, and Aycock ; Kuperman, Bertram, and Baayen ). Interestingly, the influence of word frequency also depends on the morpheme frequency. For example, Colé, Segui, and Taft () found an influence of whole-word frequency on the processing of derived words only when the word was more frequent than its root morpheme. Similarly, the relative frequency of a word and its morphemes affects the extent to which language users perceive a derived word as being decomposable (Hay ). As with derived and inflected words, compound words take less time to process when they contain high frequency morphological constituents than when they contain low frequency constituents (e.g. Burani, Salmaso, and Caramazza ; Andrews ). However, this effect (as was the case for derived and inflected words) depends on several other factors, such as the frequency of the whole word. In general, high frequency constituents (or constituents with large morphological families) are associated with faster response times for low frequency compounds, but are associated with slower response times for high frequency compounds (see Kuperman , for a summary). 
In addition, the effect of the frequency of the constituent appears to depend on the semantic transparency of the compound; Ji, Gagné, and Spalding () found that higher first-constituent frequency was associated with faster lexical decision times for semantically transparent compounds but with slower times for semantically opaque compounds. Questions about which constituent exerts the strongest influence on the processing of compound words have not yet been resolved. Although some studies on compound words have found that the frequency of both constituents matters (e.g. Zwitserlood ), others have only found frequency effects of the first constituent (e.g. Taft and Forster , Experiment ; van Jaarsveld and Rattink ), whereas still others have found effects only of the second constituent (Andrews ). Furthermore, the influence of a constituent’s frequency might be influenced by the other constituent’s frequency; Juhasz et al. (), for example, note that the influence of the frequency of the first constituent is stronger for compounds that have low frequency second constituents. This discrepancy about whether the first or second constituent produces the stronger effect of frequency appears to be due to the extent to which the task is more reflective of early versus later stages of processing. Some measures, such as lexical decision latencies, are more reflective of later processing stages, whereas others, such as the first fixation durations in eye-tracking studies, are more reflective of initial stages of processing. Earlier stages of processing are more influenced by properties of the first constituent, whereas later stages are more influenced by the second constituent (Andrews, Miller, and Rayner ; Duñabeitia, Perea, and Carreiras ). Which constituent exerts the greater influence in the recognition of the whole compound changes across the stages of processing. 
In sum, then, we might conclude, at least temporarily, that constituent morphemes are activated during the processing of morphologically complex words, and that overall performance on a complex word depends on the relationships among the frequencies of
the whole word and the constituents. In general, it appears that activation of the constituents improves the processing of low frequency complex words, but somehow interferes with the processing of higher frequency complex words. This could suggest that the constituents play multiple roles during the processing of complex words. For example, in the pre-lexical decompositional view, morphemes may initially play a role in which their activation facilitates access to the complex word form, but if the morphemes remain activated, they could then compete with the complex word, perhaps in terms of both the word form and semantic activation. Thus, when a person processes a low frequency complex word, the morphemes could be very helpful in accessing the word form, so that the overall influence is facilitatory. On the other hand, when a person processes a high frequency complex word, access to the word form should be relatively quick and easy, so that the primary effect of the morphemes’ activation is competitive, and thus the overall effect of the morphemes on complex word processing is inhibitory. A similar kind of explanation might account for some of the effects of semantic transparency in that activation of the constituents might be problematic for opaque compounds, but not for transparent ones (see Ji, Gagné, and Spalding  for results showing that lexical decision latencies for opaque compounds were slower than for frequency-matched transparent compounds).
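The race logic behind the parallel dual-route approach discussed earlier can be sketched in a few lines. The following is our own toy model with entirely hypothetical timing constants, not a fitted model from the literature: each route's finishing time shrinks with log frequency, and whichever route finishes first determines the response time, so high frequency constituents can rescue a low frequency complex word.

```python
# Toy "horse race" between a whole-word route and a decomposition route.
# All constants (base time, rate, assembly cost) are hypothetical and
# chosen only to make the qualitative pattern visible.
import math

def route_time(frequency, base=1000.0, rate=80.0):
    """Hypothetical retrieval time in ms: higher frequency -> faster."""
    return base - rate * math.log(frequency + 1)

def recognition_time(word_freq, constituent_freqs, assembly_cost=150.0):
    """The faster of the two parallel routes determines the response."""
    whole = route_time(word_freq)
    # The decomposition route waits for its slowest constituent,
    # then pays a cost for assembling the constituents.
    decomposed = max(route_time(f) for f in constituent_freqs) + assembly_cost
    return min(whole, decomposed)

# A rare compound with frequent constituents is rescued by decomposition;
# a frequent compound wins outright on the whole-word route.
low = recognition_time(2, [5000, 8000])
high = recognition_time(20000, [5000, 8000])
```

Note that a pure race like this predicts only facilitation from constituent activation; capturing the inhibitory effects for high frequency words described above is precisely what motivates the interactive multiple-route accounts.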

.. Morphological priming
To better understand the influence (if any) that morphemes exert during the processing of complex words, researchers have used a priming procedure in which a word (called the prime) is presented prior to a target word. By systematically varying properties of the prime (for example, by varying whether the prime is morphologically related to the target or whether the prime is only orthographically related to the target), researchers test hypotheses about what type of information is available during word recognition. Studies using a priming procedure (e.g. Forster et al. ; Grainger, Colé, and Segui ) have consistently revealed that exposure to a morphologically related word aids the subsequent processing of a complex word; for example, dark aids the processing of darkness. Similarly, several studies (Longtin, Segui, and Hallé ; Marslen-Wilson et al. ; Rastle et al. ) have shown that prior presentation of a morphologically related prime (e.g. friendly) aids processing of the stem (e.g. friend). Compound words produce similar findings (Sandra ; Zwitserlood ). For example, viewing kerkorgel ‘church organ’ speeded subsequent responses to orgel ‘organ’ and to kerk ‘church’. Likewise, the processing of a constituent aids the subsequent processing of a compound with that constituent (Lima and Pollatsek ; Inhoff, Briihl, and Schwartz ; Jarema et al. ; Libben et al. ). For example, prior exposure to tea facilitates the processing of teaspoon. Although there has been ample evidence demonstrating morphological priming, there are several factors that impact the size (and even existence) of the priming effect. First, the frequency of the prime word influences the amount of benefit produced by that prime; high frequency derived word primes were more effective than low frequency primes in facilitating the subsequent processing of a stem (Meunier and Segui ; Giraudo and Grainger ). 
Furthermore, some studies, such as Giraudo and Grainger () which was conducted with French words, have found that low frequency derived words (e.g. amiable) did not produce any benefit to the processing of the free morpheme (e.g. ami) relative to
orthographically controlled primes in which the target morpheme was not a morphological constituent (e.g. amidon). The impact of the prime’s frequency is influenced by word type. For example, Raveh () found differential priming effects from high frequency inflected and derived words, but equivalent priming effects from low frequency inflected and derived words; responses to targets were faster when preceded by a prime that was a derived word than when preceded by a prime that was an inflected word, but only when the primes were high frequency words. When the primes were low frequency words there was no difference in response time following the derived and inflected primes. Second, the priming effect is sensitive to the semantic transparency of the prime. In some cases, semantically related and unrelated primes produce similar benefits (see Rastle, Davis, and New ; Rastle and Davis ; Kuperman ). However, whether opaque primes facilitate the processing of the target is more controversial in that the results across various experiments have been inconsistent. For example, some studies using a cross-modal priming task (Marslen-Wilson et al. ; Longtin, Segui, and Hallé ) have found facilitation for morphological relatives that are semantically related (e.g. hunter-hunt), but not for morphological relatives that were semantically opaque (e.g. gingerly-ginger). Other studies that used masked priming (e.g. Rastle, Davis, and New ; see also Rastle and Davis  for a summary of previous studies on this issue) found equivalent priming from primes that were morphologically structured and were semantically related to the stem (e.g. darkness-dark) and from primes that had an opaque (i.e. pseudo) morphological structure (e.g. corner-corn) and from primes that were orthographically related but not morphologically structured (e.g. brothel-broth). Longtin and Meunier () even found facilitation from morphologically structured nonwords (e.g. 
darkism-dark), as did McCormick, Rastle, and Davis (). The priming effect is not affected by whether the nonword is semantically interpretable or not. These results suggest that morphological decomposition is not affected by lexicality or by semantic interpretability. In contrast, other studies have found that opaque primes were less effective than transparent primes. Diependaele, Sandra, and Grainger () also used cross-modal priming (with visual primes and auditory targets) and observed priming effects for both transparent and opaque words. In their study, the priming effect was larger for semantically transparent words than for opaque words. Similarly, Feldman et al. () found that responses to complex words (e.g. casualness) were faster following a semantically transparent prime (e.g. casually) than after a semantically opaque prime (e.g. casualty). Morris et al. () report the same pattern of results using ERP data. Although still controversial, the procedural and task differences across the various experiments appear to account for some of the discrepancies in the results concerning whether or not semantic transparency affects morphological decomposition (see Davis and Rastle ) and research by Duñabeitia et al. () (see also Paterson, Alcock, and Liversedge ) has suggested that morpho-orthographic effects are not fixed but instead are sensitive to the particular task being used. In sum, however, it appears that morphological decomposition occurs early in word recognition; opaque derivations do not produce facilitatory priming when the prime is fully perceptible (e.g. Drews and Zwitserlood ; Longtin, Segui, and Hallé ; Gonnerman, Seidenberg, and Andersen ; Rueckl and Aicher ), but do when the prime is not fully perceptible. The results concerning the role of semantic transparency for compound words are less controversial than the results concerning other types of complex words. Compound words

OUP CORRECTED PROOF – FINAL, 24/11/2018, SPi




are effective primes for their constituents, irrespective of whether the morphemes are semantically transparent. Zwitserlood () found that viewing kerkorgel ‘church organ’ speeded responses to kerk ‘church’, and viewing klokhuis ‘apple core’ (but literally ‘clockhouse’) speeded responses to klok ‘clock’. In contrast, she did not find any evidence of priming when there was only orthographic overlap, but no morphological constituency, between the prime and target: kerstfeest ‘Christmas’ did not facilitate the processing of kers ‘cherry’. Recognition of a compound also benefits from recent exposure to a morphological constituent (Monsell ; Sandra ). For example, rope aids the recognition of tightrope and butter aids the recognition of butterfly even though butter is semantically opaque in butterfly. Monsell also found facilitation for items that did not have a morphological structure (e.g. fur aided furlong). Finally, word type appears to impact the priming effect. As noted above, priming effects involving inflected and derived words appear to be sensitive to word frequency, semantic transparency, and priming task, while priming effects involving compound words appear to be largely insensitive to these factors, so far as we can determine.

.. Transposed-letter effect

Several studies (Forster et al. ; Perea and Carreiras ) have shown that monomorphemic words (e.g. judge) are more quickly recognized when they are preceded by a brief presentation of a nonword that contains a transposed letter pair (e.g. jugde) than when preceded by a control nonword (e.g. jupte). This finding has provided another basis for examining the impact of morphemes on word recognition. Most relevant is the finding that the ability of a nonword with transposed letters to activate its base word is reduced when the letter transposition occurs across a morpheme boundary, which suggests that morphemic structure does play a role in the processing of complex words. Christianson, Johnson, and Rayner () found that the naming time for compound words (e.g. sunshine) was facilitated relative to a control condition (e.g. sunsbine) when preceded by a prime for which the transposed letters occurred within a morpheme (e.g. sunhsine) but not when the transposed letters crossed the morpheme boundary (e.g. susnhine). These results extended to derived words (e.g. boaster) and pseudocompounds (i.e. monomorphemic words that happen to contain embedded words, such as mayhem, which contains may and hem but is morphologically simple). Duñabeitia, Perea, and Carreiras () also found a morphemic boundary difference for lexical decision times for prefixed and suffixed words in Spanish and Basque; there was a transposed-letter effect when the transposition occurred within a morpheme, but not when the transposition crossed the morpheme boundary. However, the data on this issue are not consistent because other researchers have found that transpositions across morpheme boundaries do produce facilitation (e.g. Perea and Carreiras ; Rueckl and Rimzhim ; see also Beyersmann, McCormick, and Rastle  for a discussion of data on this issue).
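The within- versus across-boundary manipulation used in these studies can be sketched computationally. The Python fragment below is our own illustration (the function name and the boundary convention are not taken from any of the cited studies): it generates all adjacent-letter transpositions of a compound and sorts them by whether they cross the morpheme boundary.

```python
def transposed_letter_primes(word, boundary):
    """Generate transposed-letter (TL) nonword primes for `word`.

    `boundary` is the index at which the second morpheme starts
    (e.g. 3 for 'sun|shine').  Returns two lists: transpositions that
    stay within a morpheme and those that cross the boundary.
    """
    within, across = [], []
    for i in range(len(word) - 1):
        if word[i] == word[i + 1]:
            continue  # swapping identical letters reproduces the word
        prime = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        # the swap crosses the boundary when it exchanges the last
        # letter of one morpheme with the first letter of the next
        (across if i == boundary - 1 else within).append(prime)
    return within, across

within, across = transposed_letter_primes("sunshine", 3)
# within includes 'sunhsine'; across is ['susnhine']
```

The contrast between sunhsine (a within-morpheme swap) and susnhine (a swap straddling the sun|shine boundary) is exactly the prime pair discussed above.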
Taken as a whole, the research on transposed letters suggests that morphemes become available during the processing of complex words. Subsequent work has examined whether decomposition of an affixed word into its constituent morphemes is triggered by the presence of affixes, free morphemes, or both. The evidence suggests that decomposition




 . ´   . 

of a complex word is triggered purely by the presence of affixes, rather than the presence of free morphemes. For example, Beyersmann, Castles, and Coltheart () examined the role of transposed-letter priming. Their test varied whether the nonword prime was suffixed (e.g. wranish–warn vs. wranel–warn). They found that priming occurred for the suffixed but not for the non-suffixed nonwords. These results suggest that morphological decomposition of affixed words is based on the early and automatic recognition of an affix.

.. Morpheme position effect

Given that morphemes are available during the processing of complex words, the question arises as to whether morpheme processing is position-specific or whether the activation of morphemes is position-independent. The answer to this question depends, to some extent, on the type of morpheme. Suffix identification appears to be strongly position-specific. For example, Crepaldi, Rastle, and Davis () found that morphologically complex nonwords (e.g. gasful) take longer to reject than orthographic controls (e.g. gasfil). However, when the constituents were reversed (e.g. fulgas vs. filgas), there was no difference. This result suggests that the presence of a suffix morpheme (e.g. ‑ful) only affects the processing of the target nonword when that morpheme appears in the word-final position. In other words, when a morpheme that is always used as a suffix appears in a different position (e.g. at the beginning of a word), it is not processed as a morpheme (i.e. it does not differ from other orthographic strings). In contrast, the processing of free stem morphemes in compounds appears to be much less strongly position-specific. For example, Zwitserlood, Bölte, and Dohmes () found that picture naming latencies were faster when the distractor words (presented seven or more trials earlier) were morphologically related to the picture and that this facilitation occurred regardless of the position of the shared morpheme. That is, both rosebud and tearose speeded the naming of a picture of a rose, suggesting that processing the stem morphemes, in either position, activated the full word corresponding to the stem morpheme (i.e. ____ + rose or rose + ____ activated the word rose). Importantly, it seems that the activation of the stem morpheme in either position can also activate that stem morpheme in the opposite position within a compound word. Several studies (Taft, Zhu, and Peng ; Shoolman and Andrews ; Crepaldi et al. 
) have found that reversed compounds take longer to reject in lexical decision experiments, compared to matched nonword controls. For example, moonhoney, formed by reversing the compound honeymoon, takes longer to reject than moonbasin. This finding suggests that even though they are in a different position compared to the compound, the stem morphemes activate the representation of the compound (i.e. honeymoon). The activation of the compound, in turn, makes it more difficult to respond ‘nonword’ to the reversed compound. In addition, Crepaldi et al. () found that masked presentation of the reversed compound aids the subsequent identification of the compound, but that this result did not occur for monomorphemic words (e.g. rickmave does not affect the processing of maverick). Hence, there is direct evidence that the stem morphemes can be activated by repetition, regardless of position, and that the effect does not arise solely by letter repetition. However, other evidence suggests that morphological processing might result in position-specific representations, even for stem morphemes, although not to the same





degree as bound morphemes such as suffixes. Duñabeitia et al. () conducted priming experiments in Basque and found facilitation from compounds that shared a constituent in a different position; responses to mendikate (mendi+kate, lit. ‘mountain+chain’, meaning ‘mountain range’) were faster when preceded by sumendi (su+mendi, lit. ‘fire+mountain’, meaning ‘volcano’) even though the shared constituent (mendi) was in a different position, replicating the effects discussed in the preceding paragraph. However, the facilitation from the different-position items was smaller than the facilitation produced by same-position items (e.g. lanpostu ‘workplace’ as a prime for lanordu ‘working hour’), which suggests that the identification of the morphemes is at least somewhat sensitive to position. Similar evidence comes from Gagné et al.’s () finding that responses to a compound (e.g. finger nail) were faster when preceded by a prime that had a shared constituent in the same position (e.g. finger cymbals) than when preceded by a prime that had a shared constituent in a different position (e.g. ring finger). Further evidence that the processing of free morphemes might be partially position-specific comes from results showing that response times to compounds are only affected by morphological family members in the same position (e.g. Gagné and Spalding ). Morphological family members consist of all derived and compound words in a lexical database that include the constituent. To illustrate, response times to doghouse were influenced by family members of the form dog + ___ and ___ + house (e.g. dogcatcher, boathouse), but not by family members of the form ___ + dog and house + ___ (e.g. bulldog, housework). De Jong et al. () also found an influence of positional family frequency (i.e. the frequency of all family members in which a constituent appears in the same position as the target compound) and positional family size (i.e. 
the number of members in which a constituent appears in the same position as the target compound) on the processing of Dutch and English compounds (see also Moscoso del Prado Martin et al.  for the role of family size and family frequency on Dutch and Hebrew complex words). In sum, the processing of stem morphemes in compounds is sensitive to the position of the morpheme, though not to the extent that is true of bound morphemes such as suffixes.
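The positional family measures just described can be made concrete with a small sketch. The function below is our own illustration (the mini-lexicon and its frequency counts are invented): it collects the positional family of a constituent from a frequency dictionary of compounds and returns the positional family size and summed positional family frequency.

```python
def positional_family(constituent, position, lexicon):
    """Positional morphological family of `constituent`.

    `lexicon` maps compounds written as 'first+second' to corpus
    frequencies; `position` is 0 for the first slot, 1 for the second.
    Returns the member dict, the positional family size (number of
    members), and the positional family frequency (summed frequency).
    """
    members = {word: freq for word, freq in lexicon.items()
               if word.split("+")[position] == constituent}
    return members, len(members), sum(members.values())

# invented frequency counts, for illustration only
lexicon = {"dog+house": 12, "dog+catcher": 3, "boat+house": 7,
           "bull+dog": 9, "house+work": 20}
members, size, frequency = positional_family("dog", 0, lexicon)
# the dog + ___ family is {'dog+house': 12, 'dog+catcher': 3};
# bull+dog does not count because dog occupies the other position
```

This mirrors the doghouse example in the text: only same-position family members (dogcatcher, boathouse) enter the measures, while bulldog and housework do not.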

.. Summary and implications of empirical findings

There is no consensus concerning either the nature or use of surface morphemes in language processing, and debate on these issues continues. However, several aspects of the empirical findings point to the involvement of subunits in the processing of morphologically complex words. Evidence indicating a difference between morphologically related and only orthographically related primes suggests that these subunits are morphological. However, different classes of morphemes (i.e. affixes and free morphemes) differ in their sensitivity to position, though free morphemes also show sensitivity to previous usage in a particular position. Similarly, different classes of morphologically complex words (i.e. inflected, derived, and compound words) may differ in exactly how the morphological subunits contribute to processing. Although there is substantial evidence for decomposition, the empirical evidence is problematic for all single-route theories (i.e. both for pure whole-word theories, on which morphemic representations play no role in lexical processing, and for pure decomposition theories) because, for the most part, the evidence suggests that both




 . ´   . 

the whole-word frequencies and morpheme frequencies are involved simultaneously (e.g. Juhasz ; Kuperman et al. ). In addition, the finding that there are interactions between whole-word and constituent properties (Niswander-Klement and Pollatsek ; Baayen, Wurm, and Acock ) is problematic not only for single-route models, but also for the dual-route models that posit two access routes that operate independently. Thus, at present, the balance of evidence supports the multiple-route approach over the single-route and independent dual-route approaches (though the direct mapping of form to meaning approach may also be able to account for the empirical evidence).
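The reasoning here can be illustrated with a toy regression (all coefficients and data are invented). If response times are generated with an interaction between log whole-word frequency and log constituent frequency, a model that fits both predictors plus their product recovers a non-zero interaction coefficient; on a simple additive reading of independently operating dual routes, that coefficient should be zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
log_wf = rng.uniform(0, 5, n)            # log whole-word frequency
log_cf = rng.uniform(0, 5, n)            # log constituent frequency
# invented generating equation with a genuine interaction term
rt = 700 - 20 * log_wf - 15 * log_cf + 4 * log_wf * log_cf

# ordinary least squares: intercept, both main effects, interaction
X = np.column_stack([np.ones(n), log_wf, log_cf, log_wf * log_cf])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
# beta recovers [700, -20, -15, 4]; the non-zero interaction means the
# effect of one frequency depends on the level of the other
```

In real studies the regression is of course fitted to noisy empirical reaction times; the point of the sketch is only that an interaction term is what distinguishes jointly operating routes from independent ones.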

. A   

The question of whether morphemes should be considered to be the structural units of words remains central to psycholinguistic theories of word processing. Future work should focus on developing more explicit connections between specific morphological theories and psycholinguistic theories. Psycholinguistic theories are often not based directly on morphological theories, but rather on general principles such as the relative balance of storage versus computation. Psycholinguistic theories would benefit from being able to test specific predictions made by morphological theories in terms of which aspects of the structure of a language are most relevant to language processing. Similarly, psychological plausibility can act as a constraint for morphological theories. Most importantly, a distinction needs to be made between the properties of a language and the properties of the language user. Libben () has argued that many of the variables used in psycholinguistics are not properties of the words, but rather properties of the language user. He uses the example of lexical frequency, which at first would seem to be a property of the word. Libben argues, however, that this variable is actually a measure of the state of the person. In other words, lexical frequency can be thought of as a type of ‘long-distance practice effect’ (: ) that changes the mental representations of the individual rather than representing a characteristic of the language that exists apart from the person. As we have described in this chapter, multiple theoretical approaches have arisen and cover a broad range of options concerning morphemes, including the possibility that morphemes are epiphenomenal and, as such, arise out of orthographic and semantic associations. 
Further investigation is required to determine which of these theories provides the most viable account of language processing. This will be challenging because the existing data point to a system that is highly dynamic, flexible, and adaptable. By contrast, the theories describe a much more stable system that, although capable of acquiring new words, does not alter any fundamental aspects of its structure or processing. Furthermore, the theories often assume that the effects discovered in experiments must be incorporated within the representation rather than within processing. For example, the presence or absence of a priming effect from a certain type of prime is often thought to indicate the presence or absence of a connective link within the mental lexicon between the prime and the target word. However, the dynamic nature of the language system suggests a greater need for the role of processing as a causal mechanism. Indeed, there has been an appeal for more, rather than less, processing within the language system (see Libben ; Kuperman ).





We have suggested that there needs to be tighter integration between linguistic theories of morphology and psycholinguistic theories of word processing. However, given the discussion in the preceding paragraph, this might prove to be challenging. Traditional morphological theories have adopted a view in which smaller units (e.g. morphemes) are used to form larger units (e.g. words) via sets of rules or via analogy. However, the psycholinguistic evidence reveals that there is no strict sequencing to the time at which these units become available. Indeed, it appears to be the case that morphemes and the words that they form are available simultaneously. If morphemes are used to construct words, then one would assume that they would necessarily be available prior to the word. The fact that morphemes and the words that they form seem to be simultaneously available raises questions about the nature and purpose of morphemes. If they are not strictly used to form words during the course of everyday language use (e.g. reading or speaking), then what role do they play? Although there have been some attempts to address the additional roles of morphemes (e.g. Roelofs and Baayen  argue that they serve as planning units during the production of complex words), there remains much work to be done on this issue. Finally, both morphological theories and psycholinguistic theories need to recognize the importance of the fact that the language system is embedded within the cognitive system and as such must somehow be integrated with the systems that regulate other cognitive functions (e.g. attention, categorization, perception, and memory). Further work is needed to determine the extent to which word processing is affected by these other subsystems.

. Further reading

Assink, Egbert M. H. and Dominiek Sandra (eds.). . Reading complex words: Cross-language studies. New York: Kluwer.
Gaskell, Gareth (ed.). . The Oxford handbook of psycholinguistics. Oxford: Oxford University Press.
Jarema, Gonia and Gary Libben (eds.). . The mental lexicon: Core perspectives. Amsterdam: Elsevier.
Traxler, Matthew and Morton Ann Gernsbacher (eds.). . Handbook of psycholinguistics. San Diego, CA: Academic Press.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi

  ......................................................................................................................

    ......................................................................................................................

 .    . 

. What is neurolinguistics?

Neurolinguistics is a research area bearing upon the relationship between the brain and language functions (Ingram ). In fact, the boundaries between psycho- and neurolinguistics are not sharp (Schiller )—both terms are used to describe scientific research on the relationship between linguistics, cognitive psychology, and the brain. Although all language functions ultimately reside in the brain, neurolinguistics rather than psycholinguistics emphasizes the neuroscientific aspect. For an overview of morphological theory and psycholinguistics we refer the reader to the chapter by Gagné and Spalding (Chapter  this volume). On the one hand, the term neurolinguistics is used to refer to research on language processing involving neuropsychological patients suffering from some sort of language disorder or impairment. Damage to many individual brain areas can result in language impairment. Spoken and written language (or gestures) can be independently affected, and production and comprehension can also be dissociated. Language impairment may result in different sorts of aphasias (Goodglass ), the best known being Broca’s and Wernicke’s aphasia; however, it has been suggested that these are rather coarse labels (e.g. Schwartz ) and that “we must develop a new, theoretically motivated typology of aphasia based on psycholinguistic principles” (Caramazza : ). On the other hand, the term neurolinguistics—rather than psycholinguistics—is used to indicate research on language processing that employs some sort of brain imaging or neural manipulation technique, ranging from electrophysiological (e.g. event-related brain potentials or ERPs) to hemodynamic (e.g. functional magnetic resonance imaging or fMRI) methods. In fact, neuroimaging research methodology is rapidly developing, and methods





such as positron emission tomography (PET), magneto-encephalography (MEG), near-infrared spectroscopy (NIRS), transcranial magnetic stimulation (TMS), or transcranial direct current stimulation (tDCS) are widespread. Neuroimaging research may be carried out with patients, but is generally conducted with healthy participants. In fact, whenever the neurological substrate and its relation to language processing is at issue, as is the case with neuropsychological patients suffering from structural brain damage or with imaging methods measuring the function (or activity) of brain tissue, we deal with neurolinguistics. These two research traditions developed relatively independently of each other, with researchers publishing in different journals and presenting their work at different conferences. We will try to report work and relevant findings from both areas in this chapter, that is, from healthy speakers as well as language-impaired individuals. Some models of language processing, for instance on speech production, derive from the neuropsychological tradition (such as Caramazza’s Independent Network model; Caramazza ) whereas others derive from the tradition of neuroimaging (such as Indefrey and Levelt’s model of language production; Indefrey and Levelt ; Indefrey ; strongly influenced by Levelt, Roelofs, and Meyer ). Due to these differences in source data, models differ as well. In principle, however, all types of model should be able to account for different types of data. Regarding electrophysiological and hemodynamic data, we will mainly refer to ERP and fMRI work here. Electroencephalography (EEG), and derived from it ERPs, can measure brain activity—electrical currents produced by synaptic activity—with millisecond (ms) temporal resolution, while its spatial resolution is less fine-grained due to the inverse problem (Grech et al. ), although the underlying sources can be approximated with the help of electrical dipole modeling. 
ERPs consist of a number of components, negative (such as the N400, ELAN, and LAN) or positive (such as the P600) in polarity, which are characteristic of certain linguistic processing responses. For instance, the N400, first described by Kutas and Hillyard (, ), is a voltage peak of negative polarity in the brain that reaches its amplitude maximum around 400 ms after the onset of the stimulus word. Every word yields an N400 component; however, when comparing a contextually appropriate with a non-appropriate word, the difference in N400 amplitude is referred to as the N400 effect (see also Figure .). While it was initially believed that the N400 is especially sensitive to semantic features of words, it is now thought that this component reflects the ease of integrating words into context. The P600 effect, initially also known as the syntactic positive shift (SPS; Hagoort, Brown, and Groothusen ), is a relatively late, syntax-related ERP component with positive polarity. It is observed as a consequence of violations of syntactic structures or preferences (so-called garden-path structures) and difficulty of syntactic integration (e.g. Kaan et al. ). The Early Left Anterior Negativity (ELAN) is another component with negative polarity, usually peaking between  and  ms, which is evoked by syntactic phrase structure violations (Neville et al. ; Friederici, Pfeifer, and Hahne ) and reflects highly automatic processes of initial structure processing. More interesting in light of the topic of the current chapter is the LAN (Left Anterior Negativity) component, which occurs somewhat later (i.e. between  and  ms) and reflects morpho-syntactic aspects of sentence processing, such as subject–verb agreement violations (Gunter, Stowe, and Mulder ; Penke et al. ).
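How such an ERP effect is quantified can be sketched with synthetic data (one channel; all amplitudes, noise levels, and trial counts are invented, and the sketch illustrates only the averaging logic, not any cited study). Epochs from each condition are averaged, and the condition difference wave is inspected for its peak:

```python
import numpy as np

fs, n_trials, n_samples = 1000, 50, 800      # Hz; trials; samples (0-799 ms)
t = np.arange(n_samples) / fs * 1000.0       # time axis in ms
rng = np.random.default_rng(1)

def simulate_epochs(peak_uv):
    """Simulated trials with a negative-going peak near 400 ms plus noise."""
    wave = peak_uv * np.exp(-((t - 400.0) ** 2) / (2 * 60.0 ** 2))
    return wave + rng.normal(0.0, 1.0, (n_trials, n_samples))

erp_appropriate = simulate_epochs(-2.0).mean(axis=0)    # fits the context
erp_inappropriate = simulate_epochs(-8.0).mean(axis=0)  # violates the context
difference = erp_inappropriate - erp_appropriate        # the 'N400 effect'
peak_latency_ms = t[np.argmin(difference)]              # close to 400 ms
```

Averaging over trials suppresses the noise so that the difference wave, and hence the effect's latency and amplitude, becomes measurable: the same logic underlies the N400, P600, ELAN, and LAN effects described above.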




 .    . 

. N    


.. Morphological processing models

Morphologically complex (as opposed to simplex) words are word forms that consist of more than one meaning-bearing element, that is, more than one morpheme. Morphologically complex word forms can be derived or inflected words, or they can be compounds. Derivational morphemes are affixes that are added to a simplex form to change its meaning (e.g. un + happy → unhappy) or its grammatical function by changing its syntactic word class (e.g. happy + ness → happiness). Inflectional morphemes, in contrast, are affixes that do not change the meaning or syntactic word class of a word, but carry grammatical meaning and serve to mark grammatical agreement (e.g. I buy a book vs. She buys [3rd person singular ‑s] two books [plural ‑s]). A compound consists of more than one simplex morpheme (or stem), either of the same (e.g. paperback) or different syntactic word classes (e.g. hardcover). Important processing questions concern the way in which morphologically complex word forms such as books [book + plural ‑s morpheme] or worked [work + past tense ‑ed morpheme] are processed by our neurolinguistic system. How are complex words represented in the mental lexicon and how are they accessed, that is, as full forms (e.g. books, worked) or via their constituent morphemes (e.g. book + s, work + ed)? Psycholinguists have proposed different answers to these questions. Most work has been carried out in the area of language comprehension (see Gagné and Spalding, Chapter  this volume). Butterworth (, ), for instance, proposed that complex words are listed as entire word forms (so-called full-listing models). For instance, morphologically related word forms such as work, works, worked, working, workable, worker, workaholic, homework, etc. are all fully listed and represented by separate entries in the lexicon. Morphology does not play a significant role in those models. 
However, the plausibility of full-listing models becomes questionable in the light of agglutinative languages, in which many affixes attach to the base morpheme to express syntactic or semantic properties (Waksler ). In contrast, other scholars have suggested separate access of individual morphemes, for instance, in compounds (so-called full-parsing or decompositional models; e.g. Rastle and Davis ; Taft and Forster , ; Taft ). That means that, for example, derivations such as workable may not be stored as holistic units. Instead, the individual morphemes work and able would be accessible to the processing system. Complex words would have to be decomposed into their constituents before the word stem could be accessed. This view is supported, for instance, by data from experiments manipulating frequency, that is, higher constituent frequency is associated with faster naming (Bien, Levelt, and Baayen ; see Janssen, Bi, and Caramazza  for contrasting results). Finally, dual-access models have been suggested, starting with Frauenfelder and Schreuder (), which postulate two distinct access routes to complex words, that is, a direct route which is followed, for instance, to access irregular past tense forms and an indirect route to access regular complex words and decompose them into their underlying constituent morphemes (Pinker ; Isel, Gunter, and Friederici ).
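The competing architectures can be contrasted in a toy sketch (an invented mini-lexicon; no claim is made about any specific model's coverage). A full-listing store recognizes only whole forms, a parsing route strips a suffix and looks up the stem, and a dual-route system tries direct look-up first and falls back on parsing:

```python
FULL_LIST = {"work", "worked", "works", "swore"}   # stored whole forms
STEMS = {"work", "jump"}
SUFFIXES = {"ed", "s", "ing"}

def full_listing(word):
    """Full-listing account: recognition is whole-form look-up only."""
    return word in FULL_LIST

def decompose(word):
    """Full-parsing route: strip a suffix, then look the stem up."""
    for suffix in SUFFIXES:
        stem = word[: -len(suffix)]
        if word.endswith(suffix) and stem in STEMS:
            return stem, suffix
    return None

def dual_route(word):
    """Direct look-up for stored (e.g. irregular) forms; parse otherwise."""
    if word in FULL_LIST:
        return ("direct", word)
    parsed = decompose(word)
    return ("parsed", parsed) if parsed else None

# 'swore' is found directly, 'jumped' only via decomposition, and the
# full-listing account alone fails on the novel regular form 'jumped'
```

The sketch makes the agglutination problem mentioned above concrete: a pure full-listing store must enumerate every inflected form, whereas the parsing route handles any stem–suffix combination it has not stored.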





The production of morphologically complex words has been less investigated. In the language production model by Levelt and colleagues (Levelt, Roelofs, and Meyer ; but see also Caramazza ; Dell ), the encoding of meaning (conceptual-semantic processing) precedes the encoding of form (phonological-phonetic processing). However, models diverge when it comes to the exact time-course of information flow from conceptual preparation to phonological-phonetic encoding and finally the articulatory motor movements necessary to produce speech. Levelt’s model assumes that semantic concepts activate a number of lexical nodes; however, subsequently only one such node can be selected and further encoded at the phonological level. Whether morphologically complex words are stored and accessed as wholes is not completely clear. In fact, a decomposed representation of, for instance, compound words or inflected words would avoid a duplication of the representation of the constituents. In fact, there is some evidence for this position from production naming studies manipulating lexical frequency; naming latencies are predicted by the frequencies of the constituents but not the frequency of the compound (Bien, Levelt, and Baayen ). Additional evidence for a dual-route model comes from studies on the neurobiology of morphological processing. For instance, Leminen et al. () found in a combined EEG/MEG study that the processing of inflected words activated more strongly left superior/middle temporal cortices, whereas this left-hemispheric activity was not found for derived words. Derived words, in contrast, activated right superior temporal areas. Interestingly, a recent morphological priming ERP study on Spanish inflection and derivation reported electrophysiological differences for these two word types as well (Alvarez et al. ). 
Moreover, Bozic and Marslen-Wilson () argue that morphologically complex words created by rule-based combinations of morphemes such as inflected words (e.g. work-ed, jump-s) engage a left-lateralized fronto-temporal subsystem, specialized for grammatical computations. In contrast, lexicalized combinations such as those found in derived words (e.g. brave-ly, warmth) engage a bilateral subsystem to access whole-word, stem-based lexical items. That is, the distinction between inflection and derivation may have a neurobiological processing correlate. As we will see, the processing of compounds may activate still other underlying neural areas.

.. Comprehension of morphology

Processing morphological structure plays an important role in day-to-day language use, underpinning successful language production and communication. For instance, verb inflections are interesting as they can be regular (walk → walked) or irregular (swear → swore). Regular and irregular verb inflections have been extensively studied from the late s to the present day, with a particular interest in whether both are processed by similar or distinct systems in the brain. In the following, we present a selection of electrophysiological (EEG/ERP) as well as neuropsychological studies (with patients) which have investigated the comprehension processes in the brain related to morphology.

... Electrophysiological studies on morphological violations

One way to investigate how morphological (de)composition in the brain takes place is to observe how the brain reacts when faced with uncommon situations. One often-used




 .    . 

method to investigate this is the morphological violation paradigm (e.g. Penke et al. ; Rodriguez-Fornells et al. ). In this paradigm, correct and incorrect forms of particular morphological combinations (e.g. verbs plus their suffixes) are embedded into lists, sentences, or short stories, and by observing specific event-related brain potentials one can determine whether or not the brain considers particular combinations as violating morphological rules. To distinguish between different models of morphological processing, Penke et al. () employed the morphological violation paradigm to investigate how the brain responds to correct and incorrect forms. This study used both regular (ending in ‑t; such as getanzt ‘danced’) and irregular (ending in ‑en; such as geladen ‘loaded’) German participles. Participants were presented with correct and incorrect participle forms while recording their brain activity using electroencephalography (EEG). Penke et al. conjectured that if all morphological forms are simply stored, no differences should be found between violations for regular and irregular forms, that is, they should show similar event-related potentials (ERPs). Alternatively, if all forms are decomposed into their stem and affix regardless of their regularity, once again similar brain responses should be found for both regular and irregular violations. However, Penke et al.’s results showed that only incorrect irregular participles (e.g. *aufgeladet) produced a so-called LAN effect (a left fronto-temporal negativity) reflecting processes involved in morphological structure building and, remarkably, there was no difference observed for incorrect regular participles. Penke et al. therefore concluded that regularly inflected words are processed differently from irregularly inflected words. 
In other words, their results favor a dual-mechanism model in which regularly inflected words are decomposed into their stems and affixes and irregularly inflected words are processed by accessing full-form entries stored in the lexicon. Rodriguez-Fornells et al. () assessed the generalizability of Penke et al.’s () ERP results to Catalan (a Romance language). The advantage of studying Catalan is that verb stems in this language are further decomposable into a root and a thematic vowel (indicating conjugation class), simultaneously allowing for the study of stem formation and affixation during morphological encoding. This extends the scope from concatenation to stem alternation, thereby permitting generalizations across the functional roles that particular ERP components (e.g. the LAN, P600, and N400) play during morphological encoding. By embedding correct and incorrect forms of stems and participles in short stories, Rodriguez-Fornells et al. () found left-lateralized negativities (i.e. LAN effects) for stem violations but not for incorrect participles. Conversely, a P600 effect was found for both types of violation (an effect not obtained in German by Penke et al. ). They speculated that the absence of a LAN effect for incorrect participles might have its origin in the fact that the incorrect irregular participles used in the Catalan study had an incorrect stem and were therefore less obviously related to their correct forms. Consequently, violations were less salient in the Catalan than in the German stimuli used in earlier studies. The occurrence of a P600, however, was not surprising, as the P600 is usually involved in the re-analysis of a whole sentence (which the comprehension task in the study required). According to Rodriguez-Fornells et al., the absence of the P600 in Penke et al.
() may have been due to the fact that their analysis time window was too short (the P600 is a late component), that word lists were used (avoiding the re-analysis which typically evokes a P600), and/or that words appeared at the end of the sentence (a position which typically elicits a positivity that could have masked the effect). Importantly, however, Rodriguez-Fornells et al. () concluded that
the LAN indeed selectively reflects processes involved in morpho-syntactic structure building and, corroborating Penke et al. (), they established that a dual mechanism involving lexical memory for irregular items and rule-based processes for regular items seems to apply to both inflectional and stem-forming processes. Smolka et al. (), however, also using ERPs, reached a different conclusion. As stated earlier, previous research on regular and irregular (past) tense forms of verbs supported the existence of two distinct systems, that is, a system which stores only the base (for regularly inflected verbs) and another system which stores the whole word form (for irregularly inflected verbs). Smolka et al. (), however, pointed out several inconsistencies between the paradigms used in previous violation studies and other (repetition priming) studies (e.g. Rodriguez-Fornells, Münte, and Clahsen ), that is, patterns of dissimilar effects in violation paradigms but comparable effects in priming paradigms (see Smolka et al. : , Table ). Additionally, Smolka et al. () pointed to several studies demonstrating ‘graded’ brain responses depending on verb regularity (e.g. Justus et al. ), which would suggest a single-system account. To decide between a categorical (dual-system) account and a more continuous single system involved in word processing, they reported data from a visual priming experiment on German participle formation. Five conditions were constructed: (1) identity (lerne/lerne ‘(I) learn’); (2) participle (lerne/gelernt ‘(I) learn/learnt’); (3) semantic associate with the same inflection (lerne/büffle ‘(I) learn/(I) cram’); (4) semantic associate in participle form (lerne/gebüffelt ‘(I) learn/crammed’); and (5) unrelated (lerne/trockne ‘(I) learn/(I) dry’). The crucial manipulation concerned the participle condition for different targets.
Thus, for a target such as backe ‘(I) bake’ the participle is gebacken (regular stem but irregular suffix, i.e. semi-irregular), whereas for a target such as trinke ‘(I) drink’ the participle is getrunken (irregular stem and irregular suffix, i.e. fully irregular). As a dichotomous system predicts similar effects regardless of the degree of irregularity, graded effects (manifested in, for instance, the amplitude, topography, or latency of the ERP) would be difficult for it to explain. Smolka et al. () indeed showed graded patterns dependent on verb regularity, both behaviorally and in the ERP data. That is, regular verbs produced the largest and most widely distributed effects, irregular verbs produced the smallest and least widely distributed effects, and semi-irregular verbs produced an effect and distribution in between. These results argue against a dichotomous (regular/irregular) explanation and favor a continuous system for processing verbs in German.
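The logic of the violation paradigm, comparing ERPs to incorrect and correct forms and summarizing their difference within a latency window, can be sketched as follows. This is a minimal illustration, not the authors' actual analysis pipeline: the 300–500 ms window, the noise level, and the 2 µV effect size are invented for the example.

```python
import numpy as np

def mean_amplitude(erp, times, window):
    """Mean ERP amplitude (microvolts) within a latency window (ms)."""
    lo, hi = window
    mask = (times >= lo) & (times < hi)
    return float(erp[mask].mean())

def violation_effect(erp_incorrect, erp_correct, times, window):
    """Summarize the difference wave (incorrect minus correct) over the
    window: a clearly negative value indicates a LAN-like negativity."""
    return (mean_amplitude(erp_incorrect, times, window)
            - mean_amplitude(erp_correct, times, window))

# Illustrative data: 1 ms resolution from -100 to 800 ms post-onset.
times = np.arange(-100, 800)
rng = np.random.default_rng(0)

def toy_erp(has_negativity):
    """Flat noisy ERP, with an optional 2 microvolt negativity at 300-500 ms."""
    erp = rng.normal(0.0, 0.1, times.size)
    if has_negativity:
        erp[(times >= 300) & (times < 500)] -= 2.0
    return erp

# Hypothetical pattern, as reported by Penke et al.: violations of
# irregular participles produce a negativity, violations of regular
# participles do not.
irregular = violation_effect(toy_erp(True), toy_erp(False), times, (300, 500))
regular = violation_effect(toy_erp(False), toy_erp(False), times, (300, 500))
```

Under this summary, a dichotomous account predicts a violation effect for one class only, while a graded account of the Smolka et al. kind predicts a semi-irregular effect intermediate in size between the two.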

... Neuropsychological studies on morphological violations

Another way to assess how morphological processing in the brain takes place is by studying patients who have neurological impairments such as dyslexia or aphasia. One particular avenue of research concentrates on a condition known as deep dyslexia, in which morphological errors are quite prominent (Coltheart, Patterson, and Marshall ). Deep dyslexia is an acquired disorder, which means that the patient suffering from it was able to read normally before the brain trauma occurred. The disorder is usually characterized by multiple reading difficulties. People with deep dyslexia usually have great difficulty processing non-words (e.g. they are unable to read *toble) and function words (reading in instead of at), and they make frequent visual (reading whisk as wheel) and semantic errors (reading cousin instead of father). Importantly, they also show poor
performance in reading morphologically complex words (e.g. reading worker instead of working). The latter indicates that these patients still seem able to decompose words into their constituent morphemes (i.e. stem + affix) but have difficulties affixing particular (bound) morphemes (such as ‑y, ‑ness, ‑er, ‑ity, and ‑ing). Originally, affixation errors such as these were indeed taken to reflect a separate component within the reading process which, when damaged, would yield morphological errors (Morton and Patterson ; Job and Sartori ). However, subsequent research questioned whether these errors were not in fact semantic or visual in nature (Badecker and Caramazza ; Funnell ). Badecker and Caramazza (), for instance, argued that many errors which had been classified as morphological could also be explained by the concreteness of the words involved (concrete vs. abstract words). They concluded that it was difficult to settle whether a separate morphological level had been damaged or whether the deficits were due to visual/semantic complications. Similarly, Funnell () investigated this issue by examining the imageability and frequency of both the intended words and the incorrectly read words. If affixation errors were genuinely morphological in nature, they should only be observed with truly affixed words (e.g. worker) but not with pseudo-affixed words (e.g. corner) or embedded words (e.g. fall in fallacy). However, Funnell () found, for instance, that the word mastery would be read as master and the word salty as salt. Although such errors would previously have been classified as morphemic, Funnell () noted that what the patient read also tended to be the more imageable word (i.e. both master and salt are more imageable than mastery and salty). Importantly, these errors also appeared in pseudo-suffixed words (e.g.
treaty would be read as treat) and in embedded words (e.g. fallacy would be read as fall), for which patient JG would usually produce the (apparent) stem of the word. The difference in error rates between pseudo-affixed words, embedded words, and truly affixed words was (although numerically larger for truly affixed words than for the other categories) not statistically significant. It was therefore concluded that morphological errors produced in reading aloud are likely caused by the same underlying factors, such as imageability and word frequency, that constrain reading performance on non-affixed words (Funnell ). Additionally, it should be noted that when faced with pseudo-affixed words, our processing system nevertheless tries to impose some form of morphological structure on them (e.g. Longtin, Segui, and Hallé ). However, Rastle, Tyler, and Marslen-Wilson () argued that particular aspects of Funnell’s () study required further validation. For instance, the three groups (truly affixed, pseudo-affixed, and embedded words) were not matched for imageability and frequency between the (perceived) stem and the correct word. Consequently, Rastle and colleagues re-investigated the matter in a different deep dyslexic patient (DE). DE was a -year-old individual who had had a motor accident at age , which resulted in brain trauma and, as a consequence, severe language disabilities. Rastle, Tyler, and Marslen-Wilson () presented  genuinely suffixed (e.g. childish),  pseudo-suffixed (e.g. beaker), and  embedded words (e.g. addict), plus  filler words, to which DE had to respond with a lexical decision. Importantly, the three groups (genuine, pseudo, embedded) were closely matched on the aforementioned factors, frequency and imageability, for both the whole word and the stem separately. DE was tested in two sessions and made numerous errors spread over semantic (e.g.
lotion–cream), visual (e.g. haggle–haggis), and morphological (e.g. childish–child)
errors (and their combinations). In addition, he produced various morphologically complex non-words (e.g. goddess–*godery). In contrast to the earlier results of Funnell (), the data of Rastle, Tyler, and Marslen-Wilson () demonstrated that the genuinely suffixed words yielded significantly more stem errors than the other conditions (i.e. pseudo-suffixed and embedded words). They therefore concluded that these errors were not simply a form of visual error (which would have involved the addition or subtraction of letters to obtain a word higher in frequency or imageability) but rather reflect that the lexical system has a form of organization which takes the morphological structure of complex words into account.
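The comparison logic behind these patient studies, contrasting stem-error rates across genuinely suffixed, pseudo-suffixed, and embedded items, can be made concrete with a toy tabulation. The responses below are invented for illustration and are not DE's actual data; the point is only the shape of the prediction.

```python
# Hypothetical (target, response, item_type) triples for a single patient.
# Under a genuinely morphological deficit, stem-type errors should
# concentrate in the "genuine" category.
responses = [
    ("childish", "child",   "genuine"),
    ("worker",   "work",    "genuine"),
    ("corner",   "corner",  "pseudo"),
    ("beaker",   "beak",    "pseudo"),
    ("addict",   "addict",  "embedded"),
    ("fallacy",  "fallacy", "embedded"),
]

def stem_error_rate(responses, item_type):
    """Proportion of trials of a given type on which the patient
    produced something other than the full target word."""
    trials = [(t, r) for t, r, k in responses if k == item_type]
    errors = sum(1 for t, r in trials if r != t)
    return errors / len(trials)

rates = {k: stem_error_rate(responses, k)
         for k in ("genuine", "pseudo", "embedded")}
```

Funnell's account predicts comparable rates across the three categories (driven by imageability and frequency), whereas the Rastle et al. data correspond to a reliably higher rate for the genuine category only.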

... Electrophysiological studies on the comprehension of derived words

When studying complex word derivations, scholars are typically interested in whether particular words are parsed on the basis of other existing words (e.g. is loneliness parsed by accessing the word lonely, or is it a separately stored representation?). A typical way of studying this is with overt priming paradigms, in which a particular (prime) word is shown and response time and accuracy to a subsequent target are measured. If the two words share a morphological relationship, response latencies to the target are faster than when they do not. In this way, it has been shown that the decomposition of a derived word depends on its semantic relationship with the base word. For example, a prime like casualty does not speed up access to the target casual (i.e. casualty is not parsed as casual + ty), but the prime casually does, as it shares a semantic relationship with casual and accesses the base morpheme (the target) via casual + ly (see Tyler, Marslen-Wilson, and Waksler ). However, as response latencies represent only the endpoint of the cognitive processes underlying them, electrophysiological measures allow a peek at what happens before the response is made. For a comprehensive overview of ERP studies (between –) investigating complex word derivations, see Smolka, Gondan, and Rösler (: Table ). In contrast to the results above, Smolka, Gondan, and Rösler () investigated semantically compositional derivations using the EEG/ERP technique. In particular, these authors were interested in the time course of morpho-lexical processing for German verbs: when different processing stages (e.g. phonological-form, semantic, and morphological processing) occur and how any interaction between stages takes place. In an overt visual priming experiment, ERPs were obtained for target verbs (e.g.
sprechen ‘to speak’) which were preceded by purely semantically related verbs (reden ‘to talk’), morphologically and semantically related verbs (ansprechen ‘to address’), morphologically related but semantically unrelated verbs (entsprechen ‘to match’), orthographically related verbs (sprengen ‘to blow up’), or unrelated verbs (biegen ‘to bend’). Looking at the N400 (an ERP component occurring a few hundred milliseconds after target onset that is typically attenuated by a semantic relationship between prime and target), Smolka, Gondan, and Rösler () found that this component was strongly attenuated for semantically related verbs (reden–sprechen vs. biegen–sprechen; in line with previous studies), indicating automatic activation spreading through the semantic network. Additionally, semantically transparent derivations showed priming (e.g. ansprechen–sprechen vs. biegen–sprechen), but remarkably, semantically opaque derivations also showed N400
attenuation (e.g. entsprechen–sprechen vs. biegen–sprechen). Moreover, Smolka, Gondan, and Rösler () reported that the N400 attenuation for opaque derivations was as strong as that for semantically transparent derivations, in contrast to earlier studies which did not obtain any priming in their opaque conditions (e.g. Kielar and Joanisse ). These findings indicate that the morphological structure of German verbs refers to the base form irrespective of semantic composition. In other words, although entsprechen ‘to match’ is semantically unrelated to sprechen ‘to speak’, it does seem to access the latter verb as its base form (i.e. it is construed as ent + sprechen). This surprising ERP result awaits replication and verification but is nevertheless quite informative for the ongoing debate on how morphological derivations are construed.
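The claim that entsprechen accesses sprechen as its base regardless of meaning amounts to a purely form-based decomposition step. A toy prefix-stripping sketch illustrates the idea; the mini-lexicon and prefix list are invented minimal examples, not a real German lexicon or a model endorsed by the study.

```python
# Form-based decomposition: a verb is parsed as prefix + base whenever
# the remainder is a known base, regardless of semantic transparency.
BASES = {"sprechen", "reden", "biegen"}
PREFIXES = ("an", "ent", "be", "ver")

def decompose(verb):
    """Return (prefix, base) if verb = prefix + known base,
    otherwise (None, verb) for simplex or unparseable forms."""
    for prefix in PREFIXES:
        stem = verb[len(prefix):]
        if verb.startswith(prefix) and stem in BASES:
            return prefix, stem
    return None, verb

transparent = decompose("ansprechen")   # semantically related to sprechen
opaque = decompose("entsprechen")       # semantically unrelated, same base
simplex = decompose("sprechen")
```

On such an account the transparent and the opaque derivation map onto the same base entry, which is exactly why equal N400 attenuation for both conditions is taken as evidence for semantics-blind morphological parsing.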

... Electrophysiological studies on compound comprehension

Although a lot of attention has been paid to the neural underpinnings of inflection and derivation, only a few ERP studies report electrophysiological evidence concerning the comprehension of compound words. In one study, Koester et al. () carried out several experiments in which German compound words were presented auditorily while the EEG was recorded. In their first experiment, they manipulated the grammatical gender agreement between the determiner and the first and final constituents of compound words (the modifier and head, respectively) to create four conditions. For example, in (1) der Regentag ‘the rainy day’, the masculine determiner (der) agrees with both constituents (i.e. both are masculine). However, in (2) *der Reisfeld ‘the rice field’, it does not: for singular German compound words, the head establishes the correct determiner (here das), so der is incorrect. Koester et al. () also manipulated the agreement between the determiner and the first constituent alone. For example, in (3) das Presseamt ‘the press office’, the determiner das is correct as it corresponds to the head (both neuter); however, it does not correspond to the modifier’s gender (feminine). Lastly, in (4) *das Nussbaum ‘the nut tree’, the determiner das is incongruent with the gender of both the modifier and the head. Although only the head is morpho-syntactically relevant in German, both the head and the (irrelevant) modifier elicited a left-anterior negativity (LAN effect) in gender-incongruent determiner conditions (see also Koester, Gunter, and Wagner ). This finding, according to Koester and colleagues, clearly suggests that the internal morphological structure of German compound words is processed during auditory language comprehension.
Additionally, they proposed that dual-route models most readily explain their findings (corroborating Penke et al.  and Rodriguez-Fornells et al. ). El Yagoubi et al. () reported a lexical decision study that investigated the processing of compound words (with a particular focus on headedness). In English (as in many Germanic languages), the headedness of compound words is quite regular and can typically be determined by a rule. However, in other languages, including Italian (the language used by El Yagoubi et al. ), compounds have irregular headedness, allowing for novel experimental ways to distinguish between models which investigate compound processing (i.e. full-listing, full-parsing, and dual models). In Italian, the head can be located in the initial part or the final part of a compound, for example, acquavite ‘brandy’ is left-headed (i.e. acqua ‘water’ is the head) and filobus ‘trolleybus’ is right-headed (i.e. bus is the head).
In their experiments, El Yagoubi et al. () created four conditions: genuine compounds with the head in either the left-hand or the right-hand position (e.g. acquavite or filobus) and embedded (non-compound) words with an existing word embedded in the left-hand (salamandra ‘salamander’ with sala ‘hall’) or the right-hand position (accidente ‘accident’ with dente ‘tooth’). The non-words for the task were generated by swapping the two morphemes of a compound word or two sections of a non-compound word (e.g. filobus → *busfilo; salamandra → *mandrasala). Participants received a warning/fixation ( ms), after which a word or non-word appeared on the screen (for maximally  s), to which they had to make a lexical decision by pushing a button. Each trial was followed by a  s inter-trial interval before the next trial started. A continuous EEG signal was recorded from  electrodes on a head cap (following the 10/20 system). The results were as follows. First, behaviorally, genuine compounds were processed differently from embedded words, the former yielding longer reaction times and more errors. There was no behavioral effect of headedness. Secondly, concerning the EEG data, a larger N400 lexicality effect was obtained for embedded words (compared to compound words). The authors speculate that this may be due to the way they inverted the constituents of the compound and embedded words to form the non-words. In the case of compound words, the two constituents both still had a meaning (e.g. the non-word spadapesce was derived from pescespada ‘swordfish’, and both spada ‘sword’ and pesce ‘fish’ are lexical items), whereas in the embedded words only one constituent was a lexical item (e.g. the non-word forosema was derived from semaforo ‘traffic lights’, but sema is not a lexical item). Next, a modulation of the components typically involved in morpho-syntactic processing (i.e. the P600 and LAN) was found for compound words only (i.e.
not for embedded words), which indicates that a morpho-syntactic representation of the constituents was formed. Finally, although there was no behavioral difference, right-headed Italian compound words yielded a larger posterior P300 effect. The authors speculated that as right-headed compounds are marked (non-canonical), although grammatically correct, they might require increased attentional resources compared to the canonical (left-headed) order, which would be reflected in the P300 (as its amplitude is related to the amount of attention involved in processing the relevant stimuli; El Yagoubi et al. ). These results were interpreted as evidence against full-listing models and in favor of a dual-route processing model allowing access to both whole-word and constituent information when processing compound words.
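The dual-route idea the authors invoke, whole-word lookup operating alongside constituent parsing, can be sketched schematically. The toy lexicon below is invented for illustration; real models additionally weight the routes by frequency and let them race in time.

```python
# A schematic dual-route recognizer: a whole-word route looks the input
# up directly, while a parsing route tries to split it into two known
# constituents. Recognition can draw on either source of information.
LEXICON = {"acquavite", "acqua", "vite", "filobus", "filo", "bus"}

def parse_route(word):
    """Return a (left, right) split into two lexical constituents, if any."""
    for i in range(1, len(word)):
        left, right = word[:i], word[i:]
        if left in LEXICON and right in LEXICON:
            return left, right
    return None

def recognize(word):
    """Dual-route access: report what each route delivers."""
    return {"whole_word": word in LEXICON,
            "constituents": parse_route(word)}

result = recognize("acquavite")   # both routes succeed
novel = recognize("filotram")     # neither route succeeds
```

A full-listing model would correspond to using only the `whole_word` field, and a full-parsing model to using only `constituents`; the constituent effects reported above are what motivates keeping both.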

... Neuropsychological studies on compound comprehension

Besides studying the processing of compounds in the brains of healthy people using electroencephalography, others have studied this topic by investigating people diagnosed with aphasia. For instance, Semenza, Luzzatti, and Carabelli () sought to investigate whether compound words are parsed into their constituents in the course of lexical retrieval. Although earlier evidence from aphasic patients speaking Germanic languages already indicated that morphological information could remain accessible while phonological information was not (Hittmair-Delazer et al. ), Semenza, Luzzatti, and Carabelli () argued that it was not clear whether Italian compounding would show the same patterns as German compounding, because Italian may require more intricate processing steps. Specifically, Hittmair-Delazer et al. () found that, when naming words, aphasic patients often substituted compound word targets with compound semantic
paraphasias (e.g. Zuckerdose ‘sugar jar’ for Salzstreuer ‘salt shaker’) and compound neologisms (e.g. *Schneemühle ‘snow mill’ for Windmühle ‘windmill’), which suggests knowledge of the underlying compound structure. However, this was also true for opaque compounds (i.e. compounds whose phonology and morphology cannot be derived from their meaning, e.g. Schuhlöffel ‘shoehorn’, lit. ‘shoe-spoon’). In reply to these findings, Semenza, Luzzatti, and Carabelli (: ) stated that the morphological rules used in constructing German compounds are so simple that they could perhaps remain available to aphasics. Conversely, Italian compounds have a far less regular structure; for example, both endocentric and exocentric compounds exist, with varying headedness. As a consequence, Semenza, Luzzatti, and Carabelli () investigated whether Italian compounds show different error patterns across aphasic subtypes. For instance, verb–noun compounds (e.g. portamonete ‘purse’, lit. ‘carry coins’) would be especially worthwhile to investigate, as patients suffering from the Broca’s aphasia subtype are known to omit verbs; whether they would also omit verbs in verb–noun compounds was not known (Semenza, Luzzatti, and Carabelli : ). Moreover, it has been shown that patients suffering from Broca’s aphasia, compared to other subtypes (such as Wernicke’s and anomic aphasia), have more difficulty finding nouns to describe actions. To assess aphasics’ performance, Semenza, Luzzatti, and Carabelli presented the Italian version of the Aachener Aphasie Test (AAT; Luzzatti, Willmes, and De Bleser ) to eighty-three patients who were unambiguously diagnosed as having either the Broca, Wernicke, or anomic subtype.
By studying the responses to the words presented in this test (particularly error patterns involving compound constituent substitutions and neologisms), Semenza, Luzzatti, and Carabelli () concluded that people who had difficulty retrieving compound words often did preserve morphological knowledge about the target words. Furthermore, knowledge pertaining to the specific type of compound (i.e. noun–noun, verb–noun, etc.) was also preserved. According to Semenza, Luzzatti, and Carabelli (), this indicates the existence of a distinct stage of morphological processing in the brain, separate from phonology. Additionally, Broca’s aphasics (as opposed to the other groups) showed a much higher error rate for compounds containing a verbal constituent. As the compound itself was always a noun, this is a strong indication that compounds are indeed construed according to their constituents. More recently, Marelli et al. () studied compound word processing by investigating a special group of dyslexics, namely those showing neglect dyslexia (ND). Patients diagnosed with ND usually show a lack of awareness of (and attention to) one side of a presented word. The most common reading errors of ND patients are omissions or grapheme substitutions on the neglected side of the word. Some patients simply omit the neglected part of the word (e.g. yellow becomes low), whereas others preserve word length and substitute the neglected elements (e.g. yellow becomes pillow). As ND patients seem to be aware of higher-level properties of words, such as the difference between non-words and words (Caramazza and Hillis ), and show sensitivity to the frequencies of subword constituents (Arduino, Burani, and Vallar ), ND does not seem to be a purely peripheral, visually centered disorder.
To shed light on the question whether compound words are stored in their full form or decomposed into their constituents (or whether compound processing operates in a dual-route fashion), Marelli et al. () investigated patients with ND. They selected seven right-handed, right-hemisphere brain-lesioned patients suffering from left
visual neglect. This entails that the left constituent of compounds would be mostly neglected. After the extent of participants’ neglect was clinically assessed, they were presented with words on a computer screen, which they had to read out loud (regardless of whether or not they were real words). Two sets of stimuli were created: one set contained  endocentric1 compound words, split into  left-headed (e.g. camposanto ‘graveyard’) and  right-headed (e.g. fotocopia ‘photocopy’) compound targets. The second set consisted of non-words, created by substituting the leftmost constituent of the existing compounds with an orthographically similar word (e.g. camposanto ‘graveyard’ would become lamposanto, lit. ‘flash+holy’). Marelli and colleagues examined whether left- vs. right-headed compounds and existing vs. non-existing compounds gave rise to diverging patterns of results. They found a significant effect of headedness: participants were better able to read left-headed compounds than right-headed compounds (i.e. although they made many mistakes, words like camposanto were still read more accurately than words like fotocopia). This result indicates not only that constituents can be processed even when they are in the neglected position but, importantly, that compounds’ constituents are processed separately in the brain, and that there seems to be a difference between the processing of heads and modifiers. Additionally, they found a significant difference between real compounds and non-existing compounds, the latter eliciting more errors on the left-hand constituents than existing compounds did (i.e. the left constituent of words such as lamposanto showed more errors than the left-hand parts of words such as camposanto). Lastly, in a post-hoc analysis investigating the effect of frequency on performance, Marelli et al.
() found that for real compounds, the higher the frequency of the left constituent, the higher the chance that it was produced correctly (conversely, no effects were found for right constituents). Additionally, there were no effects of lexical variables on non-existing compound words. The authors concluded that if no parsing of any kind were present (i.e. only full-form processing), it would have been hard to find constituent effects, let alone frequency or headedness effects, for the left constituent. Additionally, left-constituent effects only emerged if the constituent was part of a real compound word, indicating a complex relationship between the compound as a whole and its constituents. As such, Marelli et al. () suggested that this pattern is in agreement with dual-route (e.g. Schreuder and Baayen ) or multi-route models (e.g. Kuperman et al. ), which hold that both the whole compound word and its constituents play a role during language processing.
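The post-hoc frequency analysis amounts to asking whether left-constituent accuracy covaries with left-constituent frequency for real compounds but not for non-words. A sketch of that analysis logic is given below; the item-level data are invented placeholders, not Marelli et al.'s data.

```python
import numpy as np

# Invented items: left-constituent lemma frequency and whether the
# patient read the left constituent correctly (1) or not (0).
real_freq       = np.array([10, 50, 120, 300, 800, 1500])
real_correct    = np.array([0,  0,  1,   1,   1,   1])
nonword_freq    = np.array([10, 50, 120, 300, 800, 1500])
nonword_correct = np.array([0,  1,  1,   0,   1,   0])

def freq_accuracy_corr(freq, correct):
    """Point-biserial correlation between log frequency and accuracy."""
    return float(np.corrcoef(np.log(freq), correct)[0, 1])

r_real = freq_accuracy_corr(real_freq, real_correct)
r_nonword = freq_accuracy_corr(nonword_freq, nonword_correct)
# The reported pattern corresponds to r_real being clearly positive
# while r_nonword hovers near zero.
```

Only the qualitative contrast matters here: a frequency effect that is confined to real compounds is hard to reconcile with pure full-form storage, which is exactly the argument Marelli et al. make.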

1 Endocentric here indicates that one of the two constituents is unambiguously the head.

.. Production of morphology

Morphological structure is likely to play a role in speech production as well, although models of language production long failed to provide a separate role for morphological processing. There is evidence from speech-planning experiments demonstrating that information about the planning of upcoming morphemes yields larger advantages than pure form information (e.g. phonemes). For instance, when Roelofs () compared the naming latencies of word sets including an overlapping morpheme (for instance,
bijnier, bijrol, bijvak, meaning ‘kidney’, ‘supporting act’, ‘subsidiary subject’) to a set of words with the same amount of phonological overlap (for instance, bijster, bijna, bijbel, meaning ‘loss’, ‘almost’, ‘bible’), he found a significantly larger facilitation effect for the morphologically overlapping set than for the merely phonologically overlapping set, each measured against a set of words without phonological overlap. This led Roelofs () to conclude that morphemes are planning units in the speech production process. Evidence from speech errors (a floor full of holes → a hole full of floors or I carved a pumpkin → I pumped a carven; taken from Fromkin ; or former US president George W. Bush’s infamous quote “they misunderestimated me”) supports this claim. Derivational and inflectional morphemes can easily strand, suggesting that derivational affixes and word stems may be stored separately. Importantly, the lexical representation of words may include information about their morphological structure (see Schiller and Verdonschot  for an overview). Work on derivation and inflection in the area of language production is mainly limited to studies on speech errors and aphasic patients. In the following, we will focus on work with on-line measures such as speech latencies, ERPs, and fMRI BOLD (blood-oxygen-level-dependent) responses regarding complex morpheme production. Relatively recently, Zwitserlood and her colleagues developed a new paradigm to investigate effects of morphemic structure in speech production (Zwitserlood, Bölte, and Dohmes , ; Dohmes, Zwitserlood, and Bölte ; Zwitserlood ). This paradigm was first tested in German, a language notorious for its morphological productivity and feared for its multi-morphemic compounds, such as Rindfleischetikettierungsüberwachungsaufgabenübertragungsgesetz (lit. ‘meat-labeling-control-task-transition-law’). In their so-called long-lag priming procedure, a to-be-produced target picture (e.g.
Ente ‘duck’) is preceded by a related or an unrelated (control) prime word, followed by a number of intervening trials (usually seven to ten). Zwitserlood and her collaborators tested several related priming conditions, that is, words that were morphologically related, either transparently (Wildente ‘wild duck’) or opaquely (Zeitungsente ‘false report’, lit. ‘newspaper duck’), or only phonologically but not morphologically related (Altersrente ‘pension’; ente in Altersrente is not a morpheme). Primes were presented visually on the screen, interspersed with filler words and pictures. On each trial, one stimulus was presented (either a word or a picture), and participants were asked to name each stimulus they saw on the screen as fast and as accurately as they could. The result was that target pictures (e.g. Ente) were named significantly faster when preceded by a morphologically related prime word (e.g. Zeitungsente–Ente) but not when preceded by a phonologically related word (e.g. Altersrente–Ente; see Dohmes, Zwitserlood, and Bölte ). This effect was independent of the position of the overlapping morpheme (initial vs. final; Zwitserlood, Bölte, and Dohmes ). Since the priming effect is neither phonological (no priming from Altersrente to Ente despite the presence of a phonological relationship) nor semantic (priming from Zeitungsente to Ente despite the absence of a semantic relationship) in nature, the authors suggested that the facilitation arises at a level of word-form representation at which the prime words and the pictures activate the same word-form (that is, morphemic) representation, distinct from the semantic-conceptual level and the phonological level. One may argue that these studies do not really investigate the production of morphologically complex forms, since all target forms are simplex nouns. However, in the course of the experiment all stimuli, whether target, prime, or filler, are produced by the participants.
Therefore, complex word forms are produced as well. Nevertheless, it

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi

   



may be desirable to replicate the experiment with morphologically complex targets in the future as well. Koester and Schiller () replicated and extended the effects found by Zwitserlood and colleagues in several recent studies carried out in Dutch. First, Koester and Schiller () replicated the morphological priming effect behaviorally with Dutch materials. In a first set of target pictures, targets such as ekster ‘magpie’ were preceded by semantically transparent (eksternest ‘magpie nest’) and opaque (eksteroog ‘corn’, lit. ‘magpie eye’) morphologically related prime words. Transparent and opaque primes facilitated the naming of target pictures when compared to unrelated primes. In a second set of target pictures (e.g. jas ‘coat’), primes were morphologically related (e.g. jaszak ‘coat pocket’) or phonologically, but not morphologically, related (e.g. jasmijn ‘jasmine’). Compared to unrelated control primes, the morphologically related prime facilitated target picture naming. However, there was no long-lag phonological priming effect from jasmijn to jas. Transparent (eksternest) and opaque primes (eksteroog) yielded similar effects, and the position of the overlapping morpheme (modifier vs. head constituent) did not play a role, demonstrating that the facilitation effect is abstract to some extent. Furthermore, in the Koester and Schiller () study, not only behavioral but—in a separate session—also electrophysiological data from twenty-nine electrode sites were collected. Relative to a baseline ( ms pre-stimulus), the mean ERP amplitudes were calculated, separately for each participant and each condition. The resulting mean amplitudes were evaluated in the time window between  and  ms post-stimulus onset. These mean ERP amplitudes were significantly less negative (i.e. reduced) when picture naming was primed by transparent and opaque compounds. 
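The amplitude measure described above (baseline subtraction followed by averaging within a post-stimulus window) can be sketched in a few lines; the baseline and window values below are illustrative placeholders, not the study's actual parameters.

```python
import numpy as np

def mean_amplitude(epoch, times, baseline=(-0.2, 0.0), window=(0.3, 0.5)):
    """Baseline-correct a single-channel ERP epoch and return the mean
    amplitude in an analysis window (window values are placeholders)."""
    times = np.asarray(times)
    epoch = np.asarray(epoch, dtype=float)
    base = epoch[(times >= baseline[0]) & (times < baseline[1])].mean()
    corrected = epoch - base  # subtract mean pre-stimulus voltage
    return corrected[(times >= window[0]) & (times < window[1])].mean()

# toy epoch: constant 1 uV offset plus a -3 uV deflection inside the window
times = np.linspace(-0.2, 0.8, 1001)
epoch = np.ones_like(times)
epoch[(times >= 0.3) & (times < 0.5)] -= 3.0
print(round(float(mean_amplitude(epoch, times)), 2))  # → -3.0
```

Such mean amplitudes, computed per participant and condition, are what the condition comparisons below are based on.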
However, the ERP amplitude did not differ when comparing the transparent and opaque conditions (see also Figure .). Moreover, significantly less negative, i.e. reduced, ERP amplitudes were found when the transparent and the unrelated condition were compared for the second

[Figure: grand average ERP waveforms at electrode sites F3, AFz, F4, C3, C4, CP3, CPz, CP4; scale −5 to 10 μV, −200 to 800 ms.]

Figure .. Grand average ERPs (negativity plotted upwards) in Set . The semantically transparent (dashed line), the semantically opaque (dotted line), and the unrelated conditions (solid line) are plotted superimposed on each other. ERPs are time-locked to the onset of the presentation of the picture. Source: Koester and Schiller ().

[Figure: grand average ERP waveforms at electrode sites F3, AFz, F4, C3, C4, CP3, CPz, CP4; scale −5 to 10 μV, −200 to 800 ms.]

Figure .. Grand average ERPs, superimposed for the morphologically related (dashed line: semantically transparent), the form overlap (dotted line), and the unrelated condition (solid line) in Set . The ERPs are time-locked to the onset of picture presentation, and negativity is plotted upwards. Source: Koester and Schiller ().

set of pictures. In contrast, as shown in Figure ., the form-related condition did not differ from the unrelated condition in that set. However, compared to the form-related condition, the transparent condition elicited less negative ERP amplitudes. Therefore, the pattern of behavioral responses was replicated by the ERP results. ERP amplitudes were consistently reduced between  and  ms after picture onset, most visibly at posterior scalp regions, when a morphologically related compound word (transparent or opaque) primed the naming of pictures, but not when picture naming was preceded by words that were merely form-related. Koester and Schiller () proposed that this reduced negativity could reflect an N effect because McKinnon, Allen, and Osterhout () demonstrated the sensitivity of the N effect to morphological processing in language processing. The time course of these ERP effects agrees with estimates for morphological encoding during word production (Indefrey and Levelt ; Indefrey ). In contrast, semantic and/or conceptual processing begins around  ms after the presentation of a to-be-named picture. Once a lemma has been selected (around  ms after picture onset), the first process in word form encoding is morphological encoding, beginning about  ms after picture presentation (Indefrey and Levelt ). In the present study, the onset of the N effect is similar to the estimated onset of morphological encoding (i.e.  ms). Indefrey and Levelt () assume a response latency of  ms; however, the mean response latencies in Koester and Schiller’s study are around  ms. Accordingly, the onset of morphological encoding may be somewhat later (approximately  ms after picture onset), which is very close to the observed onsets of the N effects found in Koester and Schiller (). 
Therefore, the hypothesis that morphological priming during picture naming originates at a relatively late stage, namely during morphological encoding, is supported by the N




effects. It seems that morphological priming effects can be located at the word form level (Indefrey and Levelt ). A closer look at the scalp distribution of the N effects demonstrates that the two sets of stimuli in the transparent priming conditions differ in the Koester and Schiller () study. Presumably, different subsets of materials may have resulted in different morphological priming. A more recent study by Koester and Schiller () employed the same experimental design with another methodology, that is, functional magnetic resonance imaging (fMRI). The aim of that study was to determine more directly the neural substrate of the morphological priming effect in overt language production. It has been suggested that N effects may be sensitive to morphological processing in comprehension tasks, such as visual word recognition (McKinnon, Allen, and Osterhout ). However, N effects have not previously been reported for morphological processing in overt picture naming studies. The amplitude of the N in visual word processing is reduced for related prime-target pairs compared to unrelated pairs. Jescheniak et al. (), for instance, used ERPs to investigate priming effects of implicit picture naming (covert preparation) on subsequent auditory word comprehension. Picture names that were semantically and phonologically related to the auditorily presented words resulted in less negative ERP amplitudes relative to unrelated picture–word pairs. These results demonstrate that the activation of semantic and phonological representations during the preparation of a picture name can be assessed by the influence of the activated information on subsequent word comprehension. Similarly, the current experiment demonstrates that processes in overt language production can be investigated with ERPs directly and reliably. Koester and Schiller’s () results are robust and have been replicated in three studies so far. First, Verdonschot et al. 
() investigated whether switching to another language before naming the target would interfere with the morphological priming effect. Bilingual Dutch–English participants named pictures preceded by a prime compound word in Dutch. Intervening filler items (words and pictures) were named either in Dutch (non-switch condition) or English (switch condition). If participants reactively inhibit the non-target language, one would predict longer naming latencies for the target pictures in the switch compared to the non-switch condition and a decreased morphological priming effect. However, morphological priming effects in the switch condition were of a similar magnitude as in the non-switch condition. Furthermore, both opaque and transparent compounds facilitated the naming of morphologically related target pictures, replicating previous findings in Dutch and German. Second, Lensink, Verdonschot, and Schiller () extended the Verdonschot et al. () study to L production, that is, Dutch–English bilinguals naming target pictures in their L, namely English, either in non-switch blocks (no intervening Dutch trials) or in switch blocks (including Dutch filler trials). Reaction times mirrored the effects of Verdonschot et al. () very closely. Again, there were strong morphological priming effects in both switch and non-switch conditions and no significant difference in magnitude between transparent and opaque prime-target pairs. Therefore, Lensink, Verdonschot, and Schiller () replicated previous studies in yet another language: English. Furthermore, they obtained reduced N effects in morphologically related conditions compared to an unrelated condition, but only in the non-switch blocks. Presumably, participants applied a post-lexical checking strategy in the switch blocks, perhaps because the


morphological relation between English prime and target was emphasized through the Dutch trials; this may have resulted in decreased N effects due to, for instance, better predictability of targets. Third, Kaczer et al. () extended earlier studies to novel Dutch compounds. Participants learned novel compound words, formed through the combination of two existing morphemes (e.g. appel + gezicht lit. ‘apple face’), in a first session. Novel and familiar (e.g. appelmoes ‘apple sauce’) compounds were used as primes in a long-lag priming paradigm for morphologically related target pictures (e.g. appel). A second session was recorded forty-eight hours after the first to investigate the effects of memory consolidation for the novel compounds. On a behavioral level, novel compounds initially showed a stronger priming effect than familiar compounds. This advantage was also present in simultaneously acquired EEG data, that is, a decreased N effect in morphologically related conditions compared to unrelated conditions, but the difference vanished two days after learning. This result may suggest that the novel compounds are initially processed as separate constituents. The change of the pattern after two days could reflect the consequence of a memory consolidation process that may help to assemble two initially separate words into a single unit. Therefore, the distinction between decomposition of the compound word and full parsing could depend on the integration of the novel compounds into the mental lexicon. Alternatively, the novel compounds may cause an increase in the attentional resources needed for reading aloud, which could have contributed to a more effective decomposition of their constituents. Methodologically speaking, the exclusion of trials and participants due to (eye) movement artifacts is a major issue when employing ERPs to overt language production tasks. 
Relatively strong ERP components such as the error-related negativity (ERN) may suffer less when the number of trials is reduced (Falkenstein et al. ; Ganushchak and Schiller , ). The present overt picture naming study demonstrates that even less strong ERP components can be detected reliably (see also Christoffels, Firk, and Schiller ; Ganushchak, Christoffels, and Schiller ; Timmer and Schiller ). In a follow-up study, Koester and Schiller () investigated the neuro-anatomical correlates of morphological processing. Indefrey () (see also Indefrey and Levelt ) investigated the brain areas that are associated with different processing stages in language production. On the basis of this meta-analysis, Indefrey and Levelt () localized phonological code retrieval in the left posterior superior and middle temporal gyri. One may predict morphological priming to affect neural activity in the left posterior superior and middle temporal gyri (MTG) if morphological information affects phonological code retrieval. Previous studies investigating language production examined several inflectional mechanisms such as plural formation of nouns or first and third person verb generation (e.g. Jaeger et al. ; Beretta et al. ; Joanisse and Seidenberg ). Results of these studies are often unspecific as to whether they reflect processes of comprehension or production because linguistic stimuli were presented to elicit a verbal response. That is why comprehension and production processes are difficult or impossible to disentangle. Other neuroimaging studies on language production, that is, studies that avoided influences from comprehension processes, did not investigate morphological processing (e.g. De Zubicaray and McMahon ; Kan and Thompson-Schill ). In their own study, Koester and Schiller () investigated the neurocognitive correlates of morphological processing in the human brain by employing a long-lag priming




paradigm. The paradigm was very similar to the one used in the ERP study reported in Koester and Schiller (). Participants were requested to read prime words, that is, compounds, aloud; seven to ten trials later, they overtly named picture targets. During a given trial, only one stimulus—a word or a picture—is presented on the screen. Therefore, target picture naming does not coincide with reading aloud the primes. The long-lag priming paradigm has been shown to be sensitive to morphological, but not semantic or phonological, relations between primes and targets (Feldman ; Zwitserlood, Bölte, and Dohmes ). Behavioral analyses revealed that morphologically related compound words facilitated picture naming. Just as in previous research, semantically transparent and opaque conditions did not differ, and the form-related condition did not produce a facilitation effect. Overall, this data pattern is very similar to previous morphological priming effects in the production of compound words (Dohmes, Zwitserlood, and Bölte ; Koester and Schiller ; Zwitserlood, Bölte, and Dohmes ). On a neurocognitive level, Koester and Schiller () found in a conjunction analysis, that is, taking into account activations specific to both transparent and opaque primes, that morphological priming effects are related to specific neural activity in the left inferior frontal gyrus (LIFG), specifically Brodmann area  (Figure .). Morphological priming in picture naming led to increased neural activity in that area. This result underlines the functional importance of LIFG for morphological processing in language production and it contributes to the understanding of an elementary mechanism of word formation, that is, compound processing. Thus, these results support the prediction for LIFG but not for the left posterior MTG. In summary, Koester and Schiller () used fMRI to investigate the processing of morphological information in speaking. 
Morphological priming in picture naming led to increased neural activity in LIFG (BA ). It may be speculated that increased neural activity in this area may be responsible for the decreased N effect in the ERP studies reported in this section, possibly indicating less processing or integration effort. This result underlines the functional importance of LIFG to word form encoding and for morphological processing in language production and calls for further investigations of the neural

 .. Surface rendering of regions activated by transparent and opaque priming conditions in Set 

Conjunction analysis; p.

== $abc == g .

(19)

VALERE: == 2C_VERB == Morpholex-paradigm-1 == v a l +.

To prevent velar insertion from applying across-the-board, its application is constrained by invoking it from within each lexical entry subject to velar insertion. This is enforced through ‘Morpholex-paradigm-’ (), which provides information about where in the paradigm velar insertion applies (‘B’). As a result, the new entry for VALERE () contains one stem only.

.. Discussion

Roark and Sproat () correctly point out that DATR does not add to the formal power of finite state approaches. Careful consideration of how DATR describes a fragment of the Italian conjugation helps appreciate the difference between low-level computer operations and a high-level description of linguistic facts. It is no accident that DATR provides such a compact way to describe both regular and irregular Italian verbs. Modelling morphotactic constraints in terms of hierarchically arranged patterns of stem distribution effectively constrains the expressive power of our computational model, and captures an important

2. For more complex cases, see Pirrelli and Battista ().


descriptive generalization: less regular paradigms, that is, paradigms containing more stem alternants, can be derived from regular paradigms by monotonically overriding default stem patterns. In the same model, stem alternants are treated as intermediate ‘morphomic’ representations (Aronoff ). In fact, each allomorph is assigned an index ‘Bi’, which is in turn selected by a set of paradigm cells in (). Its index is not directly associated with velar insertion as such. Velar insertion requires an independent statement () where no index appears, in compliance with Stump’s Indexing Autonomy Hypothesis (Stump ). DATR lexical entries appear to accommodate the desiderata of a number of theoretical frameworks: from early Lexicalism (Halle ; Jackendoff ; Aronoff ; Scalise ; Lieber ; Montermini, Chapter  this volume) to more recent construction-based and usage-based models (Bybee ; Tomasello ; Booij a; Masini and Audring, Chapter  this volume), going through Corbett’s Network Morphology (Corbett and Fraser ; Brown, Chapter  this volume) and Hay and colleagues’ work (Hay ; Hay and Baayen ; Hay and Plag ). Some may still consider it useful and conceptually desirable to draw a line between lexical units that are actually listed and units that are computed on-line through general morphological statements. It quickly becomes very difficult, however, to maintain such a principled distinction when it comes to the description of inflectional systems of average complexity. So far, we have considered word processing issues as computational solutions to the problem of providing optimal descriptive generalizations. Alternatively, we could look for the most effective strategies for learning generalizations from input data. The area of machine learning in computational morphology provides some tools to address this issue.
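The override logic just described can be illustrated with a miniature default-inheritance hierarchy; this is a loose Python analogue of the DATR mechanism, not DATR itself, and the VALERE cell labels and stem forms shown are merely illustrative.

```python
class Node:
    """One node in a default-inheritance hierarchy: local facts override
    whatever the parent would supply (a loose analogue of DATR defaults)."""
    def __init__(self, facts, parent=None):
        self.facts, self.parent = facts, parent

    def get(self, key):
        node = self
        while node is not None:
            if key in node.facts:
                return node.facts[key]
            node = node.parent
        raise KeyError(key)

# Regular conjugation pattern: a single default stem for every cell.
regular = Node({"stem:default": "val"})

# Hypothetical entry for VALERE: overrides the default stem only in the
# velar-insertion cells (cell labels are illustrative, not DATR paths).
valere = Node({"stem:1sg.pres": "valg", "stem:3pl.pres": "valg"},
              parent=regular)

def stem(entry, cell):
    try:
        return entry.get("stem:" + cell)
    except KeyError:
        return entry.get("stem:default")

print(stem(valere, "1sg.pres"))  # → valg  (overridden cell)
print(stem(valere, "2sg.pres"))  # → val   (inherited default)
```

The irregular entry states only its deviations; everything else follows monotonically from the regular pattern.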

. Machine learning of morphology

Morphological induction can be defined as the task of singling out morphological formatives from fully inflected word forms. To linguists, the task is reminiscent of Zellig Harris’ empiricist goal of developing linguistic analyses on the basis of purely formal, algorithmic manipulations of raw input data: so-called ‘discovery procedures’ (Harris ). Absence of classificatory information (e.g. morpho-syntactic or lexical information) in the data qualifies the discovery algorithm as unsupervised. A different conceptualization of morphological induction sees the task as a classification problem. The machine classifier is trained on a set of forms whose classification or mutual relation is already known, and is tested upon the ability to assign the correct class (or the appropriate relationship) to word forms that were not part of the training set. In this case, the learning regime is said to be supervised.

.. Supervised learning

Algorithms for supervised learning require a pool of pre-classified data (e.g. word forms with their morphological segmentation), from which correlations between data and their classes can be inferred. The most popular families of supervised morphological classifiers are based on principles of memory-based learning and stochastic modelling techniques.




... Memory-based learning

According to memory-based learning (Daelemans and van den Bosch ), speakers do not abstract away from experience, but classify novel input by analogy to stored exemplars. In modelling inflectional morphology, for example, a memory-based learner assumes that morphological processing is a function of either lexical retrieval or similarity-based reasoning on lexical representations. Computation of similarity is defined on the basis of phonological, orthographic, or semantic representational features. Keuleers and Daelemans () describe the application of memory-based learning to the task of predicting the plural suffix for a Dutch noun, given the noun’s singular form, and a set of Dutch words whose plural suffix is already known. Table . shows a fragment of the knowledge base for Dutch nouns, pre-classified for plural formation. Each row in the table represents one word form. Features are given in columns. For a novel input noun to be pluralized, its input features are matched onto known exemplars. To ensure that only features associated with aligned symbols (letters or phonological segments) are matched, exemplar representations must be aligned preliminarily. In Table ., alignment is enforced by associating exemplars with a syllabic template consisting of three slots: onset, nucleus, and coda (‘O’, ‘N’, and ‘C’ in the Table . headers). The similarity between two exemplars X and Y is measured according to the following weighted overlapping distance:

()  Δ(X, Y) = Σi=1..n wi · δ(xi, yi)

where xi and yi indicate the values on the ith matching feature Fi in X and Y respectively, δ(xi,yi) measures the distance between them, and wi is the weight associated with Fi, estimating how important the feature Fi is in assigning a class a ∈ A to an exemplar. The most straightforward decision rule is to base the class of a new exemplar on the class of the exemplar(s) at the nearest distance Δ. In Dutch pluralization, good accuracy scores are observed (best simulation accuracy . per cent).

Table .. Fragment of a knowledge base for Dutch noun plural inflection, showing the words /ˈhɔnt/ ‘dog’, /ˈzɑntstɔrm/ ‘sandstorm’, and /kapiːˈt«in/ ‘captain’ O N C O N

C

P-S

F-S

F-G

pl

=

=

=

h

ɔ

nt



+

d

-en

z

α

nt st

ɔ

rm

+



m

-en

p



t

εi

n





n

-s

Notes: For each word, only the two final syllables are represented, with ‘=’ being used to pad up words with fewer than two syllables. The last four columns in each example indicate (from left to right): stress on the pre-final syllable (‘P-S’), stress on the final syllable (‘F-S’), the final grapheme of the word (‘F-G’), and the plural class (‘pl’) associated with the word. Source: adapted from Keuleers and Daelemans ().
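A toy memory-based learner over exemplars like those of Table . might look as follows; the features, weights, and exemplars are simplified stand-ins for Keuleers and Daelemans' actual representation, with δ taken as a simple 0/1 mismatch cost.

```python
def overlap_distance(x, y, weights):
    """Weighted overlapping distance: sum of per-feature mismatch costs
    (delta = 0 for a matching feature value, 1 for a mismatch)."""
    return sum(w * (a != b) for w, a, b in zip(weights, x, y))

def nearest_class(query, exemplars, weights):
    """Classify a query by the class of its nearest stored exemplar."""
    return min(exemplars,
               key=lambda ex: overlap_distance(query, ex[0], weights))[1]

# toy knowledge base: (onset, nucleus, coda, final grapheme) -> plural suffix
exemplars = [
    (("h", "ɔ", "nt", "d"), "-en"),   # hond  -> honden
    (("st", "ɔ", "rm", "m"), "-en"),  # storm -> stormen
    (("t", "ɛi", "n", "n"), "-s"),    # kapitein -> kapiteins
]
weights = [1.0, 1.0, 1.0, 2.0]        # e.g. final grapheme weighted higher

print(nearest_class(("t", "ɔ", "n", "n"), exemplars, weights))  # → -s
```

The decision rule is the simplest one mentioned in the text: the class of the single nearest exemplar wins.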


The time-honoured notion of proportional analogy (Paul ; de Saussure ; Kuryłowicz ; Greenberg ; Anttila ) models word similarity as a relation among four multi-dimensional lexical exemplars: ()

steal:stealer=cheat:cheater - : - = - : -

with steal-stealer and cheat-cheater being members of the same word family, and steal-cheat, stealer-cheater representing pairs of categorially related members of different word families. In (), the analogical proportion holds for four complex lexical representations, each of which consists of a letter string (e.g. steal) paired with a categorial label (‘’). For () to hold, both proportions must obtain. More formally, Yvon and Pirrelli () define an analogical proportion among strings of symbols in terms of identity over (pairs of) sub-strings, as follows:

()  (a1 = u · v) ∧ (a2 = u · w) ∧ (a3 = t · v) ∧ (a4 = t · w)

where ‘x · y’ means ‘x’ concatenated with ‘y’. Accordingly, the following equations hold: u = steal, w = er, t = cheat, v = ε, with ‘ε’ representing the empty string. Analogical proportions can be used for guessing the morphological structure of an unknown a. In this context, morphological generalization is interpreted as the task of finding two substrings t and w such that there exists at least one analogical proportion where a = t · w. The strength of a proportional analogy is based on the number of independent proportions supporting a = t · w. This decision function is fairly conservative and requires that the relation between a and a be attested by at least two other words in the lexicon. Nonetheless, Yvon and Pirrelli () report . per cent accuracy of proportional analogy on the task of predicting the root and part-of-speech of an English derivative based on the CELEX English dictionary (Baayen, Piepenbrock, and Gulikers ). For nearly any derivative d, no matter how far removed from its root, there always exists in the lexicon another entry with the same derivational history, such that d’s root can be analogized proportionally.
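The definition in () can be turned into a brute-force search for the fourth term of a string proportion; the following is a sketch of the idea, not Yvon and Pirrelli's implementation.

```python
def fourth_terms(a1, a2, a3):
    """Return candidate strings a4 such that (a1 = u+v), (a2 = u+w),
    (a3 = t+v), and (a4 = t+w), by enumerating the splits of a1
    (a brute-force sketch of the string-proportion definition)."""
    candidates = set()
    for i in range(len(a1) + 1):
        u, v = a1[:i], a1[i:]
        if a2.startswith(u) and a3.endswith(v):
            w = a2[len(u):]                 # what a2 adds to u
            t = a3[:len(a3) - len(v)]       # what a3 puts before v
            candidates.add(t + w)
    return candidates

print(fourth_terms("steal", "stealer", "cheat"))  # → {'cheater'}
```

With u = steal, v = ε, w = er, and t = cheat, the only solution is the expected cheater; a full system would additionally require the categorial proportion to hold.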

... Stochastic modelling

Stochastic classification algorithms model the problem of finding a morph m (say ‑ed past tense) in a word w (say walked) as the task of estimating the probability of having the category m assigned to w, given a representation of w in terms of a few linguistic features fi. Features are chosen by the analyst and range from the position of the candidate morph in the embedding word, to the length of the constituent, existence in a dictionary of morphemes, grammatical category of adjacent morphemes, etc. (Uchimoto, Satoshi, and Hitoshi ; De Pauw and Wagacha ). A Maximum Entropy classifier (Berger, Della Pietra, and Della Pietra ; Ratnaparkhi ) solves the problem by calculating, for each category m, the conditional probability p(m|w) of having m associated with w, given (i) an appropriate feature-based recoding of w and (ii) the overall entropy of the resulting probability distribution being maximized. The latter constraint suggests that the probabilistic model should not be more biased than required by input evidence, that is, by the knowledge base of attested evidence. By maximizing the entropy of a probabilistic




distribution, we minimize its distance from the equiprobable distribution and avoid being overly biased by a skewed data sample. Stochastic classifiers have been successfully used in Noun–Noun compound interpretation. The problem is to find the appropriate semantic relation linking the compound head (e.g. ring in silver ring) with the non-head constituent (silver). Seminal work in the s and early s (Finin ; McDonald ; Gay and Croft ; Vanderwende ) addressed the problem through knowledge-intensive algorithms, on the assumption that compound interpretation requires knowledge of the conceptual structure of the head (Wisniewski ). More recently, semantic relations have been conceptualized as context-driven functions that operate during conceptual combination. The idea permeates much psycholinguistic and cognitive work on compound interpretation (Shoben ; Gagné and Shoben ; Gagné ), suggesting that speakers use knowledge of how nouns have previously been combined to interpret novel combinations. Ó Séaghdha (, ) successfully tests the hypothesis that if a compound like silver necklace is classified as a BE compound (see Ó Séaghdha  for details of the classification), a compound like platinum ring can be classified as a BE compound on the basis of the semantic proximity between silver and platinum, and necklace and ring respectively. Distributional Semantics models (Sahlgren ; Padó and Lapata ; Yarlett ; Baroni and Lenci ) provide accurate estimates of inter-word similarity through corpus data. This involves recording the patterns of co-occurrence of—say—ring and necklace with other words in a corpus, and measuring how similar the resulting co-occurrence profiles (or distributional context vectors) are. The intuition is that semantically similar words like necklace and ring will share similar contexts. 
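The comparison of co-occurrence profiles reduces to cosine similarity between context vectors; the toy counts below are invented purely for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two co-occurrence count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# invented co-occurrence counts with context words (wear, gold, finger, data)
ring     = [10, 8, 6, 0]
necklace = [9, 7, 2, 0]
dataset  = [0, 1, 0, 12]

print(round(cosine(ring, necklace), 2))  # high: similar contexts
print(round(cosine(ring, dataset), 2))   # low: dissimilar contexts
```

Words that occur in similar contexts end up with nearly parallel vectors, which is exactly the proximity that drives the analogical classification of platinum ring from silver necklace.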
More sophisticated variants of Ó Séaghdha’s approach have been proposed in the literature (Verhoeven et al. ). In particular, Dima and Hinrichs () use so-called word embeddings (Bengio et al. ; Collobert et al. ; Mikolov et al. ) for computing word similarity. Word embeddings are a neural-network variant of distributional context vectors. Like the latter, they provide a distributed real-valued representation of the average use of a word in real corpora, so that words that are either syntactically or semantically similar are associated with vectors that lie close together in the vector space. Unlike distributional vectors, which are based on co-occurrence statistics, word embeddings are obtained by training a deep neural network (Zhang and LeCun ) on the task of classifying naturally occurring word sequences.

... Rule induction

Albright () addresses the problem of morphological induction by applying the Minimal Generalization algorithm (Pinker and Prince ; Albright and Hayes ) to the acquisition of Italian conjugation patterns. The algorithm consists in aligning lexical entailments between pairs of inflected verb forms that stand in a specific morphological relation. For example, two forms such as bado ‘I look after’ and badare ‘to look after’ stand in a -_- →  relation. Given any two entailments sharing the same relation, the goal is to extract from them a maximally specific context-sensitive rule mapping one class of forms onto the other class (Table .). Albright shows that inferred rules apply accurately, and their reliability score (based on the number of forms for which the mapping rule makes the right prediction) correlates with human subjects’ acceptability judgements on nonce-forms.


Table .. An example of the alignment of lexical entailments for the Italian word pairs bado-badare (‘I look after’, ‘to look after’) and dirado-diradare (‘I thin’, ‘to thin’) context residue lex entailment  lex entailment  mapping rule

di X

change

shared features

shared segments

b r

ad ad ad

rule’s structure environment

o ! are o ! are o ! are rule’s structure change
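The alignment illustrated in Table . can be sketched as a small routine that factors each base–derived pair into a change plus shared material and generalizes the residues to a variable X; this toy version handles only suffixing changes and omits the shared phonological features of Albright and Hayes' full algorithm.

```python
def extract_rule(pairs):
    """Minimal-generalization sketch: keep the common change and the
    longest shared right-adjacent context; residues become a variable X
    (a toy version, suffixing changes only)."""
    def split(base, derived):
        # longest common prefix = residue + shared segments; rest = change
        i = 0
        while i < min(len(base), len(derived)) and base[i] == derived[i]:
            i += 1
        return base[:i], base[i:], derived[i:]

    contexts, changes = [], set()
    for base, derived in pairs:
        ctx, lhs, rhs = split(base, derived)
        contexts.append(ctx)
        changes.add((lhs, rhs))
    assert len(changes) == 1, "pairs must share a single change"

    # longest shared suffix of all contexts
    shared = contexts[0]
    for c in contexts[1:]:
        k = 0
        while k < min(len(shared), len(c)) and shared[-1 - k] == c[-1 - k]:
            k += 1
        shared = shared[len(shared) - k:]
    (lhs, rhs), = changes
    return f"X{shared}: {lhs} -> {rhs}"

print(extract_rule([("bado", "badare"), ("dirado", "diradare")]))
# → Xad: o -> are
```

From bado–badare and dirado–diradare the routine recovers the Table . mapping rule: after a context ending in ad, replace o with are.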

... Connectionist approaches Classical connectionist networks (or perceptrons) are layered arrangements of fully connected processing nodes. Nodes are neurally inspired parallel processing units that integrate input signals and fire output signals as a function of their level of activation. Each connection transmits signals between connected nodes in proportion to an adaptive conductivity parameter, or weight of the connection, which is dynamically modified in the course of learning. Early connectionist networks were the champions of the associative view on morphological competence in the s. In fact, they modelled inflection, somewhat traditionally, as the derivational task of mapping a base form (e.g. go), vector-coded on the input layer, onto an inflected form (went), vector-coded on the output layer (Rumelhart and McClelland ). Input–output mapping is learned by adjusting the weights of the connections that go from the input to the output layer (either directly or through a hidden layer). Weight adjustment is driven by back-propagation of the error from the output layer. This consists in altering the weights of connections emanating from an activated input node, for the level of activation of output nodes to be attuned to the expected output. Connections between the jth input node and the ith output node are changed in proportion ^i of the ith output node and the to the difference between the target activation value h actually observed output value hi, as illustrated in (): ^ i  hi Þxj () Δwi;j ¼ γðh where wi;j is the weight of the connection between the jth input node and the ith output node, γ measures the network plasticity, and xj is the activation of the jth input node. If xj is null, the resulting change Δwi;j is null. In other words, back-propagation never alters the weights of connections emanating from an input node, if the node is never activated in training. 
An important consequence of () is that a universal identity relationship, such as the one required by a simple rule like past(X) → X+ed, can be learned by a (multi-layered) perceptron only if it is shown for all the values taken by X. This limitation is formally reminiscent of the incapacity of FSTs to deal with unbounded copying (}.).

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi




... Discussion

The most innovative feature of traditional connectionist approaches is that they reject the idea that morphological representations and processing are dichotomized. Network nodes are, in fact, at the same time processing units and dynamic (trainable) memories. However, self-correction can only take place when an error is signalled in the output and the expected output is shown. Although psycholinguists have occasionally speculated about internal feedback (self-correction) as a way for a child to backtrack from mistakes, it remains unclear what the source of the error signal could be. Children are seldom corrected upon making errors, and even when they are, they take little notice of it.

Not all supervised learning algorithms implement the idea of supervision as correction. Memory-based learning (}...) assumes that classified items are learned instantaneously and that memory access is never faulty. It is nonetheless reasonable to hypothesize that children can learn to select between singular and plural forms thanks to their ability to discriminate between items in terms of their numerosity (Ramscar and Yarlett ). Proportional analogies (}...) require that the search for formal redundancies be contingent upon shared categorial representations (see example () above). Nonetheless, they do not define a direct inter-level mapping: for any pair of proportionally related lexical items, a relationship must independently hold within each proportion. This principle enforces an indirect way of mapping form onto meaning, where phonological contrast and functional opposition are governed by autonomous principles. Proportional analogy thus illustrates a radically abstractive approach to morphology as an adaptive, discriminative system (Ramscar and Yarlett ; Ramscar and Dye ; Baayen et al. ; Blevins ).
Stochastic models (}...) are compatible with a view of morphological processing as the result of dynamic, on-line resolution of possibly conflicting grammatical constraints. An implication of this view is that constraints may be fuzzy (non-deterministic), and the resulting generalizations are smoothed by maximizing uncertainty (i.e. entropy).

There is one problem, however, that all the supervised approaches reviewed so far fail to address: adaptivity to morphological structure. Machine learning algorithms make specific, a priori assumptions about word representations. For most European languages, a fixed-length vector representation can be constructed by right-aligning input words, since inflection in those languages typically involves suffixation and sensitivity to morpheme boundaries (see Tables . and .). In connectionist models, input representations do not code morphological structure explicitly, but input forms are aligned so that recurrent morphological formatives activate overlapping nodes on the input layer. Alignment is enforced by using either fixed-length positional templates (similar to the onset-nucleus-coda template of Table ., see Plunkett and Juola ) or so-called conjunctive coding, tying symbol representations to specific positions in the input word or to specific letter clusters (Coltheart et al. ; Harm and Seidenberg ; McClelland and Rumelhart ; Plaut et al. ). All these coding strategies presuppose considerable knowledge of the morphology of the target language. Algorithms for morphology learning should be valued more for their capacity to adapt themselves to the morphological structure of the target language than for the bias of their input representations.

To alleviate this bias, semi-supervised learning algorithms have recently been proposed (e.g. Ahlberg, Forsberg, and Hulden ) that extract general morphological patterns (called ‘abstract paradigms’) with considerably fewer language-specific assumptions. For example, a linguistically sensible English abstract paradigm like x, x + s, x + ed, x + ing is obtained from the set of paradigmatically related forms walk, walks, walked, walking, by singling out the longest common subsequence shared by all of them and replacing it with a variable (x = walk). More complex abstract paradigms may require more string variables, as with x1+a+x2+a+x3+tu, x1+a+x2+a+x3+ta, x1+a+x2+a+x3+a representing the Arabic forms katabtu ‘I wrote’, katabta ‘you wrote’ (masculine singular), and kataba ‘he wrote’ respectively (with x1 = k, x2 = t, and x3 = b). Note that, with the exception of a few heuristic tie-breaking techniques, the string-matching algorithm is fairly general and language-independent, and it only requires that forms be presented in paradigms.
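For the single-variable case, the pattern-extraction step can be sketched as follows. This toy (all names are mine) abstracts the longest substring shared by every paradigm member to a variable x; the multi-variable Arabic case would require a more elaborate alignment procedure.

```python
def longest_common_substring(forms):
    """Longest string of symbols shared (contiguously) by every form."""
    first = forms[0]
    best = ""
    for i in range(len(first)):
        for j in range(len(first), i, -1):
            candidate = first[i:j]
            if len(candidate) > len(best) and all(candidate in f for f in forms):
                best = candidate
    return best

def abstract_paradigm(forms):
    """Replace the shared substring with the variable 'x' in every form."""
    stem = longest_common_substring(forms)
    return [f.replace(stem, "x", 1) for f in forms], stem

patterns, stem = abstract_paradigm(["walk", "walks", "walked", "walking"])
# patterns == ['x', 'xs', 'xed', 'xing'], stem == 'walk'
```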

.. Unsupervised learning

... Minimum Description Length

In an information-theoretic adaptation of Harris’ () ideas, Goldsmith (b, ) formalizes the distributional hypothesis within the Minimum Description Length framework (MDL, Rissanen ). The task of morphological induction is modelled as a data compression problem: find the battery of inflectional markers forming the shortest grammar that best fits the empirical evidence. For Goldsmith, a grammar is a set of ‘signatures’, and a signature is a list of affixes selected by a set of stems. For example, the list is a signature for the stems count, drink, mail, and sing. The observed evidence is a text corpus.

The task is a top-down global optimization problem and boils down to a grammar evaluation procedure. Given a set of candidate affixes, their probability distribution in a corpus, and their partitioning into signatures, MDL requires minimization of: (a) the length of the grammar (in terms of the number and size of its signatures); (b) the length of the corpus generated by the grammar (i.e. the set of inflected forms licensed by the grammar according to a specific probability distribution). MDL thus disfavours two descriptive extremes: (i) a verbose, photographic model of the corpus, where each word form makes a paradigm of its own; and (ii) a very compact but unconstrained model, with one overall paradigm only, where any verb can combine with any affix according to the product of their independent probability distributions. The first model is equivalent to a list of words with their (relative) frequencies. The second model wildly overgenerates, outputting *goed for went, *stricked for struck, *bes for is, etc. Goldsmith gives criteria for weeding and merging signatures, and for evaluating the resulting grammar iteratively. However, the two steps of the algorithm (morphemic splitting and grammar evaluation) make no contact.
There is no way to dynamically adjust splitting criteria to the morphological structure of an input language.
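The trade-off MDL is asked to arbitrate can be caricatured with a deliberately crude, character-counting notion of description length (the real model uses probabilistic code lengths; the stem and suffix sets below are invented for the illustration):

```python
# Crude MDL illustration: measure grammar size in characters.
# Model A lists every inflected form separately; Model B factors the
# same vocabulary into one signature (shared stems + shared suffix list).

words = [s + suf for s in ("count", "drink", "mail", "sing")
                 for suf in ("", "s", "ing")]

# Model A: a verbose, "photographic" word list.
length_word_list = sum(len(w) for w in words)

# Model B: one signature -- each stem and each suffix written once.
stems, suffixes = ("count", "drink", "mail", "sing"), ("", "s", "ing")
length_signature = sum(len(s) for s in stems) + sum(len(a) for a in suffixes)

# The factored grammar (22 characters) is much shorter than the raw
# word list (70 characters), so MDL prefers the signature.
```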

... Features and classes

In ‘features-and-classes’ models (Hammarström and Borin ), a word is represented as a set of redundantly specified features (n-grams), which have no internal structure and may be order-independent (Mayfield and McNamee ; McNamee and Mayfield ; De Pauw and Wagacha ). For example, a word like library can be represented as a set of trigrams (such as #li, lib, ibr, bra, rar, ary, ry#, where ‘#’ is a word boundary marker). A majority of features in any such list may not be morphologically relevant, but some of them are. In the end, irrelevant features will have no discriminative power, as they are associated with too many diverse lexical units. More specific features will, however, be anchored to morphological information and trigger morphological generalizations. The general idea is to consider each set of n-grams as forming a class of its own. We can then use a standard maximum entropy classifier (}...) to estimate how well each feature predicts its own class, and then use the classifier to estimate the k closest lexical neighbours of a given target word. From our perspective, the most interesting aspect of ‘features-and-classes’ approaches is that they are, in principle, able to address the problem of bootstrapping non-segmental processes in introflexive (root-and-pattern), tonal, and apophony-based morphologies.
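A boundary-marked n-gram decomposition of the kind just described is easy to generate; a minimal sketch (the helper name is mine):

```python
def char_ngrams(word, n=3, boundary="#"):
    """Decompose a word into overlapping character n-grams,
    with '#' marking the word boundary."""
    padded = boundary + word + boundary
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

features = char_ngrams("library")
# ['#li', 'lib', 'ibr', 'bra', 'rar', 'ary', 'ry#']
```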

... Adaptive word coding

A fundamental ability at the heart of the human language faculty is to recognize position-independent patterns in symbolic time series, like the word book in handbook, the Arabic verb root embedded in kataba ‘he wrote’ and yaktubu ‘he writes’, or the German verb stem shared by machen ‘make’ and gemacht ‘made’ (past participle). The vast literature on working memory (Baddeley ) and visual word recognition (see Henson ; Davis ; Norris ) has addressed this issue by investigating models of word coding. In Davis’s () ‘spatial encoding’, a letter’s position is represented by an activation gradient, increasing with letter position and subject to uncertainty. Two neighbouring words like stop and post activate the same letter receptors, with the different letter arrangements causing relatively different activation patterns (Figure .). Due to position uncertainty, letters can be perceived as shifted from their real position, with words with adjacent transposed letters (e.g. stop and sotp) being perceived as more similar than words with non-adjacent transposed letters (stop and ptos). Spatial coding proves to fit the effects of competition for lexical selection within densely populated neighbourhoods, as well as priming effects between neighbours (Davis ).

Figure .. Examples of spatial coding for the words STOP and POST. Letter order is represented as a gradient of activation over letter nodes subject to uncertainty (error bars). Letter receptors are ordered alphabetically. Source: Adapted from Davis ().

Pirrelli and colleagues (Pirrelli, Ferro, and Calderone ; Pirrelli, Ferro, and Marzi ; Marzi et al. ) model lexical memories through Temporal Self-Organizing Maps (TSOMs). Nodes in a TSOM develop sensitivity to symbols in specific temporal contexts. Input words thus activate chains of more or less specialized nodes, whose topological proximity on the map correlates with input similarity (Figure .).

Figure .. Examples of topological coding for German KOMMEN ‘come’ (infinitive/P and P present indicative), GEKOMMEN ‘come’ (past participle), and KAM ‘come’ (S and S past tense).

Principles of correlative learning (Rescorla and Wagner ) make the connection strength between any two nodes contingent on how often the nodes are consecutively activated in training. As a result, high-frequency words tend to be associated with specialized, strongly connected nodes (Figure .a), while low-frequency words are associated with shared nodes reached by multiple, weak connections (Figure .b). The difference has interesting effects on acquisition. Figure . shows the comparative pace of paradigm acquisition of two TSOMs trained on German verbs under two regimes: (a) input words are presented equally often (white circles), and (b) the same input words are presented with CELEX frequencies adjusted into the [, ] interval (dark circles) (Marzi, Ferro, and Pirrelli ; Marzi et al. ). Word frequency (dark circles) inversely correlates with time of acquisition in less regular paradigms (sein and wollen), but shows no correlation in regular paradigms (brauchen). Notably, members of a regular paradigm are acquired nearly instantaneously, whereas the time lag between the earliest and the latest acquired word in an irregular paradigm stretches over a few learning epochs. The evidence shows that irregulars are learned item-wise, while regulars are learned through shared patterns of morphological structure both within and across paradigms. The data comply with the information-theoretic conjecture that the uncertainty in guessing the realization of a paradigm cell given another cell measures the challenge that a morphological system poses for learning (Ackerman, Blevins, and Malouf ; Ackerman and Malouf , ).
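The role of transition frequency in correlative learning can be caricatured in a few lines. This is a schematic Rescorla–Wagner-style update over symbol-to-symbol transitions, not the TSOM architecture itself; the ‘#’ and ‘$’ boundary symbols follow the conventions above, but the rate, the number of epochs, and all names are invented for the illustration.

```python
def train_connections(words, rate=0.2, epochs=10):
    """Strengthen the connection between consecutively activated
    symbol nodes; frequent transitions approach full strength."""
    strength = {}  # (from_symbol, to_symbol) -> connection weight in [0, 1]
    for _ in range(epochs):
        for word in words:
            for a, b in zip("#" + word, word + "$"):
                w = strength.get((a, b), 0.0)
                # Move the weight toward 1 in proportion to the error term.
                strength[(a, b)] = w + rate * (1.0 - w)
    return strength

# A transition attested in every word ('#' -> 'k') ends up stronger than
# one attested in only one word per epoch ('k' -> 'o').
s = train_connections(["kam", "kam", "komme"])
```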


Figure .. A word node tree (a) and a word node graph (b) representing the German forms ‘#KAM$’ (kam), ‘KaeME$’ (käme), and ‘#KOMME$’ (komme). Vertices are specialized nodes and arcs stand for weighted connections. ‘#’ and ‘$’ are respectively the start-of-word and end-of-word symbols.

Figure .. The time course of word acquisition (x-axis: learning epochs) in three German verb subparadigms (brauchen, wollen, sein) under two training regimes: realistic distributions (dark circles) and uniform distributions (white circles). Forms are ordered top-down by decreasing token frequency (reported in brackets), from braucht (18), will (84), and ist (1001) down to brauchend (1), wollend (1), and seiend (1). Source: Marzi, Ferro, and Pirrelli ().

... Discussion

The machine learning literature offers two basic strategies for morphology induction: segmentation and classification. The first one starts from the assumption that morphemes are segmentable units, and defines global criteria to optimize segmentation. Size and number of inflectional paradigms are among these criteria (}...). It is noteworthy that information-theoretic measures of data compression contributed to rekindling the notion of a paradigm in morphological inquiry, emphasizing the central role of the word (as opposed to the morpheme) as the optimally sized unit for morphological description (see also Blevins ; Geertzen, Blevins, and Milin ).


The second approach dispenses with the segmentation assumption: lexical representations are redundantly specified, so that discontinuous patterns of morphological structure can emerge. Stochastic algorithms prove able to prune such a chaotic coil of multiple paths. Interestingly, n-gram based models of lexical representation such as those described in }... are in keeping with independently motivated psycholinguistic models of letter position coding, such as Grainger and Van Heuven’s open bigram model () and Baayen and colleagues’ a-morphous lexical representations (). While modelling letter positions with bigrams remains a controversial issue (Kinoshita and Norris ), we find it unlikely that speakers fail to home in, through learning, on stable lexical representations reflecting global constraints of paradigmatic organization. Cognitively more plausible word-coding models, such as those found in the rich literature on working memory and visual word recognition (}...), provide valuable insights into these lexical representation issues.
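For concreteness, an open-bigram code of the kind proposed by Grainger and Van Heuven represents a word by its ordered letter pairs, adjacent or not. The sketch below is an unconstrained variant (some versions limit the allowed letter distance), with names of my own:

```python
from itertools import combinations

def open_bigrams(word):
    """All ordered letter pairs of a word (unconstrained open bigrams)."""
    return {a + b for a, b in combinations(word, 2)}

# Transposed-letter neighbours share most of their open bigrams, which is
# consistent with 'stop' being perceived as more similar to 'sotp' (adjacent
# transposition) than to 'ptos' (non-adjacent transpositions).
shared_sotp = open_bigrams("stop") & open_bigrams("sotp")
shared_ptos = open_bigrams("stop") & open_bigrams("ptos")
```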

. S   

..................................................................................................................................

Word storage and processing have traditionally been modelled according to distinct theoretical paradigms, in line with the Declarative/Procedural model of lexicon and grammar (Pinker and Ullman ) and the architectural specifications of traditional desktop computers. Computational tools like FSAs, FSTs, hierarchical lexica, stochastic classifiers, and dynamic memories support a considerably less modular view, in which patterns of combinatorial (rule-like) and lexical knowledge coexist and interact. This is quite an active interdisciplinary research area, and any conclusions drawn are inevitably tentative and bold. Nonetheless, here are a few take-home points.

Both rule-like and idiosyncratic information can, and need to, be cast in the same representational framework. This seems a sensible step to take from an acquisitional perspective: morphological generalizations are better conceptualized as derivative of principles of storage and self-organization of stored items, especially when words tend to cluster in a variety of more or less regular classes rather than being dichotomized between default and exceptional items.

If, as suggested by connectionist models, processing and memory are mutually implied (Wilson ; D’Esposito ), the pre-compilation of whole-word memory ‘chunks’ in the mental lexicon is beneficial for on-line processing. It enhances the hearer’s capacity to predict an upcoming input word (Chersi et al. ; DeLong, Urbach, and Kutas ; Federmeier ), and to maintain longer and more complex words in working memory (Ma, Husain, and Bays ). Although item-wise storage offers a competitive advantage in processing and during early lexical acquisition, it may not be the most effective strategy for acquiring the long Zipfian tail of hapaxes making up the bulk of our lexicon. Units smaller than words come in handy for processing novel, rare, or noisy input, and whenever memory resources (both long- and short-term) break down. In the face of such conflicting processing requirements and complementary contributions, the best possible solution in a biological system like the brain is to entertain different strategies at the same time, and let them compete on different tasks (Post et al. ; Libben , ).




This is not to imply that specialized knowledge and rule-governed behaviour operate identically. It is one thing to access a word stored as a whole, and another to access a word by activating overlapping memory patterns such as those pictured in Figure .b. Graph-like memory structures make special processing demands, causing the simultaneous activation of competing lexical entries, which in turn calls for a process of output filtering by selective inhibition and control.

The study of lexical self-organization is laying further emphasis on the dynamics of word co-activation and competition for lexical access (see Magnuson, Mirman, and Harris ; Dell and Cholin  for recent overviews) and on the role played by paradigmatic relations in the human perception of morphological structure. Recent information-theoretic investigations of the structure of neighbour families and word paradigms have substantially contributed to our understanding of their complexity, shedding light on their role in a number of word processing tasks (Kostić, Marković, and Baucal ; Moscoso del Prado Martín, Kostić, and Baayen ; Milin, Filipović Đurdjević, and Moscoso del Prado Martín ; Ackerman, Blevins, and Malouf ). The trend fits well with developments in distributional lexical semantics (Moscoso del Prado Martín ), foreshadowing prospects of methodological convergence between usage-based (cognitively oriented) and performance-based (behaviourally and psycho-computationally oriented) research in lexical modelling. Such a wealth of cross-disciplinary convergence appears to boil down, in the end, to a vindication of Hockett’s () prescient concerns: the deeper our understanding of word processing issues, the more accurate our knowledge of theoretical constraints on word structure.


  ......................................................................................................................

              ......................................................................................................................

  

. Introduction

..................................................................................................................................

In a study comparing American Sign Language (ASL)1 to English, Bellugi and Fischer () found that it takes twice as long to articulate isolated signs as it takes to articulate isolated words; nevertheless, a given proposition can be articulated in either language in the same amount of time. How can that be? ASL makes up the time difference in at least two ways: via the rate of signing and the number and duration of pauses (Grosjean ), and via simultaneous delivery of multiple bits of information (Wilbur ), called layering. The first way involves phonetic matters outside the domain of morphology. Layering, on the other hand, is crucial to morphology.

Layering is interpreted via ‘vertical processing’—processing various input types presented simultaneously. This contrasts with ‘horizontal temporal processing’—processing temporally sequential inputs. Sign languages are superior to spoken languages at vertical production because they have multiple articulators and use the phonology in meaningful ways. They are superior at vertical processing because this is a task where vision is superior to audition (Brentari ). The types of vertical production that sign languages exhibit do not interrupt the base form of words or involve discontinuous morphemes, making them easier to process than non-concatenative processes in spoken languages (Emmorey ). We might expect layering, then, to be common in sign languages, and research over the past two decades confirms that expectation (Aronoff, Meir, and Sandler ; Vermeerbergen, Leeson, and Crasborn ).

1 I give the rubrics for sign languages in English: Deutsche Gebärdensprache, for example, is referred to as GermanSL. The one exception is American Sign Language where the standard acronym is used: ASL.




This chapter begins with the robust contributions of sign phonology to morphology, due largely to modality. I next point out two theoretical issues in sign language morphology, one not found in spoken language morphology and one more extreme than its spoken-language counterpart. Next I outline horizontal and vertical morphological processes. Finally, I turn to the morphosyntactic process of verb agreement.

. E    

..................................................................................................................................

An overview of relevant phonological basics will allow us to understand the robust contributions of phonology to the lexicon and to signs created in conversation.

.. Sign phonology basics

Sign languages have five articulatory parameters: handshape, location, movement, orientation, and non-manuals (Stokoe ).

Handshape means the configuration that the digits assume. Some handshapes are easy to make (these unmarked handshapes occur frequently in all sign languages), others are trickier, and others are difficult (these occur only in a few languages, and then rarely). While handshapes can carry meaning (as discussed immediately below), a sign can be identified well enough by its other parameters, particularly movement (Poizner, Bellugi, and Lutes-Driscoll ), so that if one uses an unmarked handshape throughout a whole sentence, the sentence can still largely be understood. For example, in a BritishSL story, Richard Carter uses the flat hand with the fingers not spread (the B-handshape) to make utterances signed by an owl (Sutton-Spence and Napoli ).2

Location means the place where the articulating hand(s) is/are located. If a sign moves the hand(s), it will have a starting and an ending location. Locations can be places on the body (typically from the top of the head down to the hip and along the non-moving arm/hand), as well as neutral space. Neutral space is the area directly in front of the signer.

Movement means the movement of the hand(s). Primary movement (via shoulder and/or elbow articulation) forms a path. Secondary movement (e.g. finger wiggling) has no path.

Orientation concerns the direction the palm faces. Important also is facing: the direction that the leading hand part points toward as the hand moves (Liddell and Johnson ; Meier ).

Non-manual articulations are made by the head, parts of the face, the shoulders, and the torso. Most non-manuals (unlike the other parameters) are not specified in the lexical entry of a sign, but add separate information (lexical or functional). Typically non-manual articulations accompany manual ones (unless they are gestures, like a frown, as in spoken languages).
However, some signs have separate manual and non-manual counterparts (Dively ).

2 Here I mentioned the ‘B-handshape’; linguists conventionally label handshapes with letters of the Roman alphabet or with numbers. A partial list of handshapes found in ASL appears in this Wiktionary entry: http://en.wiktionary.org/wiki/Appendix:Sign_language_handshapes.


 .. ASL sign 

Signs can be one- or two-handed. The hand used for one-handed signs is the dominant hand: ‘H1’. The nondominant hand is ‘H2’. In two-handed signs either H2 is immobile, or both hands move. If H2 is immobile, H1 typically contacts or moves close to H2, which, typically, has an unmarked handshape (Battison ). If both hands move, it is usually with reflexive symmetry across the sagittal plane, sometimes with alternation (180 degrees out of phase) or glide (one hand higher or moved forward), but longitudinal or transverse reflexive symmetry is possible. Rotational and, rarely, translational symmetry also occur (Napoli and Wu ). In Figure ., we see the ASL sign , which is articulated with H1, with the G-handshape, starting location in front of the forehead and ending location in front of the mid-chest, movement straight down, palm and extended fingers oriented toward the signer, and lips tightly rounded.3

.. Phonological parameters and meaning

Phonological parameters (and their component features) in sign languages can be meaningless elements, just as in spoken languages (Sandler , and see references within). However, a parameter can also carry sense. This is common among the non-manuals. It is also common among the manuals, but in more restricted ways: via ion-morphs and iconicity.

... The non-manuals

I begin with mouth actions (Boyes Braem and Sutton-Spence ), which are often part of a whole-face gesture showing affect, such as dismay or happiness. Other times they are an obligatory part of the sign (as in  in Figure .). One can (and many do) mouth the spoken word while manually signing it. However, the mouthing can also be of a word distinct from the manual sign, creating a kind of compound unique to sign languages, in which the components are simultaneous. With the sign for ‘eat’4 in NetherlandsSL one can mouth brood ‘bread’, yielding ‘bread-eating’;5 the sign for ‘mouse’ in BritishSL can be accompanied by the mouthing baby, to mean ‘baby mouse’ (Crasborn et al. : ).

3 All figures are reprinted from www.lifeprint.com with the kind permission of Bill Vicars.
4 Examples are in the form presented in the cited literature.
5 This is my translation. The source had ‘eat bread’, which could be mistaken as phrasal.




Other mouth actions modify a sign’s sense, typically regarding manner or degree (Liddell ; Meir and Sandler ): in some languages (including ASL and BritishSL) protruded lips can indicate ease in an action, and puffed cheeks can indicate large size of a referent (Sutton-Spence and Woll ). Like other phonological features, mouth actions can spread across neighboring signs: in NetherlandsSL   ‘mean joke’ exhibits tongue protrusion with  that spreads throughout the noun phrase (Crasborn et al. : ).

In general, mouth actions convey lexical information, and other non-manual articulations convey syntactic, discourse, or pragmatic information (DanishSL, Engberg-Pedersen ; ASL, Aarons et al. ; for a cross-linguistic overview, Pfau and Quer ), including negation (in GreekSL, GermanSL, and others; Klima and Bellugi ; Sutton-Spence and Woll ; Antzakas and Woll ; Herrmann and Steinbach ) and question marking (Wilbur ). So, while nose wrinkling (in ASL and other sign languages) can intensify an adjective, particularly a pejorative one, this might be a facial gesture. A true exception involves HungarianSL: mouthings of spoken words add inflection, typically person, number, possession, and case (Rácz-Engelhardt ). (And see comments on EstonianSL reduplication in §..) Also, recent work on Kata Kolok, a village sign language of Indonesia, shows mouth actions that indicate verb aspect and polarity (de Vos ). Judging by the illustrations given, I believe the mouth action coincides with another non-manual articulation, with the possible exception of the evidential marker (a stiffened upper lip). If this is so, it would confirm other recent research showing that mouth shape gets involved in functional information, but only in conjunction with other non-manuals (Benitez-Quiroz et al. ).
This cross-language division between mouth actions and other non-manuals is interesting; it is not a general truth across spoken languages that certain types of phonological segments are involved in derivation while others are involved in inflection. Since the mouth has far greater articulatory range and detail than other non-manuals, perhaps physiology plays a role.

... Ion-morphs

Signs can fall into families linked by sense and phonology, in which up to three of the manual parameters are the same, but at least one differs (Frishberg and Gough ). In order to understand this, we first need to discuss how handshape relates to the alphabet.

So long as a sign language has a manual alphabet, a sign can be fingerspelled. The frequency of fingerspelled items varies from high (NewZealandSL, McKee and Kennedy ), to moderate (ASL, MacFarlane and Morford ; Padden and Gunsauls ), to nonexistent (TaiwanSL, Fischer : ), and by region and age (BritishSL, Sutton-Spence, Woll, and Allsop ; AustralianSL, Schembri and Johnston ). Often a fingerspelled sign becomes lexicalized (dynamics/timing changes, Wilcox ; coarticulation occurs, Battison ; Brentari and Padden ; Jerde, Soechting, and Flanders ; Keane, Brentari, and Riggle ). The factors affecting the likelihood of lexicalization are phonological (Cormier, Schembri, and Woll ). Sometimes a lexicalized fingerspelling remains distinguishable from native signs in its morphophonological behavior (Padden ); at other times it becomes indistinguishable (TurkishSL, Kubuş ).

With one-handed alphabets (but not two-handed alphabets, Adam : ), often the handshape of a sign will be the letter of the manual alphabet that corresponds to the first letter of the written word in the ambient spoken language; the sign for ‘milk’ in MexicanSL uses the L-handshape since the word for ‘milk’ is leche. ‘Initialization’ is used heavily by some languages (MexicanSL, Faurot et al. ) and hardly at all by others (GreekSL, Kourbetis and Hoffmeister ). It is common in ASL name signs (Supalla ; Lucas, Bayley, and Valli ; Stephens ). Sometimes initialized signs can have a handshape change, where the second handshape corresponds to some other letter of the written word ( in ASL has L>S;  has B>S, Mirus, Fisher, and Napoli ). Additionally, we find acronyms: in ASL the I-L-handshape is used for the sign -, where we see ‘I’ and ‘L’ with a superimposed ‘Y’.

Initialization plays another role in sign morphology. In ASL the signs , , , ,  and more are made with the same manual parameters except that the handshape varies from C to F, T, G, or S, and so on. The set of movement, location, and orientation parameters in that lexical family conveys the sense ‘group’; it is an ion-morph, a partially complete morpheme which needs to attract another parameter (here handshape) in order to be complete; the handshape indicates the particular group (Fernald and Napoli ). Figure .a shows all the parameters for , while Figure .b shows the handshape parameter for , , and  at the outset of articulating these signs. Ion-morphs can get their general sense from just a single parameter; in ASL the side of the forehead is associated with cognition: , , , , , -, etc. For this extended family, the ion-morph consists of a fixed location, while movement, handshape, and orientation vary.

(a) Figure: ASL sign
(b) Figure: Handshapes for ASL signs

The side of the forehead can carry other senses, too. In ASL the family of kinship signs uses a contrast of location for gender: forehead indicates male and side of the jaw indicates female. Perhaps the sense of female associated with the side of the jaw is the reason why , , and  are made there. Likewise, features of the movement parameter can carry sense; in ASL the so-called  path shape is often used for place names, forming a large lexical family. While the discussion above draws mostly from ASL data, similar phenomena appear in AustralianSL (Johnston and Schembri ), BritishSL (Brennan ), DanishSL (Engberg-Pedersen ), ItalianSL (Pietrandrea ), IsraeliSL (Fuks and Tobin ), NetherlandsSL (Kooij ), and SwedishSL (Wallin ). Ion-morphs have similarities to two phenomena in spoken languages. First, phonaesthemes are common throughout Indo-European (Firth ) and Austronesian (Blust ) languages. For example, in Indo-European languages initial [st] is associated with hindered movement, physical (stay, stop, stumble) or mental/emotional (stupid, strict, staid); but not obligatorily (step, streak, strew). Likewise, in ASL the open--handshape is the fixed parameter of an ion-morph associated with the general sense of feelings: , , , as well as , , , by metaphorical extension from the physical to the psychological (Taub : ff.); but not obligatorily (, ). Second, consonantal roots and vowel melodies are common throughout Semitic languages (McCarthy ): a root with an underdetermined sense (such as ‘having to do with reading’) is identified by a given string of consonants, and details that allow a particular sense to emerge are supplied by a given string of vowels (the melody), where both are mapped onto a template that determines the arrangement of consonant and vowel sequences, to yield a well-formed word (meaning ‘book’, ‘read’, ‘write’, ‘scholar’).
Ion-morphs are similar: the fixed parameter or set of parameters carries a general meaning, while the varying parameter or parameters allow us to zoom in on the particular sense of the whole sign. (Similar morphological constructs in Athabaskan languages are called satellites, Faltz ; Fernald .)
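The combinatorics of an ion-morph (a fixed subset of the four manual parameters plus at least one open slot) can be made concrete with a toy sketch. The sketch below is only an illustration of the structure described above; the `IonMorph` class, the parameter values, and the 'group' family specification are invented placeholders, not attested descriptions of any sign language.

```python
from dataclasses import dataclass

# The four manual parameters discussed in the text.
PARAMETERS = ("handshape", "location", "movement", "orientation")

@dataclass(frozen=True)
class IonMorph:
    """A partially complete morpheme: some parameters fixed across a
    lexical family, at least one left open to be filled per sign."""
    sense: str
    fixed: dict  # parameter name -> value shared by the family

    def complete(self, **open_params):
        """Fill the open parameter slot(s) to yield a full sign specification."""
        missing = [p for p in PARAMETERS
                   if p not in self.fixed and p not in open_params]
        if missing:
            raise ValueError(f"unfilled parameters: {missing}")
        clash = set(self.fixed) & set(open_params)
        if clash:
            raise ValueError(f"cannot override fixed parameters: {sorted(clash)}")
        return {**self.fixed, **open_params}

# Hypothetical 'group' family: movement, location, and orientation are the
# fixed (family-defining) parameters; handshape is the open slot.
group = IonMorph(sense="group", fixed={
    "location": "neutral space",
    "movement": "horizontal arc",
    "orientation": "palms facing",
})

class_sign = group.complete(handshape="C")   # one member of the family
family_sign = group.complete(handshape="F")  # another member
```

The point of the sketch is that the family-level sense lives in the fixed parameters, while each member sign is individuated by how the open slot is filled; an ion-morph alone is ill-formed, just as `complete()` with no handshape raises an error.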

... Iconicity
If iconicity means a non-random connection between form and meaning, where the form brings to mind the meaning, sign languages exhibit it heavily. For example,  in ASL has an opening and closing of the jaw; the non-manual mimics the action of chewing gum with the actual articulators used in chewing gum. And, in general, word formation exploits the availability of two hands in iconic ways, encoding particular types of relationships, such as interaction, location, dimension, or composition (Lepic et al. ). For example, signs for ‘meet’ typically involve two hands moving toward each other because of the symmetrical semantic relationship, not just for phonological reasons; in fact, concepts that are ‘inherently plural’ typically involve both hands. But iconicity is also often metaphorical (ASL, Wilcox , Taub ; BritishSL, Woll ; FrenchSL, Bouvet ; IsraeliSL, Meir ; ItalianSL, Cameracanna et al. , Pizzuto et al. , Russo, Giuranna, and Pizzuto , Pietrandrea ; JapaneseSL, Ogawa , Herlofsky ). Let me offer an example my eye picks out. In ASL the signs , , and  form a family sharing location, movement, and orientation, but handshape varies by finger extension. ‘Suppose’ does not commit the signer to the proposition; the pinky extends. A thought shows more commitment; the index finger extends. Knowing commits the signer fully; all fingers extend. So size

and number of the extended fingers (physical property) indicate assertion strength (abstract property). Nevertheless, nonsigners guess correctly at only about  percent of signs in isolation and cannot guess the meaning of a conversation (ASL, Klima and Bellugi ; PolishSL, Fabisiak , cited in Łozińska and Rutkowski : ). But if they are told what a sign means, they can agree on the iconic basis (Bellugi and Klima ). So what is going on? Wilcox (, pointing to De Jorio  []) distinguishes between manual gestures and expressive gestures (facial articulations and movement dynamics). He lays out a route from manual gesture to lexical signs to grammatical (or functional) signs and a separate route from expressive gestures to prosodic markers and, again, grammatical morphology. His suggestions account for a wide range of iconicity throughout the grammar of sign languages. But recognizing iconicity depends largely on understanding the grammar of sign languages. Sign languages have two types of signs: so-called ‘frozen’ ones (ordinary lexical items), which can be found in a dictionary and which are often coined via initialization and ion-morphs, and so-called ‘productive’ ones (made up on the spot), which do not appear in dictionaries (Bellugi and Klima ; Russo ; among many). Lexical signs often start out iconic; after all, if you make up a sign for something, why not ‘draw’ it in the air to the extent possible? For example, signs for ‘elephant’ regularly involve moving the hand from the nose outward—in AdamorobeSL one moves a bent-L-handshape from the nose downward in an arc (Nyst : ). Still, many aspects of an object or an action could be chosen as the ‘iconic base’, so it is not surprising that the lexicon varies among sign languages/cultures. Further, even initially transparent signs over time yield to tendencies of the grammar that make their phonological shape more arbitrary (Frishberg ).
Contrasted with these are ‘productive signs’ created to express an entire predicate or even a whole event. Thus they are polymorphemic, iconic, and particular to the sign-act situation (Supalla ; Brennan , ). Concepts related to events—such as size, shape, location, source, theme, path, goal—all lend themselves to being represented in spatial language. These productive signs are called ‘classifier constructions’. For the sentence ‘My friend walks from the store to the park’, one might sign  , then  and spatially index it, then  and spatially index it, then do the classifier construction seen in Figure .. Here the ‘classifier’ handshape (discussed in the next paragraph) represents my friend, the initial location represents the store, the final location represents the park, and the movement of the hand represents walking from one to the other (Liddell , ).

Figure: ASL classifier construction for ‘Z walks from X to Y’

While Figure . is relatively simple, these constructions can be complex and are most readily understood through demonstration. To express ‘A leaf falls and then lies on the ground’, the -handshape might move downward in a side-to-side way and stop when it meets H, which has been waiting there with the forearm extended horizontally oriented palm-down with the flat-B-handshape; this is a description for ItalianSL (Russo : –) and, importantly, many other sign languages. The H handshape represents the leaf, while H represents the ground (Corazza ). The starting location symbolizes the tree; the ending location, the ground. The movement and orientation of H show the movement of the leaf and its position as it falls. Crucially,  in ItalianSL is a two-handed sign where both hands have the L-handshape. None of  ’ parameters appears in the classifier construction. So this is not an instance of the lexical sign moving through space. This sentence is a single sign with one set of phonological parameters. The handshapes are called classifiers because they could be representing some other entity with similar physical characteristics6 (however, that label does not align well with its use in spoken languages; Schembri ). For example, one might sign ‘A piece of paper falls and then lies on the table’ the same way, since a piece of paper is thin and flat like a leaf, and a table has a flat surface like the ground. For this reason, generally one articulates the lexical signs for the participants in the action first— and , or  and —and then signs the classifier construction. The classifier construction for the leaf event involves two entity ‘classifiers’ (Supalla : –); the iconic handshapes (H’s handshape ‘looks like’ the leaf and H’s handshape ‘looks like’ the ground). Not all classifiers are themselves iconic, however. 
In ItalianSL the -handshape represents a variety of moving objects, including people, vehicles, and objects in rapid sequence. To express ‘A car stops at a traffic light’, H would articulate the sign for ‘traffic light’ while H would assume the -handshape and move toward H and then stop (Russo : –). The handshape of H is arbitrary, but the rest of the phonological parameters in H’s articulation are iconic, symbolizing meaningful parts of the action. (Notice further that the two hands express different propositions simultaneously: ‘here’s a traffic light’ and ‘a car stops at it’, see Napoli and Sutton-Spence .) Another type is the ‘handling’ classifier. One might express moving a bucket by using a handshape that reflects how one carries it. The hand could close into a fist, palm down, to indicate that the bucket was grasped by a handle, then move from one spatially indexed location to another. Alternatively, the hands could face each other, cupping the air with thumbs extended to indicate that the bucket was grasped by its sides, and then move. One- and two-handed classifiers are found in many sign languages. ThaiSL has  one-handed classifiers,  two-handed classifiers where both hands assume the same handshape, and  two-handed classifiers where the hands assume different handshapes (Tumtavitikul, Niwataphant, and Dill ). Classifier constructions can lay the foundation for lexical signs. Padden et al. () show that for hand-held tools (combs, hammers, toothbrushes) different languages exhibit preferential patterning for either the handling classifier or, instead, an instrument classifier (where the handshape represents the tool) as a base for the lexical item. Al-Sayyid

6 A list of classifiers in ASL is given here: https://www.lifeprint.com/asl101/pages-signs/classifiers/classifiers-main.htm

BedouinSL and ASL prefer instrument, while New ZealandSL prefers handling. Thus sign languages differ in their overall preferences for how they classify words, as spoken languages do (consider count vs. mass nouns, for instance), and they show us new (visual) criteria for classifications. Most sign languages use classifier constructions heavily and in similar ways, but not all. AdamorobeSL has (little to) no entity classifiers (Nyst ). Cogill-Koetz (a, b) argues that classifier constructions are not linguistic but, rather, belong to a system of visual representation that is part of human communication. Schembri, Jones, and Burnham () study AustralianSL, TaiwaneseSL, and the gestures of nonsigners, and show that handshape may be morphological, while location and movement may be gestural, suggesting that classifier constructions are a blend between language and gesture. Benedicto and Brentari (), however, argue that classifiers in ASL are distinguished phonologically and syntactically according to the syntactic valency of the predicate. The various kinds of iconicity described here fall outside the common presumption that there is an arbitrary relationship between form and meaning, which has been with us since the work of Ferdinand de Saussure (), but has been argued to go back as far as Aristotle (Richards ). When we look outside the Indo-European language family, however, we find that iconicity includes not just onomatopoeia and somewhat episodic and exotic associations between particular phonemes and senses (Hinton, Nichols, and Ohala ; Haynie, Bowern, and LaPalombara ); it also includes robust patterns of sound–meaning associations that tap into the “sensory, motor and affective experiences as well as aspects of the spatio-temporal unfolding of an event” (Vigliocco, Perniss, and Vinson : ).
We also see these patterns in Japanese, Korean, Southeast Asian languages, sub-Saharan African languages, Australian Aboriginal languages, South American indigenous languages, and Balto-Finnic languages (for references and discussion, Perniss, Thompson, and Vigliocco ). In fact, it has been argued that direct quotations are iconic (Davidson ), as are reduplications (Fischer ), so all languages use iconicity (Michelucci, Fischer, and Ljungberg ). Still, iconicity is pervasive in sign languages, greatly facilitated by the visual possibilities that come from being able to see the large and versatile movements of manual and non-manual articulators. The difference is one of degree (Meir ).

. T   

Scholars of sign language morphology confront many of the same issues that scholars of spoken language morphology confront (as §. and the following sections show). Still, there are new issues that face them, and familiar issues that are particularly extreme. I have left one of these for §.: agreement.

.. New issue: Complexity vs. simplicity
Unlike spoken languages, sign languages tend to have many morphological commonalities. Those commonalities are of two contrasting types. On the one hand, sign language morphology appears complex, given verb inflection (see §.) and classifier constructions

(see §.). On the other hand, sign language morphology appears simple in that there is little affixation (see §.) and the affixes that do appear seem to have evolved from lexical items via grammaticization (Campbell and Janda ), are derivational, and differ from language to language. As Aronoff, Meir, and Sandler (: ) say, “Sign languages seem to present the impossible combination of Navajo-like and Tok-Pisin-like languages, a typological puzzle.” Their account of this apparent paradox lies in the modality and history of sign languages. The modality allows for layering and iconicity, yielding a rich morphology. But, like creoles, sign languages are young. The oldest sign language communities are under  years old, and many are fewer than  years old. Since affixes are the product of grammaticalization, it takes time for affixes to develop—young languages simply have not had enough time yet. Further, deaf children tend not to learn sign language from their parents (most of whom are hearing), but, instead, from inconsistent and impoverished input—just as first-generation children forming a creole in a pidgin environment do. Attributing the common complexities of sign language morphology to modality is wellaccepted. However, attributing the common simplicities (the characteristics of affixation) to their youth is a new hypothesis. Earlier work argued that the paucity of affixation was due to modality; after all, each affix theoretically adds time, and, as noted at the outset of this chapter, time is of the essence in sign language morphology (Bellugi and Fischer ; Klima and Bellugi ; Meier ; Emmorey ; Meier and Willerman ; for an overview, see Sandler and Lillo-Martin : Unit ). Which is the most empirically adequate account of affixation? We return to this matter in §..7

.. New issue: reactive effort
The articulators in sign languages are larger and heavier than the articulators in spoken languages. In studies of signs in which the two hands move in reflexive symmetry across the sagittal plane, both in phase and in alternation, we find that movements likely to induce torque, and thus make the torso twist or rock, require reactive effort to hold the torso stable. Since there is a drive toward ease of articulation in sign languages just as there is in spoken languages (Napoli, Sanders, and Wright ), such signs appear with far less frequency across the lexicon than would be expected if they were randomly distributed (Sanders and Napoli a, b, where the first study covers three sign languages and the second an additional twenty-four). Thus biomechanical factors influence the shape of the lexicon in sign languages. So far as we know, no similar claims have been made about spoken language lexicons.

.. Familiar issues: roots and lexical categories
The definition of ‘root’ in a sign language is problematic. For spoken languages, a monomorphemic nonaffix is a root (Selkirk : ). But monomorphemic signs are

7 Similar considerations come up in syntax. Sign languages all allow the word order SOV and all (so far) have been argued to have underlying order of SOV or SVO, as is the case for creoles. Napoli and Sutton-Spence () consider a modality vs. a young-language account, and argue for the former.

harder to come by. As we saw in §., any one or more of the phonological parameters might be iconic in some way or might be connected to a sense via an arbitrary factor (such as initialization), blurring the distinction between phonology and morphology. Even a noun as simple as  can be seen to include the sense ‘male’ as separate from the rest of its meaning (by contrasting it to ); likewise color terms like  can be seen to include the sense ‘color’ as separate from the rest of its meaning (by contrasting it with  and others). Further, we will see in §§. and . that unarguably complex signs tend to have the rhythmic and timing properties of all other signs, so we get no help from prosody. How, then, do we distinguish between simple and complex signs? Few have addressed the issue head-on. An exception is Wilbur (: f.), who claims there are two kinds of derivational processes: ones that apply at the root level and ones that apply at the word level. Word-level derivational processes affect path movement, while root-level derivational processes affect only local movement. This approach runs into problems, however. Consider ASL , which is derived from ASL  (Frishberg ):  has finger wiggling and  has a spritz (an abrupt opening of the fist). So this derivation affects only local movement and therefore this process is root-level. But the signs  and  are likewise related via the same process (from finger wiggling to a spritz), where  is open to a polymorphemic analysis (H2 being a classifier for an object being looked at and H1 showing the study action). So now that same process, which affects only local movement, is applying at the word level. Further, this method of distinguishing roots is not widely applicable, since derivational processes that affect only local movement are few and do not occur with most signs. My position is that the notion of root has yet to be proven pertinent.
Instead, morphology makes reference to two units: the ion-morph (which has similarities to Semitic roots, as noted earlier) and the ‘simple sign’ (a notion developed in §.), which has a visual unity dependent upon having only one set of phonological parameters and, probably, a characteristic duration. Nevertheless, sign language linguists use the term ‘root’, and I have done the same in my discussion of affixation and reduplication for ease of exposition. Standards for identification of lexical categories can, likewise, be elusive (Meir ), as in spoken languages (Haspelmath ). No particular question is prominent in the sign language literature; rather, all categories are problematic. First, for the three major categories of V, N, and A, tried-and-true diagnostics (co-occurrence of a numeral with a sign for nouns; distinction between negative markers for verbs and adjectives vs. nouns, and so on) that hold in one language (ASL, Padden ) do not hold for another (Indo-PakistaniSL, Zeshan ; GermanSL, Erlenkamp ). Second, identification of prepositions has not been a major concern in the linguistic literature, so far as I know, perhaps because their use is minimal: the phonological parameters (particularly location and movement) of a classifier construction (Emmorey et al. ) or dynamic characteristics (speed, rhythm) of a lexical verb (Wilbur ) do the job undertaken by a PP or a case-marked NP in a spoken language. Additionally, iconicity can interfere in category identification: in classifier constructions agent, theme, goal, action, and other information roll up into a single sign (Zeshan ).
Schwager and Zeshan () look at semantic, syntactic, and morphological behavior in three sign languages and conclude that lexicalization is culturally determined, that a large database is necessary to figure out which lexical categories get mapped onto which syntactic functions, and that morphological operations are more useful in recognizing categories in

languages that have a larger array of such operations to draw on (RussianSL and GermanSL, in contrast to Kata Kolok). But they suggest universal lexical category identification in sign languages will become possible as we study larger databases. In the spirit of their optimism, this chapter plunges ahead.

. M 

Given the discussion in §., you might expect no horizontal temporal processes in sign languages; you would be wrong. Sign languages exhibit both horizontal and vertical morphology.

.. Horizontal temporal morphology
Affixation, compounding, and reduplication are horizontal in that they add phonological parameters over sequential time. The first two, however, turn out to adjust movement in ways that make the overall time it takes to articulate the sign no greater than the time it takes to articulate the root (in the case of affixation) or the first element (in the case of compounds). In this way they are like cliticization in IsraeliSL, where a complex word obeys well-formedness constraints on simple signs (Sandler ). In contrast, reduplication truly lengthens the duration of a sign.

... Affixation
Affixation is uncommon in sign languages; a language might have no affixes (SwedishSL, Bergman ), a couple (IsraeliSL, Meir and Sandler ), or a handful (ASL, Sandler and Lillo-Martin ). All have been argued to be derivational, with the exception of certain verb agreement markers in GermanSL (Glück and Pfau ; but see Keller , who argues these morphemes are pronouns). Examination of the IsraeliSL negative suffix ‘not-exist’ allows us to delve into the issue of affixhood identification, as problematic for sign languages as for spoken languages (Haspelmath ). ‘Not-exist’ attaches to nouns or adjectives to derive adjectives (Meir : ): +- ‘uninteresting’, +- ‘unimportant, insignificant’, +- ‘isn’t worth it’. It is similar in form to the lexical sign -, from which it probably derives. Meir () lists reasons why this morpheme is best analyzed as a suffix rather than an independent sign. First, whether it is one-handed or two-handed follows from whether the base sign is one-handed or two-handed, just as allomorphs of affixes in spoken languages depend upon the base form; but - is two-handed. Second, the movement of the morpheme differs from the movement in - in several ways, including that it is shorter in duration and has a shorter path; sometimes the base sign and suffix are produced with a single movement, a common reduction in word-formation processes (Liddell and Johnson ). Third, the sense is unpredictable, a characteristic of derivational morphology (but also compounding); +- means ‘doesn’t interest me at all’ rather than ‘surpriseless’, +- means ‘doesn’t care about it’ rather than

‘without enthusiasm’.8 Fourth, signs with this morpheme are not accompanied by the negative headshake typical of negative sentences, including those with -. But might these single signs be compounds, especially since compounding is common in sign languages (see §...)? Meir argues these signs are suffixations because:
• the suffix attaches to a wide range of base forms, whereas compounding is more restricted;
• the base determines the number of hands in the suffix, whereas this is less common in compounds;
• the suffix determines the category of the final form, whereas in compounds in IsraeliSL the first element is the head.
Unfortunately, analogous arguments may not carry over to other sign languages. While most proposals of sign affixes concern suffixes, Aronoff, Meir, and Sandler () argue for the existence of ‘sense’ prefixes in IsraeliSL. The prefix is always one of five morphemes, meaning ‘ear’, ‘eye’, ‘nose’, ‘head’, ‘mouth’. Again, the phonological form of the prefix is integrated into the base. The strongest evidence that the resultant forms involve prefixation is their paucity and the fact that the resultant words are always verbs; still, a compound analysis exists that eliminates the need to add prefixation to the morphology of sign languages (Brennan ). Repeatedly, affixes are integrated into the base via movement changes in the base, the affix, or both. The result is a single-syllable sign with the duration of a simple sign (although the result of ‘sense’ prefixation in IsraeliSL is an iamb; Aronoff, Meir, and Sandler : ). For example, ASL  is  plus the agentive suffix (Aronoff, Meir, and Sandler ). The sign used to consist of a one-handed movement (from H-palm to the forehead) followed by a two-handed movement (downward in neutral space).
These days, in the Philadelphia area,  consists of a single path movement (from H-palm (slightly up and then) downward),9 just as the fusion of an affix to a stem in spoken languages can maintain the rhythm/timing of the stem (as in English business vs. laziness, where the first is a trochee like busy, but the second is a dactyl contrasting with the trochee lazy).

... Compounding
Compounding is common. Compounds typically have unpredictable meaning, and a movement that is the elimination of the movement parameter of both elements in favor of the transition movement from the location of one element to the location of the next (Klima and Bellugi : ). The resulting movement appears typical of a simple sign, with the same duration. BritishSL exemplifies this nicely (Sutton-Spence and Woll : ). The rhythm of a compound sign is more like that of a simple sign than of a sequence of two signs since:

8 The translations given here are from the source article. I don’t know whether they indicate that the complex sign is a verb rather than an adjective.
9 Further, the older version of the sign begins with H palm facing up and H palm facing down, and ends with both facing contralaterally. But in the newer version the palms are oriented contralaterally from the start. So the orientation of the affix spreads over the whole sign, as often happens in compounds (see §...).

• the hold at the end of the first element is lost;
• any repeated movement in the second element is lost;
• H2 of the second element is established and waiting at the time the first element starts;
• the duration of the first element is noticeably shorter than the second (regardless of which is the head);
• the transition between the two elements is rapid.

The BSL sign , for example, is ^ (‘^’ indicates compound juncture), where the outward movement of  is eliminated in favor of a simple downward transition movement to the location of . The first element is a one-handed sign while the second is two-handed; in the compound, the H2 of the second element is already in place as the whole compound initiates. So this compound has been lexicalized. In Figure .c we see the ASL lexicalized compound , which combines  (Figure .a) plus  (Figure .b). The initial location and orientation of the compound are determined by the first element, and the handshape is determined by the second. The movement is simply a transition from the location of the first element to the location of the second. The compound is two-handed like the second element.

Figures: ASL signs (the two elements and their lexicalized compound)
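The parameter-inheritance pattern just described for lexicalized compounds (initial location and orientation from the first element; handshape and handedness from the second; movement reduced to a transition between the two locations) can be sketched as a toy function. This is only an illustration of the generalization above, not a formal model from the sign language literature; the `fuse_compound` name and all parameter values are invented placeholders.

```python
def fuse_compound(first: dict, second: dict) -> dict:
    """Toy fusion of two sign specifications into a lexicalized compound.

    first/second: parameter dicts with 'location', 'orientation',
    'handshape', and 'hands' (1 or 2)."""
    return {
        "initial_location": first["location"],    # from the first element
        "orientation": first["orientation"],      # from the first element
        "handshape": second["handshape"],         # spreads from the second
        "hands": second["hands"],                 # handedness of the second
        # both elements' movements are eliminated in favor of a transition
        "movement": ("transition", first["location"], second["location"]),
    }

# Invented parameter values for a one-handed first element at the forehead
# and a two-handed second element in neutral space.
elem1 = {"location": "forehead", "orientation": "palm-in",
         "handshape": "B", "hands": 1}
elem2 = {"location": "neutral space", "orientation": "palm-down",
         "handshape": "5", "hands": 2}
compound = fuse_compound(elem1, elem2)
```

The sketch makes visible why the fused form has the duration of a simple sign: neither element's own movement survives; only a single transition between locations remains.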

Like spoken languages, sign languages offer examples in which the order of the elements is fixed: BrazilianSL ^ ‘house^cross’ means ‘church’, but the opposite order is incomprehensible or means something bizarre, like a cross used as a house (Figueiredo-Silva and Sell ). (This is the expectation if NN compounds are left-headed, although Figueiredo-Silva and Sell do not discuss headedness.) Complex compounds arise; ^^ ‘man^fix^electricity’/‘electrician’ exists beside ^ ‘mechanic’. As in two-element compounds, the movements of elements in complex compounds fuse. Al-Sayyid BedouinSL demonstrates fixed-order as well as free-order compounds (Meir et al. ): in elicitation both ^ and ^ were offered for ‘stove/range top’. However, conventionalized compounds exhibit a fixed order. The question of ordering of elements within a compound need not arise, however, since the presence of two manual articulators affords the possibility of simultaneous elements. In BrazilianSL ‘honeymoon’ is rendered by articulating  ‘sex’ with one hand and  ‘travel’ with the other (Rodero Takahira and Minussi ). The BritishSL compound  (a communication software) consists of one hand in the handshape of  and the other in the handshape of  (maintaining the finger wiggling of ), with one hand located above the other in neutral space (Brennan : ). However, in ordinary conversation both hands tend to assume one or the other handshape (Sutton-Spence and Woll : ). Also, recall from §. that a mouthing action allows compounds with simultaneous elements. That last kind of change regarding handshape is typical: handshape and orientation features spread (anticipation or perseveration, Liddell and Johnson ) in sequential and simultaneous compounding.
Given that movement changes as well, compounds, like affixation, undergo phonological changes that obscure iconicity in the contributing elements (ASL, Frishberg ; PolishSL, Łozińska and Rutkowski ), yielding an overall visual unity that makes them phonologically like simple signs (and see Sandler  for a formalization of length limits on signs). This fact is consistent with the production and processing needs of sign languages, mentioned in the introduction. Still, the fact that sequential compounds are numerous whereas affixes are few calls for explanation. As discussed earlier, Aronoff, Meir, and Sandler () attribute lack of affixation to youth; grammaticization takes time. Compounds, however, are not the product of grammaticization, so they can occur in young languages. In fact, compounds appear to be the source of affixes in sign languages, so the fact that only derivational affixation has been attested follows from the fact that compounding is derivational, not inflectional (pace Glück and Pfau ). Sign compounding, then, lends support to the Split Morphology Hypothesis (Anderson b). A final warning is in order, though: distinguishing between compounding and affixation is notoriously hard, since phonological fusion and semantic idiosyncrasy characterize both. Certainly, fixed order of root and affix is typical of affixation, and we do not expect one hand to articulate a root or stem while the other hand articulates an affix—so compounds differ here. But further work is needed.

... Reduplication

Reduplication is common, and has a wide range of uses, as in spoken languages (Wilbur ). Some signs have internal repetition. Generally, their form is root–root. In the sign for ‘Rome’ in ItalianSL, both hands have the H-handshape and the ulnar side of the extended fingers of H taps the radial side of the extended fingers of H twice. But reduplication can

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi




result in two, three, or more articulations of the root (Fischer ), where two or more iterations are not contrastive (Channon ). Reduplication for intensification can depend on utterance prosody, as in IsraeliSL (Nespor and Sandler ) and QuebecSL (Miller ). On the other hand, New ZealandSL exhibits reduplication for intensification that is not prosody dependent (Wallingford ). Reduplication also occurs in aspectual modifications of changeable-state verbs and adjectives (Klima and Bellugi ), as in SwedishSL (Bergman and Dahl : f.). Movement dynamics in aspectual reduplication reflect information about the predicate (Wilbur ; Rathmann ). Durative/stative aspect in ASL involves continuous loops of movement (Sandler : ), whereas iterative and perseverative-punctual aspects in MicronesianSL have stops between movement repetitions (Anderson ). The number of hands moving also matters, as in the allocative aspect in ASL (Klima and Bellugi ). In many sign (and spoken) languages, reduplication of a noun indicates plurality. Strategies for reduplication can depend on phonological factors (as in GermanSL, Pfau and Steinbach ; ItalianSL, Pizzuto and Corazza ; BritishSL, Sutton-Spence and Woll : –; and NetherlandsSL, Nijhof and Zwitserlood ; Harder ). In AustrianSL, some two-handed nouns can be pluralized by changing identical movement to alternating movement (Skant et al. : –). EstonianSL uses movement reduplication or hand reduplication to pluralize nouns (Miljan : ). In AustralianSL signers can reduplicate in multiple ways when pluralizing pointing pronouns, including using hand reduplication and repeating location (Johnston : f.). In these uses, reduplication is iconic; repetitions intensify or indicate multiplicity in a similar way to reduplication in spoken languages (Börstell ). 
This contrasts with noniconic movement changes, such as the dynamics of another intensifier in ASL: an initial hold, a tense quick motion, then an abrupt release (Frishberg , cited in Wilcox ). This is not iconic;  can be intensified this way. Iconic reduplication can occur in languages which are otherwise without inflection; thus it has been argued to be ideophonic (Bergman and Dahl , who compare a number of spoken languages to SwedishSL, but see also Pagy  on BrazilianSL). Ideophonic morphology is found in many spoken languages, including Kammu (in South-East Asia) and African languages of the Gbe group, as well as Klao and Ewe (Fischer ). Verbs can also undergo reduplication that changes spatial features, where these modifications reflect information about arguments (Wilbur ). If one did an action toward multiple individuals, one might repeat the sign, moving toward a different location each time. Multiple types of reduplication can be added to the same root. G in ASL can have durative aspect (reduplicative circling), and that form can have the distributive sense added (durative circling is repeated at each spatial location that the giving is distributed across), and, finally, if one continued to give repeatedly to each individual, iterative aspect could be added (making the whole thing repeat a few times) (Wilbur : figure ). Reciprocity can be indicated by ‘backward’ reduplication. In GermanSL, to indicate that the two of us gave each other flowers,  would be articulated with H moving forward while H moves backward (Pfau and Steinbach , ). Reduplication causes increased duration, so its occurrence goes against general morphological tendencies. One account would be to analyze the above instances as belonging to a visual communicative system other than language, since they involve ‘drawing in the air’ in


a spatially meaningful way (as in ‘backward’ reduplication) or a multiplicative way; on this view they are nonlinguistic. On the other hand, there are a few instances of derivational reduplication. Activity verbs that have simple movements can undergo trilling (rapid, repeated movement) to yield gerundive nouns (Padden and Perlmutter ). Time-expression nouns reduplicate to yield adjectives:  >  with a slightly circular movement, and  >  with a repeated brush of H (Paul : ). While these are open to an iconic explanation, other derivations are not: in ASL many verbs with a single long movement use two short movements to derive a noun ( > ) (Supalla and Newport ; and compare with Italian V > N reduplication, as in lecca lecca ‘lollipop’, lit. ‘lick lick’). Thus reduplication is an exceptional horizontal morphological process, at least in ASL. Still, like simple signs, reduplicated signs have only one set of phonological parameters, which is merely repeated.

.. Vertical morphology

This chapter started with a note on vertical morphology. Here, we return to two uses of vertical morphology: incorporation and blends.

... Incorporation

Many sign languages exhibit incorporation of numerals (GermanSL, Mathur and Rathmann ; ASL, Liddell ; JapaneseSL, Mathur and Rathmann , Fischer, Hung, and Liu , Ktejik ; TaiwaneseSL, Fischer, Hung, and Liu ; KenyanSL, Morgan ). H assumes the shape of a numeral while other parameters remain unchanged (but complications arise, as in Indo-PakistaniSL, Zeshan : ). Typical candidates for numeral incorporation include quantifiable units of time, money, age, and grades in school (but see Liddell : –). Numeral incorporation is idiosyncratic, with dialectal variation (Prillwitz and Leven : ), and the boundaries on the numerals can vary among paradigms and across speakers (Liddell : ). It can be blocked by phonological characteristics of the particular numeral, for example secondary movement such as the shake in ASL  or the flick in ASL  (ASL, GermanSL, and JapaneseSL, Mathur and Rathmann ; KenyanSL, Morgan ). Note that, generally speaking, incorporation is limited to numerals (though see Woodward and Desantis , who argue that there is negative incorporation in ASL, inherited from FrenchSL).

... Blends

The meaning of a blend comes from a combination of the meanings of the input signs, as in compounding and incorporation. Blending differs from incorporation in that it does not fuse a complete sign into another sign. It differs from compounding in that it does not have one sign followed by another sign (often with spreading of features). Blends have a single set of phonological parameters, some drawn from one component sign and some from the other; they therefore have the timing of a single sign. Members of lexical families (discussed in §...) are open to an analysis as blends (Lepic ). However, blends are distinguished from lexical families by the fact that they are isolated examples; they give delight because of their cleverness. Lexical families, in contrast, are held together by a




 .. ASL sign 

general sense encoded in certain phonological parameters, plus a variable sense encoded in other phonological parameters. That general sense (the ion-morph) is what holds the family together. In GermanSL blending occurs as a production error. The signs  ‘marriage’ and  ‘wedding’ are semantically connected, but articulatorily different. Both signs have H as the location. An attested blend in a slip-of-the-hand combines the Y-handshape and the path movement of H in  with the orientation and the particular location of H in  (Hohenberger, Happ, and Leuninger : –). Blends are common in creative language—jokes, poems (Sutton-Spence and Napoli ), and taboo terms (Mirus, Fisher, and Napoli ). ASL  blends the location and movement of  with the handshape of the taboo finger gesture, seen in Figure .. Though their verticality would lead us to expect blends to be common in conversation, they are not. Perhaps their very cleverness conspires against their diffusion.

. Morphosyntax

A huge part of what one might put into a morphosyntax section is already handled in §.: classifier constructions. It was important to our understanding of iconicity to place it there, and that placement allowed §. to be succinct. Another morphosyntactic issue was handled in §.: distributives and reciprocals, which use reduplication. Thus, here I discuss only verb agreement. Many verbs treat the signer’s body as subject; ‘eat’ in many languages brings the hand to the signer’s mouth, regardless of who is eating. ‘Subject’ here, then, is not defined in syntactic (tree configuration) terms but as a lexical notion (Meir et al. ) inherent in any verb that takes an external argument (Williams : ). Other verbs may agree with the subject (rather than having the signer embody the subject) and, typically, with objects. Agreement happens vertically via spatial indexing: the signer indicates (manually or nonmanually) that certain spatial locations represent referents (Padden ; Meir ). In signing that ‘she’ on the left side of the signer was acting upon ‘him’ on the right side of the signer, H would move from left to right for  ‘ask’ in GermanSL,  in ASL,


 .. ASL sign  indicating ‘I show you’

 in AustralianSL, and   ‘advise’ in JapaneseSL (Mathur and Rathmann : ). Generally, agreement verbs involve a transfer of an object, whether physical (as with ‘give’ in many languages) or abstract (as with ‘help’ in many languages), but over time, as the formal mechanism of agreement becomes firmly established, more verbs tend to enter into the phenomenon and “the semantic basis for the category becomes more opaque” (Meir : ). Phonological factors of a sign may interfere with how agreement is realized (Mathur and Rathmann : ). In Figure . we see ASL , here moving from the signer to the addressee to indicate ‘I show you’. While there is considerable literature on agreement, the morphological nature of agreement has been challenged. Liddell () argues that when we point to a referent to ‘agree’ with it, we are really identifying it with a gesture. In fact, sometimes we target a particular spot on an addressee (chest, chin, etc.) or on a present or conceptualized third person. This (among other reasons) leads him to claim that verbs are not, in fact, marked for agreement, but that we understand the verb’s arguments via gestural communication; that is, the various entities that have been called ‘subject’ and ‘object’ in analyses of ‘agreement’ bear little to no resemblance to syntactic counterparts in spoken language. De Beuzeville, Johnston, and Schembri () analyze fifty narratives in AustralianSL and show that ‘agreement’ is often not found when expected if the entity referred to is not present (cannot be pointed to), and that, when found, it occurs more often with verbs that are spatially iconic than with verbs for which the spatial parameters are abstract. This behavior supports Liddell’s account. On the other hand, Lillo-Martin and Meier () argue that directionality of movement marks person.
They show that the ways person marking interacts with syntax (word order, null arguments, behavior of auxiliaries, as in BrazilianSL) are expected if sign languages have agreement systems. However, they concede that the evidence argues for a first person distinct from non-first person, but not for a distinction between second and third person. Rathmann and Mathur (), looking at GermanSL, AustralianSL, RussianSL, JapaneseSL, and ASL, concur that non-first person marking must interact with gesture (and see Mathur and Rathmann ). This sets sign languages apart typologically from most spoken languages. Still, Cysouw () surveyed spoken-language agreement systems and found thirty-one languages with only a first-person/non-first-person distinction in the singular and forty-two with only that distinction in the plural.




. Conclusion

This chapter touches on the fact that sign language morphology adds new considerations to debates over category identification, inflection vs. derivation, the notions of ideophones and subject, and properties used in lexical classifications. Additionally, it indicates four major theoretical points. First, sometimes meaningful phonetic information in signs can be considered iconic. Other associations of meaning with phonological parameters are not iconic, at least not synchronically. Both kinds of information allow for networks in the lexicon. If morphological theory is to account for such data, it must allow links between particular phonological parameters in different lexical items; thus ion-morphs must be part of morphological theory. Second, related to the reality of ion-morphs is the possibility (probability?) that the notion ‘root’ plays (little to) no role in sign language morphology, so it should not be taken as a universal. Third, certain phenomena are open to analysis as part of a system of visual representation that we otherwise need in communication, and may be gestural (see the works of Scott Liddell from  on, particularly ), including agreement and classifier constructions (see Mathur and Rathmann , in particular). If this approach is shown to be correct, the grammar of sign languages covers a narrower range of phenomena than that of spoken languages. So modality at least partially determines the job of a language’s morphology. Fourth, the prevalence of simultaneity (verticality) over linearity (horizontal temporality) in sign language morphology, in contrast to the opposite prevalence in spoken language, shows that linguistic analysis must include the study of physical properties (visual vs. auditory) if we are to understand language typology. The effects of biomechanics on the lexicon underscore this point.
This is not a call for a change in morphological theory, but rather for an augmentation of the material linguists consider as they do language analysis.

A For comments on an earlier draft, I thank Susan Fischer, Scott Liddell, Gaurav Mathur, Wendy Sandler, and Rachel Sutton-Spence, as well as our formidable editors, Jenny Audring and Francesca Masini.


R ..................................... Aarons, Debra, Benjamin Bahan, Judy Kegl, & Carol Neidle. . Clausal structure and a tier for grammatical marking in American Sign Language. Nordic Journal of Linguistics (). –. Abercrombie, David. . Forgotten phoneticians. Transactions of the Philological Society (). –. Abrahamsson, Niclas. . Development and recovery of L codas. Studies in Second Language Acquisition (). –. Ackema, Peter. . Syntax below zero. OTS publications, University of Utrecht. Ackema, Peter & Ad Neeleman. . Beyond morphology: Interface conditions on word formation. Oxford: Oxford University Press. Ackerman, Farrell. . Miscreant morphemes: Phrasal predicates in Ugric. Berkeley, CA: University of California-Berkeley PhD dissertation. Ackerman, Farrell. . Lexical constructions: Paradigms and periphrastic expression. Manuscript, University of California at San Diego. Ackerman, Farrell. . Lexical derivation and multi-word predicate formation in Hungarian. Acta Linguistica Hungarica (/). –. Ackerman, Farrell, James Blevins, & Robert Malouf. . Parts and wholes: Implicative patterns in inflectional paradigms. In James P. Blevins & Juliette Blevins (eds.), Analogy in grammar: Form and acquisition, –. Oxford: Oxford University Press. Ackerman, Farrell & Phil LeSourd. . Toward a lexical representation of phrasal predicates. In Alex Alsina, Joan Bresnan, & Peter Sells (eds.), Complex predicates, –. Stanford, CA: CSLI. Ackerman, Farrell & Robert Malouf. . Morphological organization: The low conditional entropy conjecture. Language (). –. Ackerman, Farrell & Robert Malouf. . The No Blur Principle effects as an emergent property of language systems. In Anna E. Jurgensen, Hannah Sande, Spencer Lamoureux, Kenny Baclawski, & Alison Zerbe (eds.), Proceedings of the st Annual Meeting of the Berkeley Linguistics Society, –. Berkeley, CA: Berkeley Linguistics Society. 
Ackerman, Farrell & Gregory Stump. . Paradigms and periphrastic expression: A study in realization-based lexicalism. In Louisa Sadler & Andrew Spencer (eds.), Projecting morphology, –. Stanford, CA: CSLI. Ackerman, Farrell, Gregory Stump, & Gert Webelhuth. . Lexicalism, periphrasis and implicative morphology. In Robert D. Borsley & Kersti Börjars (eds.), Non-transformational syntax: Formal and explicit models of grammar, –. Oxford: Wiley-Blackwell. Ackerman, Farrell & Gert Webelhuth. . The composition of (dis)continuous predicates: Lexical or syntactic? Acta Linguistica Hungarica (/). –. Ackerman, Farrell & Gert Webelhuth. . A theory of predicates. Stanford, CA: CSLI. Acquaviva, Paolo. . Roots and lexicality in Distributed Morphology. In Alexandra Galani, Daniel Redinger, & Norman Yeo (eds.), York-Essex Morphology Meeting , –. York: York University. Adam, Robert. . Language contact and borrowing. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Sign language: An international handbook. vol. , –. Berlin: De Gruyter Mouton. Adams, Valerie. . An introduction to modern English word-formation. London: Longman. Adger, David & Peter Svenonius. . Features in Minimalist syntax. In Cedric Boeckx (ed.), Oxford handbook of linguistic minimalism, –. Oxford: Oxford University Press. Ahlberg, Malin, Markus Forsberg, & Mans Hulden. . Semi-supervised learning of morphological paradigms and lexicons. In Proceedings of the th Conference of the European Chapter of the

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





Association for Computational Linguistics, –. Gothenburg: Association for Computational Linguistics. Aho, Alfred V., Ravi Sethi, & Jeffrey D. Ullman. . Compilers: Principles, techniques and tools. Reading, MA: Addison-Wesley. Akinlabi, Akinbiyi. . Featural affixation. Journal of Linguistics . –. Akinlabi, Akinbiyi. . Patterns of tonal transfer I. Paper presented at ACAL , Cornell University,  July . Alber, Birgit & Sabine Arndt-Lappe. . Templatic and subtractive truncation. In Jochen Trommer (ed.), The morphology and phonology of exponence, –. Oxford: Oxford University Press. Albright, Adam. . Islands of reliability for regular morphology: Evidence from Italian. Language . –. Albright, Adam. . Morpheme position. In Paul de Lacy (ed.), The Cambridge handbook of phonology, –. Cambridge: Cambridge University Press. Albright, Adam & Bruce Hayes. . Modeling English past tense intuitions with minimal generalization. In Michael Maxwell (ed.), Proceedings of the  Workshop on Morphological Learning, Association for Computational Linguistics, –. Philadelphia, PA: Association for Computational Linguistics. Alderete, John, Jill Beckman, Laura Benua, Amalia Gnanadesikan, John McCarthy, & Suzanne Urbanczyk. . Reduplication with fixed segmentism. Linguistic Inquiry . –. Alegre, Maria & Peter Gordon. a. Rule-based versus associative processes in Derivational Morphology. Brain and Language . –. Alegre, Maria & Peter Gordon. b. Frequency effects and the representational status of regular inflections. Journal of Memory and Language . –. Alexiadou, Artemis. . Building words. In Daniel Siddiqi & Heidi Harley (eds.), Morphological metatheory, –. Amsterdam/Philadelphia: John Benjamins. Alexiadou, Artemis & Gereon Müller. . Class features as probes. In Asaf Bachrach & Andrew Nevins (eds.), Inflectional identity, –. Oxford: Oxford University Press. Allen, Margaret R. 
. Morphological investigations. Storrs: University of Connecticut PhD dissertation. Alpatov, Vladimir M. . Ob utočnenii ponjatij ‘flektivnyj jazyk’ i ‘aggljutinativnyj jazyk’ [On clarification of the notions ‘flective language’ and ‘agglutinative language’]. In Vadim M. Solncev & Igor’ F. Vardul’ (eds.), Lingvističeskaja tipologija [Linguistic Typology]. Moscow: Nauka, –. Alsina, Alex & Sam A. Mchombo. . Object asymmetries and the Chicheŵa applicative construction. In Sam A. Mchombo (ed.), Theoretical aspects of Bantu grammar, –. Stanford, CA: CSLI. Altmann, Gerry. . The language machine: Psycholinguistics in review. British Journal of Psychology . –. Álvarez, Carlos J., Mabel Urrutia, Alberto Domínguez, & Rosa Sánchez-Casas. . Processing inflectional and derivational morphology: Electrophysiological evidence from Spanish. Neuroscience Letters (). –. Ambridge, Ben & Elena Lieven. . Child language acquisition: Contrasting theoretical approaches. Cambridge, MA: Cambridge University Press. Anastasiadis-Symeonidis, Anna. . Neologikos Danismos tis Neoellinikis [Neological borrowing in Modern Greek]. Thessaloniki: Institute for Modern Greek Studies. Andersen, Henning. . Naturalness and markedness. In Klaas Willems & Ludovic De Cuypere (eds.), Naturalness and iconicity in language, –. Amsterdam/Philadelphia: John Benjamins. Andersen, Roger W. (). An implicational model for second language research. Language Learning . –. Andersen, Torben. . Vowel quality alternation in Dinka verb inflection. Phonology . –. Andersen, Torben. . Morphological stratification in Dinka: On the alternations of voice quality, vowel length and tone in the morphology of transitive verbal roots in a monosyllabic language. Studies in African Linguistics (). –.






Andersen, Torben. . Case inflection and nominal head marking in Dinka. Journal of African Languages and Linguistics . –.
Anderson, Lloyd. . Universals of aspect and parts of speech: Parallels between signed and spoken languages. In Paul J. Hopper (ed.), Tense and aspect: Between semantics and pragmatics, –. Amsterdam/Philadelphia: John Benjamins.
Anderson, Stephen R. . West Scandinavian vowel systems and the ordering of phonological rules. Boston, MA: MIT PhD dissertation.
Anderson, Stephen R. a. Comments on Wasow: The role of the Theme in lexical rules. In Peter Culicover, Thomas Wasow, & Adrian Akmajian (eds.), Formal syntax, –. New York: Academic Press.
Anderson, Stephen R. b. On the formal description of inflection. Proceedings of the Chicago Linguistic Society . –.
Anderson, Stephen R. . Where’s morphology? Linguistic Inquiry (). –.
Anderson, Stephen R. . Phonology in the twentieth century. Chicago: University of Chicago Press.
Anderson, Stephen R. . Disjunctive ordering in inflectional morphology. Natural Language and Linguistic Theory . –.
Anderson, Stephen R. . Morphological theory. In Frederick J. Newmeyer (ed.), Linguistics: The Cambridge survey, vol. , –. Cambridge: Cambridge University Press.
Anderson, Stephen R. . Sapir’s approach to typology and current issues in morphology. In Wolfgang U. Dressler, Hans C. Luschützky, Oskar E. Pfeiffer, & John R. Rennison (eds.), Contemporary morphology, –. Berlin: De Gruyter Mouton.
Anderson, Stephen R. . A-morphous morphology. Cambridge: Cambridge University Press.
Anderson, Stephen R. . Wackernagel’s revenge: Clitics, morphology, and the syntax of second position. Language (). –.
Anderson, Stephen R. . Aspects of the theory of clitics. Oxford: Oxford University Press.
Anderson, Stephen R. . Morphological change. In Claire Bowern & Bethwyn Evans (eds.), The Routledge handbook of historical linguistics, –. London, New York: Routledge.
Anderson, Stephen R. a. Dimensions of morphological complexity. In Matthew Baerman, Dunstan Brown, & Greville G. Corbett (eds.), Understanding and measuring morphological complexity, –. Oxford: Oxford University Press.
Anderson, Stephen R. b. The morpheme: Its nature and use. In Matthew Baerman (ed.), The Oxford handbook of inflection, –. Oxford: Oxford University Press.
Anderson, Stephen R. . The role of morphology in Transformational Grammar. In Andrew Hippisley & Gregory Stump (eds.), The Cambridge handbook of morphology, –. Cambridge: Cambridge University Press.
Anderson, Stephen R. & Louis de Saussure (eds.). . René de Saussure and the theory of word formation (Classics in Linguistics ). Berlin: Language Science Press.
Andreou, Marios. . On headedness in word formation. Patras: University of Patras PhD dissertation.
Andresen, Julie Tetel. . Linguistics in America –: A critical history. New York: Routledge.
Andrews, Avery. . The representation of case in Modern Icelandic. In Joan Bresnan (ed.), The mental representation of grammatical relations, –. Cambridge, MA: MIT Press.
Andrews, Avery. . Unification and morphological blocking. Natural Language and Linguistic Theory (). –.
Andrews, Avery. . Semantic case-stacking and inside-out unification. Australian Journal of Linguistics (). –.
Andrews, Avery. . F-structural spellout in LFG morphology. Manuscript, Australian National University.
Andrews, Edna. . Markedness theory: The union of asymmetry and semiosis in language. Durham, NC: Duke University Press.
Andrews, Sally. . Morphological influences on lexical access: Lexical or nonlexical effects? Journal of Memory and Language . –.






Andrews, Sally, Brett Miller, & Keith Rayner. . Eye movements and morphological segmentation of compound words: There is a mouse in mousetrap. European Journal of Cognitive Psychology . –.
Anshen, Frank & Mark Aronoff. . Producing morphologically complex words. Linguistics . –.
Anttila, Raimo. . Analogy. The Hague: Mouton.
Antworth, Evan. . PC-KIMMO: A two-level processor for morphological analysis (Occasional Publications in Academic Computing ). Dallas, TX: Summer Institute of Linguistics.
Antzakas, Klimis & Bencie Woll. . Head movements and negation in Greek Sign Language. In Eleni Efthimiou, Georgios Kouroupetroglou, & Stavroula-Evita Fotinea (eds.), International Gesture Workshop, –. Berlin, Heidelberg: Springer.
Arad, Maya. . Locality constraints on the interpretation of roots: The case of Hebrew denominal verbs. Natural Language and Linguistic Theory . –.
Arad, Maya & Ur Shlonsky. . Roots and patterns: Hebrew morpho-syntax. Dordrecht: Springer.
Archangeli, Diana. . Aspects of underspecification theory. Phonology . –.
Archangeli, Diana. . Introducing Optimality Theory. Annual Review of Anthropology . –.
Archangeli, Diana & D. Terence Langendoen (eds.). . Optimality Theory: An overview. Malden, MA: Blackwell Publishers.
Arcodia, Giorgio Francesco. . Constructions and headedness in derivation and compounding. Morphology (). –.
Arduino, Lisa S., Cristina Burani, & Giuseppe Vallar. . Lexical effects in left neglect dyslexia: A study in Italian patients. Cognitive Neuropsychology . –.
Aristar, Anthony R. . Marking and hierarchy types and the grammaticalization of case-markers. Studies in Language . –.
Arkadiev, Peter M. . Multiple ergatives: From allomorphy to differential agent marking. Studies in Language (). –.
Arkhangelskiy, Timofey & Yury Lander. . Some challenges of the West Circassian polysynthetic corpus. Working Papers of the Higher School of Economics. Series: Linguistics. No. .
Aronoff, Mark. . Word formation in generative grammar. Cambridge, MA: MIT Press.
Aronoff, Mark. . Noun classes in Arapesh. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer.
Aronoff, Mark. . Morphology by itself: Stems and inflectional classes. Cambridge, MA: MIT Press.
Aronoff, Mark. a. Generative grammar. In Geert Booij, Christian Lehmann, & Joachim Mugdan (eds.), Morphologie/Morphology. Ein internationales Handbuch zur Flexion und Wortbildung/An international handbook on inflection and word-formation, vol. , –. Berlin: De Gruyter Mouton.
Aronoff, Mark. b. Morphology between lexicon and grammar. In Geert Booij, Christian Lehmann, & Joachim Mugdan (eds.), Morphologie/Morphology. Ein internationales Handbuch zur Flexion und Wortbildung/An international handbook on inflection and word-formation, vol. , –. Berlin: De Gruyter Mouton.
Aronoff, Mark. . In the beginning was the word. Language (). –.
Aronoff, Mark. . Morphological stems: What William of Ockham really said. Word Structure (). –.
Aronoff, Mark. . Face the facts: Reading Chomsky’s Remarks on nominalization after forty years. In Florence Villoing, Sophie David, & Sarah Leroy (eds.), Foisonnements morphologiques. Etudes en hommage à Françoise Kerleroux, –. Nanterre: Presses Universitaires de Paris Ouest.
Aronoff, Mark. . Thoughts on morphological and cultural evolution. In Laurie Bauer, Lívia Körtvélyessy, & Pavol Štekauer (eds.), Semantics of complex words, –. Dordrecht: Springer.
Aronoff, Mark. a. Competition and the lexicon. In Annibale Elia, Claudio Iacobini, & Miriam Voghera (eds.), Livelli di analisi e fenomeni di interfaccia. Atti del XLVII congresso internazionale della Società di Linguistica Italiana, –. Roma: Bulzoni.






Aronoff, Mark. b. A fox knows many things but a hedgehog one big thing. In Andrew Hippisley & Gregory Stump (eds.), The Cambridge handbook of morphology, –. Cambridge: Cambridge University Press. Aronoff, Mark & Frank Anshen. . Morphological productivity and phonological transparency. Canadian Journal of Linguistics . –. Aronoff, Mark & Frank Anshen. . Morphology and the lexicon: Lexicalization and productivity. In Andrew Spencer & Arnold M. Zwicky (eds.), The handbook of morphology, –. Oxford: Blackwell. Aronoff, Mark & Nana Fuhrhop. . Restricting suffix combinations in German and English: closing suffixes and the monosuffix constraint. Natural Language and Linguistic Theory . –. Aronoff, Mark, Irit Meir, Carol Padden, & Wendy Sandler. . Morphological universals and the sign language type. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. Aronoff, Mark, Irit Meir, & Wendy Sandler. . The paradox of sign language morphology. Language (). –. Aronoff, Mark & Shikaripur Sridhar. . Morphological levels in English and Kannada; or Atarizing Reagan. In John F. Richardson, Mitchell Marks, & Amy Chukerman (eds.), Papers from the Parasession on the Interplay of Phonology, Morphology, and Syntax, –. Chicago: Chicago Linguistics Society. Aronoff, Mark & Shikaripur Sridhar. . Morphological levels in English and Kannada. In Edmund Gussmann (ed.), Rules and the lexicon, –. Lublin: Catholic University. Arregi, Karlos & Andrew Nevins. . Morphotactics: Basque auxiliaries and the structure of spellout. Dordrecht: Springer. Ashton, E. O. . Swahili grammar. Essex: Longman. Assink, Egbert M. H. & Dominiek Sandra (eds.). . Reading complex words: Cross-language studies. New York: Kluwer. Asudeh, Ash, Gianluca Giorgolo, & Ida Toivonen. . Meaning and valency. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of the LFG Conference, –. 
Stanford, CA: CSLI. Asudeh, Ash & Ewan Klein. . Shape conditions and phonological context. In Frank Van Eynde, Lars Hellan, & Dorothee Beermann (eds.), Proceedings of the th International HPSG Conference, –. Stanford, CA: CSLI. Audring, Jenny. . Calibrating complexity: How complex is a gender system? Language Sciences .–. DOI: ./j.langsci... Audring, Jenny & Geert Booij. . Cooperation and coercion. Linguistics (). –. Audring, Jenny, Geert Booij, & Ray Jackendoff. . Menscheln, kibbelen, sparkle: Verbal diminutives between grammar and lexicon. In Bert Le Bruyn & Sander Lestrade (eds.), Linguistics in the Netherlands . Amsterdam/Philadelphia: John Benjamins. Baayen, Harald. . A corpus-based approach to morphological productivity. Amsterdam: Free University of Amsterdam PhD dissertation. Baayen, Harald. . Quantitative aspects of morphological productivity. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. Baayen, Harald. . On frequency, transparency and productivity. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. Baayen, Harald. . Storage and computation in the mental lexicon. In Gonia Jarema & Gary Libben (eds.), The mental lexicon: Core perspectives, –. Amsterdam: Elsevier. Baayen, Harald. . Analyzing linguistic data. A practical introduction to statistics using R. Cambridge: Cambridge University Press. Baayen, Harald. . Corpus linguistics in morphology: Morphological productivity. In Anke Lüdeling & Merja Kytö (eds.), Corpus linguistics: An international handbook, vol. , –. Berlin: De Gruyter Mouton. Baayen, Harald. . Experimental and psycholinguistic approaches. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of derivational morphology, –. Oxford: Oxford University Press.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





Baayen, Harald, Ton Dijkstra, & Robert Schreuder. . Singulars and plurals in Dutch: Evidence for a parallel dual-route model. Journal of Memory and Language (). –. Baayen, Harald & Rochelle Lieber. . Productivity and English derivation: A corpus-based study. Linguistics . –. Baayen, Harald, James M. McQueen, Ton Dijkstra, & Robert Schreuder. . Frequency effects in regular inflectional morphology: Revisiting Dutch plurals. In Harald Baayen & Robert Schreuder (eds.), Morphological structure in language processing, –. Berlin: De Gruyter Mouton. Baayen, Harald, Petar Milin, Dusica Filipović Đurđević, Peter Hendrix, & Marco Marelli. . An amorphous model for morphological processing in visual comprehension based on naive discriminative learning. Psychological Review (). –. Baayen, Harald, Richard Piepenbrock, & Leon Gulikers. . CELEX. Philadephia, PA: Linguistic Data Consortium. Baayen, Harald, Richard Piepenbrock, & Hedderik Van Rijn. . The CELEX lexical data base on CD-ROM. Philadephia, PA: Linguistic Data Consortium. Baayen, Harald & Ingo Plag. . Suffix ordering and morphological processing. Language (). –. Baayen, Harald & Antoinette Renouf. . Chronicling the times: Productive lexical innovations in an English newspaper. Language (). –. Baayen, Harald & Robert Schreuder. . War and peace: Morphemes and full forms in a noninteractive activation parallel dual-route model. Brain and Language . –. Baayen, Harald & Robert Schreuder. . Towards a psycholinguistic computational model for morphological parsing. Philosophical Transactions: Mathematical, Physical and Engineering Sciences . –. Baayen, Harald & Robert Schreuder. . Morphological structure in language processing. Berlin: Mouton. Baayen, Harald, Robert Schreuder, Nivja de Jong, & Andrea Krott. . Dutch inflection: the rules that prove the exception. 
In Sieb Nooteboom, Fred Weerman, & Frank Wijnen (eds.), Storage and computation in the language faculty, –. Dordrecht: Kluwer. Baayen, Harald, Lee H. Wurm, & Joanna Aycock. . Lexical dynamics for low-frequency complex words: A regression study across tasks and modalities. The Mental Lexicon (). –. Bach, Emmon. . On the relationship between word-grammar and phrase-grammar. Natural Language and Linguistic Theory . –. Baddeley, Alan. . Working memory, thought, and action. Oxford: Oxford University Press. Badecker, William & Alfonso Caramazza. . The analysis of morphological errors in a case of acquired dyslexia. Brain and Language . –. Baerman, Matthew. . Paradigmatic chaos in Nuer. Language (). –. Baerman, Matthew. . Covert systematicity in a distributionally complex system. Journal of Linguistics . –. Baerman, Matthew (ed.). . The Oxford handbook of inflection. Oxford, New York: Oxford University Press. Baerman, Matthew. . Seri verb classes: Morphosyntactic motivation and morphological autonomy. Language (). –. Baerman, Matthew & Dunstan Brown. a. Case syncretism. In Matthew Dryer, Martin Haspelmath, David Gil, & Bernard Comrie (eds.), World Atlas of Language Structures, –. Oxford: Oxford University Press. Baerman, Matthew & Dunstan Brown. b. Syncretism in verbal person/number marking. In Matthew Dryer, Martin Haspelmath, David Gil, & Bernard Comrie (eds.), World Atlas of Language Structures, –. Oxford: Oxford University Press. Baerman, Matthew, Dunstan Brown, & Greville G. Corbett. . The syntax–morphology interface: A study of syncretism. Cambridge: Cambridge University Press. Baerman, Matthew, Dunstan Brown, & Greville G. Corbett. . Surrey Typological Database on Defectiveness. University of Surrey. DOI: ./SMG./

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





Baerman, Matthew, Dunstan Brown, & Greville G. Corbett (eds.). a. Understanding and measuring morphological complexity. Oxford: Oxford University Press. Baerman, Matthew, Dunstan Brown, & Greville G. Corbett. b. Understanding and measuring morphological complexity: An introduction. In Matthew Baerman, Dunstan Brown, & Greville G. Corbett (eds.), Understanding and measuring morphological complexity. Oxford: Oxford University Press, –. Baerman, Matthew & Greville G. Corbett. . Linguistic typology: Morphology. Linguistic Typology . –. Baerman, Matthew & Greville G. Corbett. . Stem alternations and multiple exponence. Word Structure (). –. Baerman, Matthew, Greville G. Corbett, & Dunstan Brown (eds.). . Defective paradigms: Missing forms and what they tell us. Oxford: Oxford University Press. Baerman, Matthew, Greville G. Corbett, Dunstan Brown, & Andrew Hippisley (eds.). . Deponency and morphological mismatches. Oxford: Oxford University Press. Baeskow, Heike. a. His Lordship’s -ship and the King of Golfdom, Against a purely functional analysis of suffixhood. Word Structure (). –. Baeskow, Heike. b. Derivation in Generative Grammar and Neo-Construction Grammar: A critical evaluation and a new proposal. In Susan Olsen (ed.), New impulses in word-formation, –. Hamburg: Buske. Bailey, Nathalie, Carolyn Madden, & Stephen Krashen. . Is there a ‘natural sequence’ in adult second language learning? Language Learning (). –. Baker, Brett, Kate Horrack, Rachel Nordlinger, & Louisa Sadler. . Putting it all together: Agreement, incorporation, coordination and external possession in Wubuy (Australia). In Miriam Butt & Tracy Holloway King (eds.), Proceedings of LFG, –. Stanford, CA: CSLI. Baker, Mark C. . The mirror principle and morphosyntactic explanation. Linguistic Inquiry (). –. Baker, Mark C. . Incorporation: A theory of grammatical function changing. 
Chicago: The University of Chicago Press. Baker, Mark C. . The polysynthesis parameter. Oxford: Oxford University Press. Baker, Mark C. . Lexical categories: Verbs, nouns, and adjectives. Cambridge: Cambridge University Press. Baker, Mark C. . The macroparameter in a microparametric world. In Theresa Biberauer (ed.), The limits of syntactic variation, –. Amsterdam/Philadelphia: John Benjamins. Baker, Philip. . Kreol: A description of Mauritian Creole. Ann Arbor: Karoma. Bakker, Dik & Anna Siewierska. . The distribution of subject and object agreement and word order type. Studies in Language (). –. Bakker, Peter. . The absence of reduplication in pidgins. In Silvia Kouwenberg (ed.), Twice as meaningful. Reduplication in pidgins, creoles and other contact languages, –. London: Battlebridge. Baldwin, Timothy & Su Nam Kim. . Multiword expressions. In Nitin Indurkhya & Fred J. Damerau (eds.), Handbook of natural language processing, –. Boca Raton, FL: CRC Press. Barðdal, Jóhanna. . Productivity: Evidence from case and argument structure in Icelandic. Amsterdam/Philadelphia: John Benjamins. Barðdal, Jóhanna, Elena Smirnova, Lotte Sommerer, & Spike Gildea (eds.). . Diachronic Construction Grammar. Amsterdam/Philadelphia: John Benjamins. Barlow, Michael & Suzanne Kemmer (eds.). . Usage-based models of language. Stanford, CA: CSLI. Baroni, Marco, Emiliano Guevara, & Roberto Zamparelli. . The dual nature of Deverbal Nominal Constructions: Evidence from acceptability ratings and corpus analysis. Corpus Linguistics and Linguistic Theory (). –. Baroni, Marco & Alessandro Lenci. . Distributional memory: A general framework for corpusbased semantics. Computational Linguistics (). –.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





Barr, Robin Craig. . A lexical model of morphological change. Harvard: Harvard University PhD dissertation. Barsalou, Laurence. . Frames, concepts and conceptual fields. In Eva Kitay & Adrienne Lehrer (eds.), Frames, fields, and contrasts: New essays in semantic and lexical organization, –. Hillsdale, NJ: Erlbaum. Bates, Elizabeth. . Modularity, domain specificity and the development of language. Discussions in Neuroscience (–). –. Battison, Robin. . Lexical borrowing in American Sign Language. Silver Spring, MD: Lindstok. Bauer, Laurie. . English word-formation. Cambridge: Cambridge University Press. Bauer, Laurie. . Introducing linguistic morphology. Edinburgh: Edinburgh University Press. Bauer, Laurie. . Be-heading the word. Journal of Linguistics (). –. Bauer, Laurie. . Derivational paradigms. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. Bauer, Laurie. . Morphological productivity. Cambridge: Cambridge University Press. Bauer, Laurie. . What you can do with derivational morphology. In Sabrina Bendjaballah, Wolfgang U. Dressler, Oskar E. Pfeiffer, & Maria D. Voeikova (eds.), Morphology : Selected papers from the th Morphology Meeting, Vienna, – February , –. Amsterdam/ Philadelphia: John Benjamins. Bauer, Laurie. . Introducing linguistic morphology. nd edn. Edinburgh: Edinburgh University Press. Bauer, Laurie. a. A glossary of morphology. Edinburgh: Edinburgh University Press. Bauer, Laurie. b. The function of word-formation and the inflection-derivation distinction. In Henk Aertsen, Mike Hannay, & Rod Lyall (eds.), Words in their places. A festschrift for J. Lachlan Mackenzie, –. Amsterdam: Vrije Universiteit. Bauer, Laurie. . Exocentric compounds. Morphology . –. Bauer, Laurie. . Typology of compounds. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of compounding, –. 
Oxford: Oxford University Press. Bauer, Laurie, Rochelle Lieber, & Ingo Plag. . The Oxford reference guide to English morphology. Oxford: Oxford University Press. Beard, Robert. . Lexeme-morpheme base morphology: A general theory of inflection and word formation. Albany, NY: State University of New York Press. Beauvillain Cécille. . The integration of morphological and whole-word information during eye fixations on prefixed and suffixed words. Journal of Memory and Language . –. Beesley, Kenneth R. & Lauri Karttunen. . Finite-state morphology. Stanford, CA: CSLI. Bellugi, Ursula & Susan Fischer. . A comparison of sign language and spoken language. Cognition (). –. Bellugi, Ursula & Edward Klima. . Two faces of sign: Iconic and abstract. In Stevan R. Harnad, Horst D. Steklis, & Jane Lancaster (eds.), Origins and evolution of language and speech, –. New York, NY: New York Academy of Sciences. Bender, Emily, Scott Drellishak, Antske Fokkens, Laurie Poulson, & Safiyyah Saleem. . Grammar customization. Research on Language and Computation (). –. Benedicto, Elena & Diane Brentari. . Where did all the arguments go? Argument changing properties of classifiers in American Sign Language. Natural Language and Linguistic Theory . –. Bengio, Yoshua, Réjean Ducharme, Pascal Vincent, & Christian Janvin. . A neural probabilistic language model. Journal of Machine Learning Research . –. Benigni, Valentina & Francesca Masini. . Nomi sintagmatici in russo. Studi slavistici . –. Benitez-Quiroz, C. Fabian, Kadir Gökgöz, Ronnie B. Wilbur, & Aleix M. Martinez. . Discriminant features and temporal structure of nonmanuals in American Sign Language. PloS ONE () e.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





Bentin, Shlomo & Laurie B. Feldman. . The contribution of morphological and semantic relatedness to repetition priming at short and long lags: Evidence from Hebrew. The Quarterly Journal of Experimental Psychology A: Human Experimental Psychology . –. Benua, Laura. . Identity effects in morphological truncation. In Jill Beckman, Laura WalshDickey, & Suzanne Urbanczyk (eds.), University of Massachusetts Occasional Papers in Linguistics : Papers in Optimality Theory, –. Boston, MA: University of Massachusetts. Benveniste, Émile. . Structure des relations de personne dans le verbe. Bulletin de la Société de Linguistique de Paris . –. Beretta, Alan, Carrie Campbell, Thomas H. Carr, Jie Huang, Lothar M. Schmitt, Kiel Christianson, & Yue Cao. . An ER-fMRI investigation of morphological inflection in German reveals that the brain makes a distinction between regular and irregular forms. Brain and Language . –. Berger, Adam, Vincent Della Pietra, & Stephan Della Pietra. . A maximum entropy approach to natural language processing. Computational Linguistics (). –. Bergman, Brita. . Verbs and adjectives: Morphological processes in Swedish Sign Language. In Jim Kyle & Bencie Woll (eds.), Language in sign: An international perspective on sign language, –. London: Croom Helm. Bergman, Brita & Östen Dahl. . Ideophones in sign languages? The place of reduplication in the tense-aspect system of Swedish Sign Language. In Carl Bache, Hans Basboll, and Carl-Erik Lindberg (eds.), Tense, aspect and action: Emripical and theoretical contributions to language typology, –. Berlin: De Gruyter Mouton. Bergs, Alexander & Gabriele Diewald (eds.). . Constructions and language change. Berlin: De Gruyter Mouton. Berko, Jean. . The child’s learning of English Morphology. Word . –. Bermúdez-Otero, Ricardo. . The architecture of grammar and the division of labour in exponence. 
In Jochen Trommer (ed.), The morphology and phonology of exponence, –. Oxford: Oxford University Press. Bermúdez-Otero, Ricardo. . The Spanish lexicon stores stems with theme vowels, not roots with inflectional class features. Probus (). –. Bermúdez-Otero, Ricardo. . We do not need structuralist morphemes, but we do need constituent structure. In Daniel Siddiqi & Heidi Harley (eds.), Morphological metatheory. Amsterdam/ Philadelphia: John Benjamins. Bertram, Raymond, Matti Laine, Harald Baayen, Robert Schreuder, & Jukka Hyönä. . Affixal homonymy triggers full-form storage, even with inflected words, even in a morphologically rich language. Cognition . B–B. Bertram, Raymond, Robert Schreuder, & Harald Baayen. . The balance of storage and computation in morphological processing: The role of word formation type, affixal homonymy, and productivity. Journal of Experimental Psychology: Learning, Memory, and Cognition . –. Beuzeville, Louise de, Trevor A. Johnston, & Adam Schembri. . The use of space with indicating verbs in Auslan: A corpus-based investigation. Sign Language & Linguistics (). –. Beyersmann, Elisabeth, Anne Castles, & Max Coltheart. . Early morphological decomposition during visual word recognition: Evidence from masked transposed-letter priming. Psychonomic Bulletin & Review . –. Beyersmann, Elisabeth, Samantha F. McCormick, & Kathleen Rastle. . Letter transpositions within morphemes and across morpheme boundaries. Quarterly Journal of Experimental Psychology (). –. Bhatt, Parth & Ingo Plag (eds.). . The structure of creole words: Segmental, syllabic and morphological aspects. Berlin: Mouton de Gruyter. Bickel, Balthasar. . Typology in the st century: Major current developments. Linguistic Typology . –.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





Bickel, Balthasar. . Capturing particulars and universals in clause linkages: A multivariate analysis. In Isabelle Bril (ed.), Clause-hierarchy and clause-linking: The syntax and pragmatics interface, –. Amsterdam/Philadelphia: John Benjamins. Bickel, Balthasar. . Multivariate typology and field linguistics: A case study on detransitivization in Kiranti (Sino-Tibetan). In Peter K. Austin, Oliver Bond, David Nathan, & Lutz Marten (eds.), Proceedings of Conference on Language Documentation and Linguistic Theory , –. London: School of Oriental and African Studies. Bickel, Balthasar & Johanna Nichols. a. Locus of marking: Whole-language typology. In Matthew Dryer, Martin Haspelmath, David Gil, & Bernard Comrie (eds.), World Atlas of Language Structures, –. Oxford: Oxford University Press. Bickel, Balthasar & Johanna Nichols. b. Inflectional synthesis of the verb. In Matthew Dryer, Martin Haspelmath, David Gil, & Bernard Comrie (eds.), World Atlas of Language Structures, –. Oxford: Oxford University Press. Bickel, Balthasar & Johanna Nichols. . Inflectional morphology. In Timothy Shopen (ed.), Language typology and syntactic description. vol. : Grammatical categories and the lexicon, nd edn, –. Cambridge: Cambridge University Press. Bickel, Balthasar, Goma Banjade, Martin Gaenszle, Elena Lieven, Netra Prasad Paudyal, Ichchha Purna Rai, Manoj Rai, Novel Kishore Rai, & Sabine Stoll. . Free prefix ordering in Chintang. Language (). –. Bickel, Balthasar & Fernando Zúñiga. . The ‘word’ in polysynthetic languages: phonological and syntactic challenges. In Michael Fortescue, Nicholas Evans, & Marianne Mithun (eds.), The Oxford handbook of polysynthesis, –. Oxford: Oxford University Press. Bien, Heidrun, Willem J. M. Levelt, & R. Harald Baayen. . Frequency effects in compound production. Proceedings of the National Academy of Sciences . –. Bildhauer, Felix. . 
Clitic left dislocation and focus projection in Spanish. In Stefan Müller (ed.), Proceedings of the th International Conference on Head-Driven Phrase Structure Grammar, –. Stanford, CA: CSLI. Bisang, Walter. . Variation and reproducibility in linguistics. In Peter Siemund (ed.), Linguistic Universals and Language Variation, –. Berlin: De Gruyter Mouton. Bisetto, Antonietta & Sergio Scalise. . The classification of compounds, Lingue e Linguaggio . –. Bishop, Dorothy V. M. . Why is it so hard to reach agreement on terminology? The case of developmental language disorder (DLD). International Journal of Language & Communication Disorders (). –. Bittner, Andreas. . Starke ‘schwache’ Verben—schwache ‘starke’ Verben: deutsche Verbflexion und Natürlichkeit. Tübingen: Stauffenburg. Bittner, Dagmar. . Von starken Feminina und schwachen Maskulina. Die neuhochdeutsche Substantivflexion—Eine Systemanalyse im Rahmen der natürlichen Morphologie. Jena: FriedrichSchiller-Universität Jena PhD dissertation [appeared in ZAS Papers in Linguistics , ]. Bittner, Dagmar, Wolfgang U. Dressler, & Marianne Kilani-Schoch. . Development of inflection in first language acquisition: A cross-linguistic perspective. Berlin: De Gruyter Mouton. Blevins, James P. . Realisation-based Lexicalism. Journal of Linguistics . –. Blevins, James P. . Stems and paradigms. Language (). –. Blevins, James P. . Inflection classes and economy. In Lutz Gunkel, Gereon Müller, & Gisela Zifonun (eds.), Explorations in nominal inflection, –. Berlin: De Gruyter Mouton. Blevins, James P. . Word-based morphology. Journal of Linguistics (). –. Blevins, James P. . The post-transformational enterprise. Journal of Linguistics (). –. Blevins, James P. . Word-based morphology from Aristotle to modern WP (Word and paradigm models). In Keith Allan (ed.), The Oxford handbook of the history of linguistics, –. 
Oxford: Oxford University Press. Blevins, James P. . The morphology of words. In Matthew Goldrick, Victor Ferreira, & Michele Miozzo (eds.), The Oxford handbook of language production, –. Oxford: Oxford University Press.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





Blevins, James P. . Inflectional paradigms. In Matthew Baerman (ed), The Oxford handbook of inflection, –. Oxford: Oxford University Press. Blevins, James P. . Word and paradigm morphology. Oxford: Oxford University Press. Blevins, James P., Petar Milin, & Michael Ramscar. . The Zipfian Paradigm Cell Filling Problem. In Ferenc Kiefer, James P. Blevins, & Huba Bartos (eds.), Perspectives on morphological structure: Data and analyses. Leiden: Brill. Blevins, Juliette. . Mokilese reduplication. Linguistic Inquiry . –. Bloch, Bernard. . English verb inflection. Language (). –. Bloch, Bernard & George L. Trager. . Outlines of linguistic analysis. Baltimore, MD: Linguistic Society of America. Blom, Elma. . Modality, infinitives and finite bare verbs in Dutch and English child language. Language Acquisition (). –. Blom, Elma & Johanne Paradis. . Past tense production by English second language learners with and without language impairment. Journal of Speech, Language and Hearing Research . –. Blom, Elma, Johanne Paradis, & Tamara Sorenson Duncan. . Effects of input properties, vocabulary size, and L on the development of third person singular ‑s in child L English. Language Learning (). –. Blom, Elma & Frank Wijnen. . Optionality of finiteness: evidence for a no overlap stage in Dutch child language. First Language (). –. Bloomfield, Leonard. a. An introduction to the study of language. New York: Holt. Bloomfield, Leonard. b. Sentence and word. Transactions of the American Philological Society . –. Bloomfield, Leonard. . A set of postulates for the science of language. Language . –. Bloomfield, Leonard. . Language. New York: Holt, Rinehart, and Winston. Bloomfield, Leonard. . Language (Reprint). Chicago: The University of Chicago Press [original edition: New York: Holt, Rinehart, and Winston, ]. Blust, Robert A. . 
The phonestheme n- in Austronesian languages. Oceanic Linguistics (). –. Boas, Franz. . Handbook of American Indian languages. Part . Washington, DC: Government Printing Office. Boas, Hans & Ivan Sag (eds.). . Sign-based Construction Grammar. Stanford, CA: CSLI. Bobaljik, Jonathan D. . The ins and outs of contextual allomorphy. In Kleanthes Grohmann & Caro Struijke (eds.), University of Maryland Working Papers in Linguistics , –. College Park: University of Maryland, Department of Linguistics. Bobaljik, Jonathan D. . Paradigms, optimal and otherwise: A case for skepticism. In Asef Bachrach & Andrew I. Nevins (eds.), Inflectional identity, –. Oxford: Oxford University Press. Bobaljik, Jonathan. . Where’s phi? Agreement as a post-syntactic operation. In Daniel Harbour, David Adger, & Susana Béjar (eds.), Phi-theory: Phi features across modules and interfaces, –. Oxford: Oxford University Press. Bobaljik, Jonathan D. . Universals in comparative morphology: Suppletion, superlatives, and the structure of words. Cambridge, MA: MIT Press. Bochner, Harry. . Inflection within derivation. The Linguistic Review . –. Bochner, Harry. . Simplicity in Generative Morphology. Berlin: De Gruyter Mouton. Boeckx, Cedric. . Bare syntax. Oxford: Oxford University Press. Boeckx, Cedric. . Defeating lexicocentrism: Outline of elementary syntactic structures. Manuscript, ICREA/Universitat Autònoma de Barcelona. Bögel, Tina. . Pashto (endo)clitics in a parallel architecture. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of LFG , –. Stanford, CA: CSLI. Bögel, Tina, Miriam Butt, Ronald M. Kaplan, Tracy Holloway King, & John Maxwell III. . Prosodic phonology in LFG: A new proposal. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of LFG , –. Stanford, CA: CSLI. Bonami, Olivier. . Periphrasis as collocation. Morphology . –.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





Bonami, Olivier & Sarah Beniamine. . Implicative structure and joint predictiveness. In Vito Pirelli, Claudia Marzi, & Marcello Ferro (eds.), Word structure and word usage. Proceedings of the NetWordS final conference, –. CEUR Workshop Proceedings. Bonami, Olivier & Gilles Boyé. . Suppletion and dependency in inflectional morphology. In Frank van Eynde, Lars Hellan, & Dorothee Beermann (eds.), Proceedings of the th International HPSG Conference, –. Stanford, CA: CSLI. Bonami, Olivier & Gilles Boyé. . La nature morphologique des allomorphies conditionées. In Bernard Fradin (ed.), Actes du Troisième Forum de Morphologie, –. Lille: UMR SILEX, Université Lille . Bonami, Olivier & Gilles Boyé. . Deriving inflectional irregularity. In Stefan Müller (ed.), Proceedings of the th International Conference on Head-Driven Phrase Structure Grammar, Varna, –. Stanford, CA: CSLI. Bonami, Olivier & Gilles Boyé. . French pronominal clitics and the design of Paradigm Function Morphology. In Geert Booij, Luca Ducceschi, Bernard Fradin, Emiliano Guevara, Angeliki Ralli, & Sergio Scalise (eds.), On-line proceedings of the Fifth Mediterranean Morphology Meeting (MMM) Fréjus, – September , –. Bologna: University of Bologna. Bonami, Olivier & Gilles Boyé. . La morphologie flexionnelle est-elle une fonction? In Injoo Choi-Jonin, Marc Duval, & Olivier Soutet (eds.), Typologie et comparatisme, hommage offert à Alain Lemaréchal, –. Leuven: Peeters. Bonami, Olivier & Berthold Crysmann. . Morphotactics in an information-based model of realisational morphology. In Stefan Müller (ed.), Proceedings of the th International Conference on Head-Driven Phrase Structure Grammar, Freie Universität Berlin, –. Stanford, CA: CSLI. Bonami, Olivier & Berthold Crysmann. . The role of morphology in constraint-based lexicalist grammars. In Andrew Hippisley & Gregory Stump (eds.), The Cambridge handbook of morphology, –. 
Cambridge: Cambridge University Press. Bonami, Olivier, Fabiola Henri, & Ana Luís. . The emergence of morphomic structure in Romance-based Creoles. Paper presented at the th International Conference on Historical Linguistics, Osaka, July –. Bonami, Olivier & Ana R. Luís. . Sur la morphologie implicative dans la conjugaison du portugais: une étude quantitative. Mémoires de la Société de Linguistique de Paris . –. Bonami, Olivier & Pollet Samvelian. . Sorani Kurdish person markers and the typology of agreement. Paper presented at the th International Morphology Meeting, Vienna, – February . Bonami, Olivier & Pollet Samvelian. . The diversity of inflectional periphrasis in Persian. Journal of Linguistics (). –. Bonami, Olivier & Gregory Stump. . Paradigm Function Morphology. In Andrew Hippisley & Gregory Stump (eds.), The Cambridge handbook of morphology, –. Cambridge: Cambridge University Press. Bond, Oliver. . A base for canonical negation. In Dunstan Brown, Marina Chumakina, & Greville G. Corbett (eds.), Canonical morphology and syntax, –. Oxford: Oxford University Press. Bonet, E. . Morphology after syntax: Pronominal clitics in Romance. Cambridge, MA: MIT PhD dissertation. Bonet, Eulàlia & Daniel Harbour. . Contextual allomorphy. In Jochen Trommer (ed.), The morphology and phonology of exponence, –. Oxford: Oxford University Press. Booij, Geert. . Dutch morphology. A study of word formation in Generative Grammar. Lisse: The Peter De Ridde Press. Booij, Geert. . The boundary between morphology and syntax: Separable complex verbs in Dutch. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. Booij, Geert. . Against split morphology. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. Booij, Geert. . Inherent versus contextual inflection and the split morphology hypothesis. 
In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





Booij, Geert. . Autonomous morphology and paradigmatic relations. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. Booij, Geert. . Inflection and derivation. In Geert Booij, Christian Lehman, & Joachim Mugdan (eds.), Morphologie/Morphology. Ein internationales Handbuch zur Flexion und Wortbildung/An international handbook on inflection and word-formation, vol. , –. Berlin: De Gruyter Mouton. Booij, Geert. a. Constructional idioms, morphology and the Dutch lexicon. Journal of Germanic Linguistics (). –. Booij, Geert. b. Separable complex verbs in Dutch: A case of periphrastic word formation. In Nicole Dehé, Ray Jackendoff, Andrew Macintyre, & Silke Urban (eds.), Verb-particle explorations, –. Berlin: De Gruyter Mouton. Booij, Geert. c. The morphology of Dutch. Oxford: Oxford University Press. Booij, Geert. . Periphrastic word formation. In Geert Booij, Janet DeCesaris, Angela Ralli, & Sergio Scalise (eds.), Papers from the Third Mediterranean Morphology Meeting, Barcelona, – September , –. Barcelona: IULA Universita Pompeu Fabra. Booij, Geert. . Constructions and the interface between lexicon and syntax. In Henk Aertsen, Mike Hannay, & Gerard Steen (eds.), Words in their place. Festchrift for J.L. Mackenzie, –. Amsterdam: Vrije Universiteit. Booij, Geert. a. Compounding and derivation: Evidence for Construction Morphology. In Wolfgang U. Dressler, Dieter Kastovsky, Oskar Pfeiffer, & Franz Rainer (eds.), Morphology and its demarcations, –. Amsterdam/Philadelphia: John Benjamins. Booij, Geert. b. Morphology and the tripartite parallel architecture of the grammar. In Maria Grossmann & Anna-Maria Thornton (eds.), La formazione delle parole, –. Roma: Bulzoni. Booij, Geert. c. Construction-dependent morphology. Lingue e Linguaggio . –. Booij, Geert. d. The grammar of words. st edn. Oxford: Oxford University Press. Booij, Geert. . 
Inflection and derivation. In Keith Brown (ed.), Encyclopedia of language and linguistics, nd edn, –. Amsterdam: Elsevier. Booij, Geert. a. Construction morphology and the lexicon. In Fabio Montermini, Gilles Boyé, & Nabil Harbout (eds.), Selected Proceedings of the th Décembrettes. Morphology in Toulouse, –. Somerville: Cascadilla Press. Booij, Geert. b. Polysemy and Construction Morphology. In Fons Moerdijk, Ariane van Santen, & Rob Tempelaars (eds.), Leven met woorden, –. Leiden: Instituut voor Nederlandse Lexicologie. Booij, Geert. a. Constructional idioms as products of linguistic change: the aan het + infinitive construction in Dutch. In Alexander Bergs & Gabriele Diewald (eds.), Constructions and language change, –. Berlin: De Gruyter Mouton. Booij, Geert. b. Paradigmatic morphology. In Bernard Fradin (ed.), La raison morphologique. Hommage á la mémoire de Danielle Corbin, –. Amsterdam/Philadelphia: John Benjamins. Booij, Geert. c. Composition et morphologie des constructions. In Dany Amiot (ed.), La composition dans une perspective typologique, –. Artois: Artois Presses Université. Booij, Geert. a. Phrasal names: A constructionist analysis. Word Structure (). –. Booij, Geert. b. Lexical integrity as a formal universal: A constructionist view. In Sergio Scalise, Elisabetta Magni, & Antonietta Bisetto (eds.), Universals of language today, –. Dordrecht: Springer. Booij, Geert. c. Construction morphology and compounding. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of compounding, –. Oxford: Oxford University Press. Booij, Geert. d. A constructional analysis of quasi-incorporation in Dutch. Gengo Kenkyu . –. Booij, Geert. a. Construction Morphology. Oxford: Oxford University Press. Booij, Geert. b. Construction morphology. Language and Linguistics Compass (). –.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





Booij, Geert. c. Compound construction: Schemas or analogy? A Construction Morphology perspective. In Sergio Scalise & Irene Vogel (eds.), Cross-disciplinary issues in compounding, –. Amsterdam/Philadelphia: John Benjamins. Booij, Geert. . The grammar of words, rd edn. Oxford: Oxford University Press. Booij, Geert. . Morphology in Construction Grammar. In Thomas Hoffmann & Graeme Trousdale (eds.), The Oxford handbook of Construction Grammar, –. Oxford: Oxford University Press. Booij, Geert. . Word formation in Construction Grammar. In Peter O. Müller, Ingeborg Ohnheiser, Susan Olsen, & Franz Rainer (eds.), Word-formation. An international handbook of the languages of Europe, vol. , –. Berlin: De Gruyter Mouton. Booij, Geert. . Construction Morphology. In Andrew Hippisley & Gregory Stump (eds.), The Cambridge handbook of morphology, –. Cambridge: Cambridge University Press. Booij, Geert. a. Construction Morphology. In Oxford Research Encyclopedia of Linguistics. DOI: ./acrefore/... Booij, Geert. b. Inheritance and motivation in Construction Morphology. In Nikolas Gisborne & Andrew Hippisley (eds.), Defaults in morphological theory, –. Oxford: Oxford University Press. Booij, Geert (ed.). . The construction of words. Advances in Construction Morphology. Dordrecht: Springer. Booij, Geert & Jenny Audring. . Construction Morphology and the Parallel Architecture of Grammar. Cognitive Science . –. doi:./cogs.. Booij, Geert & Jenny Audring. . Partial motivation, multiple motivation: The role of output schemas in morphology. In Geert Booij (ed.), The construction of words. Advances in Construction Morphology. Dordrecht: Springer. Booij, Geert & Jenny Audring. To appear. Category change in Construction Morphology. In Kristel Van Goethem, Muriel Norde, Evie Coussé, & Gudrun Vanderbauwhede (eds.), Category change from a constructional perspective. 
Amsterdam/Philadelphia: John Benjamins. Booij, Geert & Matthias Hüning. . Affixoids and constructional idioms. In Ronny Boogaart, Timothy Colleman, & Gijsbert Rutten (eds.), Extending the scope of Construction Grammar, –. Berlin: De Gruyter Mouton. Booij, Geert & Rochelle Lieber. . On the simultaneity of morphological and prosodic structure. In Sharon Hargus & Ellen Kaisse (eds.), Studies in Lexical Phonology, –. San Diego, CA: Academic Press. Booij, Geert & Jaap van Marle (eds.). . Yearbook of morphology . Dordrecht: Kluwer. Booij, Geert & Francesca Masini. . Semantic perspectives in Construction Morphology. Manuscript, University of Bologna. Booij, Geert & Francesca Masini. . The role of second order schemas in the construction of complex words. In Laurie Bauer, Livia Kőrtvélyessy, & Pavol Štekauer (eds.), Semantics of complex words, –. Dordrecht: Springer. Booij, Geert & Jerzy Rubach. . Postcyclic versus postlexical rules in Lexical Phonology. Linguistic Inquiry . –. Borer, Hagit. a. In name only. Structuring sense, vol. I. Oxford: Oxford University Press. Borer, Hagit. b. The normal course of events. Structuring sense, vol. II. Oxford: Oxford University Press. Borer, Hagit. . Taking form. Structuring sense, vol. III. Oxford: Oxford University Press. Boretzky, Norbert & Peter Auer (eds.). . Spielarten der Natürlichkeit—Spielarten der Ökonomie: Beiträge zum . Essener Kolloquium über “Grammatikalisierung: Natürlichkeit und Systemökonomie” vom ..–.. an der Universität Essen. Bochum: Brockmeyer. Boretzky, Norbert, Wolfgang U. Dressler, Janez Orešnik, Karmen Teržan, & Wolfgang U. Wurzel (eds.). . Natürlichkeitstheorie und Sprachwandel/Teorija naravnosti in jezikovno spreminjanie. Beiträge zum internationalen Symposium über “Natürlichkeitstheorie und Sprachwandel” an der Universität Maribor vom ..–... Bochum: Brockmeyer.

Börjars, Kersti, Nigel Vincent, & Carol Chapman. . Paradigms, periphrases and pronominal inflection: A feature-based account. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer.
Borsley, Robert. . Weak auxiliaries, complex verbs and inflected complementizers in Polish. In Robert D. Borsley & Adam Przepiórkowski (eds.), Slavic in Head-Driven Phrase Structure Grammar, –. Stanford, CA: CSLI.
Börstell, Carl. . Revisiting reduplication: Toward a description of reduplication in predicative signs in Swedish Sign Language. Stockholm: Stockholms Universitet MA thesis.
Botha, Rudolf. . A base rule theory of Afrikaans synthetic compounding. In Michael Moortgat, Harry van der Hulst, & Teun Hoekstra (eds.), The scope of lexical rules, –. Dordrecht: Foris.
Botha, Rudolf. . Form and meaning in word formation: A study of Afrikaans reduplication. Cambridge: Cambridge University Press.
Bouvet, Danielle. . Le corps et la métaphore dans les langues gestuelles: à la recherche des modes de production des signes. Paris: L’Harmattan.
Bowern, Claire. . Diachrony. In Matthew Baerman (ed.), The Oxford handbook of inflection, –. Oxford, New York: Oxford University Press.
Boyé, Gilles. . Problèmes de morpho-phonologie verbale en français, en espagnol et en italien. Paris: University of Paris VII PhD dissertation.
Boyes Braem, Penny & Rachel Sutton-Spence (eds.). . The hands are the head of the mouth: The mouth as articulator in sign languages. Hamburg: Signum Press.
Bozic, Mirjana & William Marslen-Wilson. . Neurocognitive contexts for morphological complexity: dissociating inflection and derivation. Language and Linguistics Compass . –.
Bradley, Dianne. . Lexical representation of derivational relation. In Mark Aronoff & Mary-Louise Kean (eds.), Juncture, –. Saratoga, CA: Anma Libri.
Braun, Maria. . Word-formation and creolisation. The case of Early Sranan. Tübingen: Niemeyer.
Braun, Maria & Ingo Plag. . How transparent is creole morphology? A study of early Sranan word-formation. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer.
Brekle, Herbert. . Generative Satzsemantik und transformationelle Syntax im System der englischen Nominalkomposition. Munich: Fink.
Brennan, Mary. . Word formation in British Sign Language. Stockholm: Stockholms Universitet.
Brennan, Mary. . The visual world of British Sign Language: An introduction. In David Brien (ed.), Dictionary of British Sign Language/English, –. London: Faber and Faber.
Brennan, Mary. . Encoding and capturing productive morphology. Sign Language and Linguistics (/). –.
Brentari, Diane. . Modality differences in sign language phonology and morphophonemics. In Richard P. Meier, Kearsy Cormier, & David Quinto-Pozos (eds.), Modality and structure in signed and spoken languages, –. Cambridge: Cambridge University Press.
Brentari, Diane & Carol Padden. . Native and foreign vocabulary in American Sign Language: A lexicon with multiple origins. In Diane Brentari (ed.), Foreign vocabulary in sign languages: A crosslinguistic investigation of word formation, –. Mahwah, NJ: Erlbaum.
Bresnan, Joan. . A realistic transformational grammar. In Morris Halle, Joan Bresnan, & George Miller (eds.), Linguistic theory and psychological reality, –. Cambridge, MA: MIT Press.
Bresnan, Joan (ed.). a. The mental representation of grammatical relations. Cambridge, MA: MIT Press.
Bresnan, Joan. b. The passive in lexical theory. In Joan Bresnan (ed.), The mental representation of grammatical relations, –. Cambridge, MA: MIT Press.
Bresnan, Joan. . Morphology competes with syntax: Explaining typological variation in weak crossover effects. In Pilar Barbosa, Danny Fox, Paul Hagstrom, Martha McGinnis, & David Pesetsky (eds.), Is the best good enough? Proceedings from the Workshop on Optimality in Syntax, –. Cambridge, MA: MIT Press.

Bresnan, Joan. . Explaining morphosyntactic competition. In Mark Baltin & Chris Collins (eds.), Handbook of contemporary syntactic theory, –. Oxford: Blackwell Publishers. Bresnan, Joan. a. Lexical functional syntax. Oxford: Blackwell. Bresnan, Joan. b. Optimal syntax. In Joost Dekkers, Frank van der Leeuw, & Jeroen van de Weijer (eds.), Optimality theory: Phonology, syntax and acquisition, –. Oxford: Oxford University Press. Bresnan, Joan, Ash Asudeh, Ida Toivonen, & Stephen Wechsler. . Lexical functional syntax, nd edn. Oxford: Wiley-Blackwell. Bresnan, Joan & Jonni M. Kanerva. . Locative Inversion in Chicheŵa: A case study of factorization in grammar. Linguistic Inquiry . –. Bresnan, Joan & Ronald Kaplan. . Lexical-Functional Grammar: A formal system for grammatical representation. In Joan Bresnan (ed.), The mental representation of grammatical relations, Cambridge, MA: MIT Press. Bresnan, Joan & Sam Mchombo. . Topic, pronoun and agreement in Chicheŵa. Language (). –. Bresnan, Joan & Sam Mchombo. . The Lexical Integrity Principle: Evidence from Bantu. Natural Language and Linguistic Theory (). –. Bresnan, Joan & Lioba Moshi. . Object asymmetries in comparative Bantu syntax. Linguistic Inquiry (). –. Broadwell, George Aaron. . Turkish suspended affixation is lexical sharing. In Miriam Butt & Tracy Holloway King (eds.), The Proceedings of the LFG conference, –. Stanford, CA: CSLI. Broselow, Ellen and John J. McCarthy. /. A theory of internal reduplication. The Linguistic Review . –. Brousseau, Anne-Marie, Sandra Filipovich, & Claire Lefebvre. . Morphological processes in Haitian Creole: The question of substratum and simplification. Journal of Pidgin and Creole Languages (). –. Brown, Dunstan. . Defaults and overrides in morphological description. In Andrew Hippisley & Gregory Stump (eds.), The Cambridge handbook of morphology, –. 
Cambridge: Cambridge University Press. Brown, Dunstan & Marina Chumakina. . What there is and what there might be: An introduction to Canonical Typology. In Dunstan Brown, Marina Chumakina, & Greville G. Corbett (eds.), Canonical morphology and syntax, –. Oxford: Oxford University Press. Brown, Dunstan, Marina Chumakina, & Greville G. Corbett (eds.). . Canonical morphology and syntax. Oxford: Oxford University Press. Brown, Dunstan, Maria Chumakina, Greville G. Corbett, Gergana Popova, & Andrew Spencer. . Defining ‘periphrasis’: Key notions. Morphology (). –. DOI: ./s--- Brown, Dunstan, Greville G. Corbett, Norman Fraser, Andrew Hippisley, & Alan Timberlake. . Russian noun stress and network morphology. Linguistics . –. Brown, Dunstan & Andrew Hippisley. . Network Morphology: A defaults-based theory of word structure. Cambridge: Cambridge University Press. Brown, Roger. . A first language: The early stages. Cambridge, MA: Harvard University Press. Bruck, Anthony, Robert A. Fox, & Michael W. La Galy (eds.). . Papers from the Parasession on Natural Phonology, Chicago: Chicago Linguistic Society. Brysbaert, Marc & Boris New. . Moving beyond Kucera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behavior Research Methods . –. Burani, Cristina & Alfonso Caramazza. . Representation and processing of derived words. Language and Cognitive Processes . –. Burani, Cristina, Dario Salmaso, & Alfonso Caramazza. . Morphological structure and lexical access. Visible Language . –.

Burzio, Luigi. . Paradigmatic and syntagmatic relations in Italian verbal inflection. In Julie Auger, J. Clancy Clements, & Barbara Vance (eds.), Contemporary approaches to Romance linguistics, –. Amsterdam/Philadelphia: John Benjamins.
Butt, Miriam & Tracy Holloway King. . Interfacing phonology with LFG. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of the LFG Conference. Stanford, CA: CSLI.
Butt, Miriam, Tracy Holloway King, María-Eugenia Niño, & Frédérique Segond. . A grammar writer’s cookbook. Stanford, CA: CSLI.
Butt, Miriam, María-Eugenia Niño, & Frédérique Segond. . Multilingual processing of auxiliaries in LFG. In Dafydd Gibbon (ed.), Natural language processing and speech technology, –. Berlin: De Gruyter Mouton.
Butt, Miriam & Louisa Sadler. . Verbal morphology and agreement in Urdu. In Uwe Junghanns & Luka Szucsich (eds.), Syntactic structures and morphological information, –. Berlin: De Gruyter Mouton.
Butterworth, Brian. . Lexical representation. In Brian Butterworth (ed.), Language production, vol. , –. San Diego, CA: Academic Press.
Butterworth, Brian. . Lexical access in speech production. In William Marslen-Wilson (ed.), Lexical representation and process, –. Cambridge, MA: MIT Press.
Bybee, Joan. . Morphology: A study of the relation between meaning and form. Amsterdam/Philadelphia: John Benjamins.
Bybee, Joan. . Regular morphology and the lexicon. Language and Cognitive Processes (). –.
Bybee, Joan. . Phonology and language use. Cambridge: Cambridge University Press.
Bybee, Joan. . Frequency of use and the organization of language. Oxford: Oxford University Press.
Bybee, Joan. . Language, usage and cognition. Cambridge: Cambridge University Press.
Bybee, Joan & Paul Hopper (eds.). . Frequency and the emergence of linguistic structure. Amsterdam/Philadelphia: John Benjamins.
Bybee, Joan & Carol L. Moder. . Morphological classes as natural categories. Language . –.
Bybee, Joan, William Pagliuca, & Revere D. Perkins. . On the asymmetries in the affixation of grammatical material. In William Croft, Suzanne Kemmer, & Keith Denning (eds.), Studies in typology and diachrony. Papers presented to Joseph H. Greenberg on his th birthday, –. Amsterdam/Philadelphia: John Benjamins.
Bybee, Joan, Revere Perkins, & William Pagliuca. . The evolution of grammar: Tense, aspect and modality in the languages of the world. Chicago & London: The University of Chicago Press.
Bybee, Joan & Dan Slobin. . Rules and schemas in the development and use of the English past tense. Language (). –.
Bye, Patrik & Peter Svenonius. . Non-concatenative morphology as epiphenomenon. In Jochen Trommer (ed.), The morphology and phonology of exponence, –. Oxford: Oxford University Press.
Caballero, Gabriela & Alice C. Harris. . A working typology of multiple exponence. In Ferenc Kiefer, Mária Ladányi, & Péter Siptár (eds.), Current issues in morphological theory: (Ir)regularity, analogy and frequency. Selected papers from the th International Morphology Meeting, Budapest, – May , –. Amsterdam/Philadelphia: John Benjamins.
Caha, Pavel. . The nanosyntax of case. Tromsø: University of Tromsø PhD dissertation.
Cahill, Aoife, Michael Burke, Martin Forst, Ruth O’Donovan, Christian Rohrer, Josef van Genabith, & Andy Way. . Treebank-based acquisition of multilingual unification grammar resources. Research on Language and Computation (). –.
Calcagno, Mike. . Interpreting lexical rules. Proceedings of the First Conference on Formal Grammar, –. Barcelona.
Cameracanna, Emanuela, Serena Corazza, Elena Pizzuto, & Virginia Volterra. . How visual spatial-temporal metaphors of speech become visible in sign. In Inger Ahlgren, Brita Bergman, & Mary Brennan (eds.), Perspectives on sign language structure: Papers from the Fifth International

Symposium on Sign Language Research, vol. ; held in Salamanca, Spain, – May , –. Durham, UK: International Sign Linguistics Association.
Camilleri, Maris & Phyllisienne Gauci. . Syncretism and its effects within Maltese nominal paradigms. Folia Linguistica (). –.
Campbell, Lyle & Richard Janda. . Introduction: conceptions of grammaticalization and their problems. Language Sciences . –.
Cappellaro, Chiara. . Overabundance in diachrony: A case study. In Silvio Cruschina, Martin Maiden, & John Charles Smith (eds.), The boundaries of pure morphology: Diachronic and synchronic perspectives, –. Oxford: Oxford University Press.
Cappelle, Bert. . Particle patterns in English: A comprehensive coverage. Leuven: Katholieke Universiteit Leuven PhD dissertation.
Caramazza, Alfonso. . The logic of neuropsychological research and the problem of patient classification in aphasia. Brain and Language . –.
Caramazza, Alfonso. . How many levels of processing are there in lexical access? Cognitive Neuropsychology . –.
Caramazza, Alfonso & Argye E. Hillis. . Levels of representation, co-ordinate frames, and unilateral neglect. Cognitive Neuropsychology . –.
Caramazza, Alfonso, Alessandro Laudanna, & Cristina Romani. . Lexical access and inflectional morphology. Cognition . –.
Cardoso, Hugo C. . The Indo-Portuguese language of Diu. Amsterdam: University of Amsterdam PhD dissertation.
Carr, Charles T. . Nominal compounds in Germanic. London: Milford.
Carstairs, Andrew. . Paradigm economy. Journal of Linguistics (). –.
Carstairs, Andrew. . Allomorphy in inflection. London: Croom Helm.
Carstairs-McCarthy, Andrew. . Inflection classes: Two questions with one answer. In Frans Plank (ed.), Paradigms. The economy of inflection, –. Berlin: De Gruyter Mouton.
Carstairs-McCarthy, Andrew. . Current morphology. London/New York: Routledge.
Carstairs-McCarthy, Andrew. . Inflection classes, gender, and the Principle of Contrast. Language (). –.
Carstairs-McCarthy, Andrew. . How lexical semantics constrains inflectional allomorphy. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer.
Carstairs-McCarthy, Andrew. . Paradigm structure conditions in affixal and nonaffixal inflection. In Andreas Bittner, Dagmar Bittner, & Klaus-Michael Köpcke (eds.), Angemessene Strukturen: Systemorganisation in Phonologie, Morphologie und Syntax, –. Hildesheim: Olms.
Carstairs-McCarthy, Andrew. . Grammatically conditioned allomorphy, paradigmatic structure, and the ancestry constraint. Transactions of the Philological Society (). –.
Carstairs-McCarthy, Andrew. a. Current morphology. London/New York: Routledge.
Carstairs-McCarthy, Andrew. b. How stems and affixes interact. In Sabrina Bendjaballah, Wolfgang U. Dressler, Oskar E. Pfeiffer, & Maria Voeikova (eds.), Morphology . Selected Papers from the th Morphology Meeting, Vienna, – February , –. Amsterdam/Philadelphia: John Benjamins.
Carstairs-McCarthy, Andrew. . Phrases inside compounds: A puzzle for lexicon-free morphology. SKASE Journal of Theoretical Linguistics (). –.
Carstairs-McCarthy, Andrew. . System-congruity and violable constraints in German weak declension. Natural Language and Linguistic Theory (). –.
Carstairs-McCarthy, Andrew. . The evolution of morphology. Oxford: Oxford University Press.
Cazden, Courtney B. . The acquisition of noun and verb inflection. Child Development . –.
Channon, Rachel. . Beads on a string? Representations of repetition in spoken and signed languages. In Richard P. Meier, Kearsy Cormier, & David Quinto-Pozos (eds.), Modality and structure in signed and spoken languages, –. Cambridge: Cambridge University Press.

Chapman, Don & Royal Skousen. . Analogical Modeling and morphological change: The case of the adjectival negative prefix in English. English Language and Linguistics (). –.
Chater, Nick, Alexander Clark, John A. Goldsmith, & Amy Perfors. . Empiricism and language learnability. Oxford: Oxford University Press.
Cheng, Chenxi, Min Wang, & Charles Perfetti. . Acquisition of compound words in Chinese-English bilingual children: Decomposition and cross-language activation. Applied Psycholinguistics . –.
Chersi, Fabian, Marcello Ferro, Giovanni Pezzulo, & Vito Pirrelli. . Topological self-organization and prediction learning support both action and lexical chains in the brain. Topics in Cognitive Science (). –.
Chialant, Doriana & Alfonso Caramazza. . Where is morphology and how is it processed? The case of written word recognition. In Laurie B. Feldman (ed.), Morphological aspects of language processing, –. Hillsdale, NJ: Erlbaum.
Cho, Young-Mee Yu & Peter Sells. . A lexical account of inflectional suffixes in Korean. Journal of East Asian Linguistics (). –.
Chomsky, Noam. . Systems of syntactic analysis. The Journal of Symbolic Logic . –.
Chomsky, Noam. . Syntactic structures. The Hague: Mouton.
Chomsky, Noam. . Review of Verbal Behavior, by B.F. Skinner. Language . –.
Chomsky, Noam. . Formal properties of grammars. In Duncan Luce, Robert Bush, & Eugene Galanter (eds.) (–), Handbook of mathematical psychology ( vols), vol. , –. New York: Wiley.
Chomsky, Noam. . Current issues in linguistic theory. The Hague: Mouton.
Chomsky, Noam. . Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky, Noam. . Remarks on nominalization. In Roderick A. Jacobs & Peter S. Rosenbaum (eds.), Readings in English transformational grammar, –. Waltham, MA: Ginn.
Chomsky, Noam. . Studies on semantics in Generative Grammar. The Hague: Mouton.
Chomsky, Noam. . The logical structure of linguistic theory. New York/London: Plenum Press.
Chomsky, Noam.  []. The morphophonemics of modern Hebrew. New York: Garland Publishing [revision of  University of Pennsylvania MA thesis].
Chomsky, Noam. . Lectures on government and binding. Dordrecht: Foris.
Chomsky, Noam.  [–]. The logical structure of linguistic theory. Chicago: University of Chicago Press [edited version of – manuscript, with  index; earlier edition published by Plenum Press, New York, copyright ].
Chomsky, Noam. . Knowledge of language: Its nature, origin, and use. New York: Praeger.
Chomsky, Noam. . Language and problems of knowledge: The Managua lectures. Cambridge, MA: MIT Press.
Chomsky, Noam. . A Minimalist program for linguistic theory. MIT Occasional Papers in Linguistics . –.
Chomsky, Noam. . A Minimalist program for linguistic theory. In Kenneth Hale & Samuel Jay Keyser (eds.), The view from Building : Essays in linguistics in honor of Sylvain Bromberger, –. Cambridge, MA: MIT Press.
Chomsky, Noam. . The Minimalist Program. Cambridge, MA: MIT Press.
Chomsky, Noam. . Minimalist inquiries. In Roger Martin, David Michaels, & Juan Uriagereka (eds.), Step by step, –. Cambridge, MA: MIT Press.
Chomsky, Noam. . Derivation by Phase. In Michael Kenstowicz (ed.), Ken Hale: A life in language, –. Cambridge, MA: MIT Press.
Chomsky, Noam. . Beyond explanatory adequacy. In Adriana Belletti (ed.), Structures and beyond: The cartography of syntactic structures, vol. , –. New York: Oxford University Press.
Chomsky, Noam. . Three factors in language design. Linguistic Inquiry . –.
Chomsky, Noam. a. The biolinguistic program: Where does it stand today? Manuscript, MIT.

Chomsky, Noam. b. On phases. In Robert Freidin, Carlos P. Otero, & María Luisa Zubizarreta (eds.), Foundational issues in linguistic theory: Essays in honor of Jean-Roger Vergnaud, –. Cambridge, MA: MIT Press. Chomsky, Noam. . Problems of projection. Lingua . –. Chomsky, Noam & Morris Halle. . The sound pattern of English. New York: Harper & Row. Christianson, Kiel, Rebecca L. Johnson, & Keith Rayner. . Letter transpositions within and across morphemes. Journal of Experimental Psychology: Learning, Memory, and Cognition . –. Christoffels, Ingrid K., Christine Firk, & Niels O. Schiller. . Bilingual language control: an eventrelated brain potential study. Brain Research . –. Chumakina, Marina. . Nominal periphrasis: A canonical approach. Studies in Language (). –. Chumakina, Marina & Greville G. Corbett (eds.). . Periphrasis. The role of syntax and morphology in paradigms. Oxford: Oxford University Press. Cinque, Guglielmo. . Adverbs and functional heads. Oxford: Oxford University Press. Clahsen, Harald. . Lexical entries and rules of language: A multi-disciplinary study of German inflection. Behavioral and Brain Sciences . –. Clahsen, Harald & Mayella Almazán. . Syntax and morphology in Williams Syndrome. Cognition . –. Clahsen, Harald & Mayella Almazán. . Compounding and inflection in language impairment: evidence from Williams Syndrome (and SLI). Lingua . –. Clahsen, Harald, Claudia Felser, Kathleen Neubauer, Mikako Sato, & Renita Silva. . Morphological structure in native and nonnative language processing. Language Learning (). –. Clahsen, Harald, Gary Marcus, Susanne Bartke, & Richard Wiese. . Compounding and inflection in German child language. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer. Clahsen, Harald, Ingrid Sonnenstuhl, & James P. Blevins. . 
Derivational morphology in the German mental lexicon: A dual-mechanism account. In Harald Baayen & Robert Schreuder (eds.), Morphological structure in language processing, –. Berlin: De Gruyter Mouton. Clark, Eve V. . The lexicon in acquisition. Cambridge: Cambridge University Press. Clark, Eve V. . First language acquisition, nd edn. Cambridge: Cambridge University Press. Clements, Clancy. . The genesis of a language: The formation and development of Korlai Portuguese. Amsterdam/Philadelphia: John Benjamins. Clements, Clancy & Ana R. Luís. . Contact intensity and the borrowing of bound morphology in Korlai Indo-Portuguese. In Francesco Gardani, Peter Arkadiev, & Nino Amiridze (eds.), Borrowed morphology, –. Berlin: De Gruyter Mouton. Cogill-Koez, Dorothea. a. Signed language classifier predicates: linguistic structures or schematic visual representation? Sign Language and Linguistics (). –. Cogill-Koez, Dorothea. b. A model of signed language ‘Classifier Predicates’ as templated visual representation. Sign Language and Linguistics (). –. Colé, Pascale, Cécile Beauvillain, & Juan Segui. . On the representation and processing of prefixed and suffixed derived words: A differential frequency effect. Journal of Memory and Language . –. Colé, Pascale, Juan Segui, & Marcus Taft. . Words and morphemes as units for lexical access. Journal of Memory and Language . –. Collier, Scott. . The evolution of complexity in Greek noun inflection. Guildford: University of Surrey PhD dissertation. Collins, Allan M. & M. Ross Quillian. . Retrieval time from semantic memory. Journal of Verb Learning and Verbal Behavior . –. Collins, Chris. . Local economy. Cambridge, MA: MIT Press. Collobert, Ronan, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, & Pavel Kuksa. . Natural language processing (almost) from scratch. Journal of Machine Learning Research . –.

Coltheart, Max, Karalyn Patterson, & John C. Marshall. . Deep dyslexia. London: Routledge & Kegan Paul.
Coltheart, Max, Kathleen Rastle, Conrad Perry, Robyn Langdon, & Johannes Ziegler. . DRC: A Dual Route Cascaded model of visual word recognition and reading aloud. Psychological Review . –.
Comrie, Bernard. . Form and function in identifying cases. In Frans Plank (ed.), Paradigms: The economy of inflection, –. Berlin: De Gruyter Mouton.
Comrie, Bernard. . When agreement gets trigger-happy. Transactions of the Philological Society (). –. Special issue on Agreement: A typological perspective edited by Dunstan Brown, Greville G. Corbett, & Carole Tiberius.
Corazza, Serena. . The morphology of classifier handshapes in Italian Sign Language (LIS). In Ceil Lucas (ed.), Sign language research: Theoretical issues, –. Washington, DC: Gallaudet University Press.
Corbett, Greville G. . Morphology and agreement. In Andrew Spencer & Arnold M. Zwicky (eds.), The handbook of morphology, –. Oxford: Blackwell.
Corbett, Greville G. a. Agreement: Canonical instances and the extent of the phenomenon. In Geert Booij, Janet DeCesaris, Angela Ralli, & Sergio Scalise (eds.), Topics in morphology: Selected papers from the Third Mediterranean Morphology Meeting (Barcelona, September –, ), –. Barcelona: Universitat Pompeu Fabra.
Corbett, Greville G. b. Agreement: The range of the phenomenon and the principles of the Surrey Database of Agreement. Transactions of the Philological Society (). –. Special issue on Agreement: A typological perspective edited by Dunstan Brown, Greville G. Corbett, & Carole Tiberius.
Corbett, Greville G. . The canonical approach in typology. In Zygmunt Frajzyngier, Adam Hodges, & David S. Rood (eds.), Linguistic diversity and language theories, –. Amsterdam/Philadelphia: John Benjamins.
Corbett, Greville G. . Agreement. Cambridge: Cambridge University Press.
Corbett, Greville G. a. Canonical typology, suppletion and possible words. Language (). –.
Corbett, Greville G. b. Deponency, syncretism, and what lies between. In Matthew Baerman, Greville G. Corbett, Dunstan Brown, & Andrew Hippisley (eds.), Deponency and morphological mismatches, –. Oxford: Oxford University Press.
Corbett, Greville G. . Determining morphosyntactic feature values: The case of case. In Greville G. Corbett & Michael Noonan (eds.), Case and grammatical relations: Papers in honour of Bernard Comrie, –. Oxford: Oxford University Press.
Corbett, Greville G. . Canonical inflectional classes. In Fabio Montermini, Gilles Boyé, & Jesse Tseng (eds.), Selected Proceedings of the th Décembrettes, –. Somerville, MA: Cascadilla Proceedings Project.
Corbett, Greville G. . Canonical derivational morphology. Word Structure (). –.
Corbett, Greville G. . Higher order exceptionality in inflectional morphology. In Horst J. Simon & Heike Wiese (eds.), Expecting the unexpected: Exceptions in grammar, –. Berlin: De Gruyter Mouton.
Corbett, Greville G. . Features. Cambridge: Cambridge University Press.
Corbett, Greville G. . Canonical morphosyntactic features. In Dunstan Brown, Marina Chumakina, & Greville G. Corbett (eds.), Canonical morphology and syntax, –. Oxford: Oxford University Press.
Corbett, Greville G. . Morphosyntactic complexity: A typology of lexical splits. Language . –.
Corbett, Greville G. & Matthew Baerman. . Prolegomena to a typology of morphological features. Morphology (). –.
Corbett, Greville G., Dunstan Brown, Marina Chumakina, & Andrew Hippisley. . Resources for suppletion: A typological database and a bibliography. In Geert Booij, Emiliano Guevara, Angela

Ralli, Salvatore Sgroi, & Sergio Scalise (eds.), Morphology and linguistic typology. On-line proceedings of the Fourth Mediterranean Morphology Meeting (MMM) Catania – September , –. University of Bologna. Corbett, Greville G. & Sebastian Fedden. . Canonical gender. Journal of Linguistics (). –. Corbett, Greville G., Sebastian Fedden, & Raphael Finkel. . Single versus concurrent feature systems: Nominal classification in Mian. Linguistic Typology (). –. Corbett, Greville G. & Norman M. Fraser. . Network Morphology: A DATR account of Russian nominal inflection. Journal of Linguistics (). –. Corbin, Danielle. . Morphologie dérivationnelle et structuration du lexique ( vols). Tübingen: Niemeyer. Corbin, Danielle. . Préfixes et suffixes: du sens aux catégories. Journal of French Language Studies (). –. Cormier, Kearsy, Jordan Fenlon, Ramas Rentelis, & Adam Schembri. . Lexical frequency in British Sign Language conversation: A corpus-based approach. In Peter K. Austin, Oliver Bond, Lutz Marten, & David Nathan (eds.), Proceedings of the conference on Language Documentation and Linguistic Theory , –. London: School of Oriental and African Studies. Cormier, Kearsy, Adam Schembri, & Martha E. Tyrone. . One hand or two? Nativisation of fingerspelling in ASL and BANZSL. Sign Language & Linguistics (). –. Cormier, Kearsy, Adam Schembri, & Bencie Woll. . Pronouns and pointing in sign languages. Lingua . –. Coseriu, Eugenio. . Sistema, norma y habla. In Eugenio Coseriu, Teoría del lenguaje y lingüística general. Cinco estudios, –. Madrid: Gredos. Costello, Brendan. . Effects of the use of space in the agreement system of lengua de signos española (Spanish Sign Language). Amsterdam: University of Amsterdam PhD dissertation. Cowie, Claire & Christiane Dalton-Puffer. . 
Diachronic word-formation and studying changes in productivity over time: Theoretical and methodological considerations. In E. Díaz Vera (ed.), A changing world of words, –. Amsterdam/New York: Rodopi. Cowper, Elizabeth. . The geometry of interpretable features: INFL in English and Spanish. Language . –. Crasborn, Onno Alex, Els van der Kooij, Dafydd Waters, Bencie Woll, & Johanna Mesch. . Frequency distribution and spreading behavior of different types of mouth actions in three sign languages. Sign Language & Linguistics (). –. Creider, Chet & Richard Hudson. . Inflectional morphology in Word Grammar. Lingua . –. Crepaldi, Davide, Kathleen Rastle, & Colin J. Davis. . Morphemes in their place: Evidence for position specific identification of suffixes. Memory & Cognition . –. Crepaldi, Davide, Kathleen Rastle, Colin J. Davis, & Stephen J. Lupker. . Seeing stems everywhere: Position-independent identification of stem morphemes. Journal of Experimental Psychology. Human Perception and Performance . –. Crocco Galèas, Grazia. . Conversion as morphological metaphor. In Julián Méndez Dosuna & Carmen Pensado (eds.), Naturalists at Krems. Papers from the Workshop on Natural Phonology and Natural Morphology (Krems – July ), –. Salamanca: Ediciones Universidad de Salamanca. Croft, William. . Typology and universals. Cambridge: Cambridge University Press. Croft, William. . Radical construction grammar. Oxford: Oxford University Press. Crouch, Richard, Mary Dalrymple, Ron Kaplan, Tracy King, John Maxwell, & Paula Newman. . XLE Documentation. Palo Alto, CA: Palo Alto Research Centre. Crowley, Terry. . Pidgin and creole morphology. In Silvia Kouwenberg & John Victor Singler (eds.), The handbook of pidgin and creole studies. Oxford: Wiley-Blackwell. Cruschina, Silvio, Martin Maiden, & John Charles Smith (eds.). . The boundaries of pure morphology: Diachronic and synchronic perspectives. 
Oxford: Oxford University Press. Crysmann, Berthold. . Constraint-based coanalysis: Portuguese cliticisation and morphology–syntax interaction in HPSG. Saarbrücken: Universität des Saarlandes and DFKI PhD dissertation.

Crysmann, Berthold. . Floating Affixes in Polish. In Stefan Müller (ed.), Proceedings of the th International Conference on Head-Driven Phrase Structure Grammar, Varna, –. Stanford, CA: CSLI. Crysmann, Berthold & Olivier Bonami. . Establishing order in type-based realisational morphology. In Stefan Müller (ed.), Proceedings of the th International Conference on Head-Driven Phrase Structure Grammar, –. Stanford, CA: CSLI. Crysmann, Berthold & Olivier Bonami. . Variable morphotactics in Information-based Morphology. Journal of Linguistics . –. Cuervo, María Cristina. . Datives at large. Cambridge, MA: MIT PhD dissertation. Culicover, Peter & Ray Jackendoff. . Simpler syntax. Oxford: Oxford University Press. Culicover, Peter & Andrzej Nowak. . Dynamical grammar. Oxford: Oxford University Press. Curme, George O. . A grammar of the German language. London: Macmillan. Curme, George O. . A grammar of the English language. Boston: Heath. Cysouw, Michael. . The paradigmatic structure of person marking. Oxford: Oxford University Press. Cysouw, Michael. . What it means to be rare: The case of person marking. In Zygmund Frajzyngier & David Rood (eds.), Linguistic diversity and language theories, –. Philadelphia/ Amsterdam: John Benjamins. Cysouw, Michael. . Very atypical agreement indeed. Theoretical Linguistics . –. D’Esposito, Mark. . From cognitive to neural models of working memory. Philosophical Transactions of the Royal Society B. Biological Sciences . –. Dąbrowska, Ewa. . Learning a morphological system without a default: The Polish genitive. Journal of Child Language . –. Dąbrowska, Ewa & Marcin Szczerbinski. . Polish children’s productivity with case marking: the role of regularity, type frequency, and phonological diversity. Journal of Child Language (). –. Daelemans, Walter & Antal Van den Bosch. . Memory-based language processing. 
Cambridge: Cambridge University Press. Dahl, Östen. . The growth and maintenance of linguistic complexity. Philadelphia/Amsterdam: John Benjamins. Dalal, Rinky H. & Diane F. Loeb. . Imitative production of regular past tense ‑ed by English-speaking children with specific language impairment. International Journal of Communication Disorders (). –. Dalrymple, Mary. . Lexical Functional Grammar. San Diego, CA: Academic Press. Dalrymple, Mary. . Morphology in the LFG Architecture. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of the LFG Conference, –. Stanford, CA: CSLI. Dalrymple, Mary, Ronald Kaplan, John Maxwell III, & Annie Zaenen (eds.). . Formal issues in Lexical Functional Grammar. Stanford, CA: CSLI. Davidson, Kathryn. . Quotation, demonstration, and iconicity. Manuscript, Harvard University. Davis, Colin J. . The spatial coding model of visual word identification. Psychological Review (). –. Davis, Matthew H. & Kathleen Rastle. . Form and meaning in early morphological processing: Comment on Feldman, O’Connor, and Moscoso del Prado Martín. . Psychonomic Bulletin & Review . –. Davis, Stuart & Natsuko Tsujimura. . Non-concatenative derivation: Other processes. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of derivational morphology, –. Oxford: Oxford University Press. Dawkins, Richard. . Modern Greek in Asia Minor. Cambridge: Cambridge University Press. De Jorio, Andrea.  []. La mimica degli antichi investigata nel gestire napoletano. Napoli: Fibreno [reprint Sala Bolognese: Arnaldo Forni, ; English translation Gesture in Naples and gesture in classical antiquity. Bloomington: Indiana University Press, ].

De Pauw, Guy & Peter W. Wagacha. . Bootstrapping morphological analysis of Gīkūyū using maximum entropy learning. In Proceedings of the th Annual Conference of the International Speech Communication Association (INTERSPEECH ), –. Antwerp. DeGraff, Michel A. F. . Morphology in creole genesis: A prolegomenon. In Michael Kenstowicz (ed.), Ken Hale: A life in language, –. Cambridge, MA: MIT Press. Dékány, Éva. . A profile of the Hungarian DP: The interaction of lexicalization, agreement and linearization with the functional sequence. Tromsø: University of Tromsø PhD dissertation. Dell, Gary S. . A spreading-activation theory of retrieval in sentence production. Psychological Review . –. Dell, Gary & Joana Cholin. . Language production: Computational models. In The Cambridge handbook of psycholinguistics, –. Cambridge: Cambridge University Press. DeLong, Katherine A., Thomas P. Urbach, & Marta Kutas. . Probabilistic word pre-activation during language comprehension inferred from electrical brain activity. Nature Neuroscience . –. Demske, Ulrike. . Zur Geschichte der ung-Nominalisierung im Deutschen: Ein Wandel morphologischer Produktivität. Beiträge zur Geschichte der Deutschen Sprache und Literatur (). –. Deo, Ashwini. . Derivational morphology in inheritance-based lexica: Insights from Pāṇini. Lingua . –. Desmets, Marianne & Florence Villoing. . French VN lexemes: Morphological compounding in HPSG. In Stefan Müller (ed.), Proceedings of the th International Conference on Head-Driven Phrase Structure Grammar, Georg-August-Universität Göttingen, Germany, –. Stanford, CA: CSLI. Di Sciullo, Anna Maria & Daniela Isac. . The asymmetry of Merge. Biolinguistics (). –. Di Sciullo, Anna Maria & Edwin Williams. . On the definition of word. Cambridge, MA: MIT Press. Diependaele, Kevin, Dominiek Sandra, & Jonathan Grainger. . 
Masked cross-modal morphological priming: Unravelling morpho-orthographic and morpho-semantic influences in early word recognition. Language and Cognitive Processes (). –. Diependaele, Kevin, Dominiek Sandra, & Jonathan Grainger. . Semantic transparency and masked morphological priming: The case of prefixed words. Memory & Cognition . –. Dijkhoff, Marta. . Papiamentu word formation: A case study of complex nouns and their relation to phrases and clauses. Amsterdam: University of Amsterdam PhD dissertation. Dima, Corina & Erhard Hinrichs. . Automatic noun compound interpretation using deep neural networks and word embeddings. Proceedings of the th International Conference on Computational Semantics (IWCS ). London, UK. Dimmendaal, Gerrit. . Nilo-Saharan. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of derivational morphology, –. Oxford: Oxford University Press. Dively, Valery. . Signs without hands: Nonhanded signs in American Sign Language. In Valery Dively, Melanie Metzger, Sarah Taub, & Ann Marie Baer (eds.), Signed languages: discoveries from international research, –. Washington, DC: Gallaudet University Press. Dixon, James A. & Virginia A. Marchman. . Grammar and the lexicon: Developmental ordering in language acquisition. Child Development . –. Dixon, R. M. W. & Alexandra Aikhenvald (eds.). a. Word: A cross-linguistic typology. Cambridge: Cambridge University Press. Dixon, R. M. W. & Alexandra Aikhenvald. b. Word: A typological framework. In R. M. W. Dixon & Alexandra Y. Aikhenvald (eds.), Word: A cross-linguistic typology, –. Cambridge: Cambridge University Press. Döhler, Christian. . Morphological complexity in Komnzo verbs. Presentation at the international conference Diversity Linguistics: Retrospect and Prospect, Leipzig, May .

Döhler, Christian. . Komnzo: A language of Southern New Guinea. Canberra: Australian National University PhD dissertation. Dohmes, Petra, Pienie Zwitserlood, & Jens Bölte. . The impact of semantic transparency of morphologically complex words on picture naming. Brain and Language . –. Doke, Clement M. & S. Machabe Mofokeng. . Textbook of Southern Sotho grammar, nd edn. Cape Town: Maskew Miller Longman. Dokulil, Miloš. . Zur Theorie der Wortbildung. Wissenschaftliche Zeitschrift der Karl-Marx-Universität Leipzig, Gesellschafts- und Sprachwissenschaftliche Reihe . –. Doleschal, Ursula & Anna M. Thornton (eds.). . Extragrammatical and marginal morphology. Munich: LINCOM. Donegan, Patricia J. . Constraints and processes in phonological perception. In Katarzyna Dziubalska-Kołaczyk (ed.), Constraints and preferences, –. Berlin: De Gruyter Mouton. Donegan, Patricia J. & David Stampe. . The study of natural phonology. In Daniel A. Dinnsen (ed.), Current approaches to phonological theory, –. Bloomington/London: Indiana University Press. Downing, Laura J. a. Prosodic misalignment and reduplication. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. Downing, Laura J. b. On the prosodic misalignment of onsetless syllables. Natural Language and Linguistic Theory . –. Downing, Laura J. . Morphological constraints on Bantu reduplication. Linguistic Analysis (–). –. Downing, Laura J. . Morphological and prosodic constraints on Kinande verbal reduplication. Phonology . –. Downing, Laura J. . Compounding and tonal non-transfer in Bantu languages. Phonology . –. Downing, Laura J. . Canonical forms in Prosodic Morphology. Oxford: Oxford University Press. Downing, Laura J. . Review of Reduplication: Doubling in morphology. Lingua . –. Downing, Laura J. & Sharon Inkelas. . What is reduplication? Typology and analysis part /: The analysis of reduplication. 
Language and Linguistics Compass (). –. Downing, Pamela. . On the creation and use of English compound nouns. Language (). –. Dowty, David R. . Toward a minimalist theory of syntactic structure. In Pauline Jacobson & Geoff Pullum (eds.), Syntactic discontinuity, –. Dordrecht: Reidel. Dresher, Elan & Harry van der Hulst. . Head-dependent asymmetries in phonology: Complexity and visibility. Phonology . –. Dressler, Wolfgang U. . Diachronic puzzles for natural phonology. In Anthony Bruck, Robert A. Fox, & Michael W. La Galy (eds.), Papers from the parasession on natural phonology, –. Chicago: Chicago Linguistic Society. Dressler, Wolfgang U. a. Grundfragen der Morphonologie. Vienna: Akademie Verlag. Dressler, Wolfgang U. b. Elements of a polycentristic theory of word formation. Wiener Linguistische Gazette . –. Dressler, Wolfgang U. a. On word formation in natural morphology. Wiener Linguistische Gazette . – [also appeared in Proceedings of the th International Congress of Linguists (Tokyo ), –. Tokyo, ]. Dressler, Wolfgang U. b. Language shift and language death—a Protean challenge for the linguist. Folia Linguistica (–). –. Dressler, Wolfgang U. c. General principles of poetic license in word formation. In Horst Geckeler (ed.), Logos Semantikos. Festschrift für Eugenio Coseriu, vol. , –. Berlin: De Gruyter Mouton. Dressler, Wolfgang U. . Subtraction in word formation and its place within a theory of Natural Morphology. Quaderni di Semantica (). –.

Dressler, Wolfgang U. a. On the predictiveness of Natural Morphology. Journal of Linguistics . –. Dressler, Wolfgang U. b. Typological aspects of Natural Morphology. Acta Linguistica Hungarica . –. Dressler, Wolfgang U. c. Morphonology: The dynamics of derivation. Ann Arbor: Karoma Press. Dressler, Wolfgang U. . Explanations in Natural Morphology, illustrated with comparative and agent-noun formation. Linguistics . –. Dressler, Wolfgang U. . Word formation as part of natural morphology. In Wolfgang U. Dressler, Willy Mayerthaler, Oswald Panagl, & Wolfgang U. Wurzel (eds.), Leitmotifs in Natural Morphology, –. Amsterdam/Philadelphia: John Benjamins. Dressler, Wolfgang U. . Preferences vs. strict universals in morphology: Word-based rules. In Michael Hammond & Michael Noonan (eds.), Theoretical morphology: Approaches in modern linguistics, –. New York: Academic Press. Dressler, Wolfgang U. . Prototypical differences between inflection and derivation. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung . –. Dressler, Wolfgang U. a. The cognitive perspective of ‘naturalist’ linguistic models. Cognitive Linguistics . –. Dressler, Wolfgang U. b. Sketching submorphemes within natural morphology. In Julián Méndez Dosuna & Carmen Pensado (eds.), Naturalists at Krems. Papers from the Workshop on Natural Phonology and Natural Morphology (Krems – July ), –. Salamanca: Ediciones Universidad de Salamanca. Dressler, Wolfgang U. a. Diminutivbildung als nicht-prototypische Wortbildungsregel. In KlausMichael Köpcke (ed.), Funktionale Untersuchungen zur deutschen Nominal- und Verbalmorphologie, –. Tübingen: Niemeyer. Dressler, Wolfgang U. b. Evidence of the first stages of morphology acquisition for linguistic theory: Extragrammatic morphology and diminutives. Acta Linguistica Hafniensia (). –. Dressler, Wolfgang U. a. Language death. 
In Rajendra Singh (ed.), Towards a critical sociolinguistics, –. Amsterdam/Philadelphia: John Benjamins. Dressler, Wolfgang U. b. Parallelisms between natural textlinguistics and other components of natural linguistics. Sprachtypologie und Universalienforschung (STUF) (). –. Dressler, Wolfgang U. a. Naturalness. In Geert Booij, Christian Lehmann, & Joachim Mugdan (eds.), Morphologie/Morphology. Ein internationales Handbuch zur Flexion und Wortbildung/An international handbook on inflection and word-formation, vol. , –. Berlin: De Gruyter Mouton. Dressler, Wolfgang U. b. Extragrammatical vs. marginal morphology. In Ursula Doleschal & Anna M. Thornton (eds.), Extragrammatical and marginal morphology, –. Munich: LINCOM. Dressler, Wolfgang U. c. Subtraction. In Geert Booij, Christian Lehmann, & Joachim Mugdan (eds.), Morphologie/Morphology. Ein internationales Handbuch zur Flexion und Wortbildung/An international handbook on inflection and word-formation, vol. , –. Berlin: De Gruyter Mouton. Dressler, Wolfgang U. . Naturalness and morphological change. In Brian D. Joseph & Richard D. Janda (eds.), The handbook of historical linguistics, –. Oxford: Blackwell. Dressler, Wolfgang U. . Word-formation in Natural Morphology. In Pavol Štekauer & Rochelle Lieber (eds.), Handbook of word-formation, –. Dordrecht: Springer. Dressler, Wolfgang U. . Compound types. In Gary Libben & Gonia Jarema (eds.), The representation and processing of compound words, –. Oxford: Oxford University Press. Dressler, Wolfgang U. & Gianfranco Denes. . Word formation in Italian-speaking Wernicke’s and Broca’s aphasics. In Wolfgang U. Dressler & Jacqueline Stark (eds.), Linguistic analyses of aphasic language, –. New York: Springer. Dressler, Wolfgang U. & Annemarie Karpf. . The theoretical relevance of pre- and protomorphology in language acquisition. 
In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer.

Dressler, Wolfgang U., Willi Mayerthaler, Oswald Panagl, & Wolfgang U. Wurzel. . Leitmotifs in natural morphology. Amsterdam/Philadelphia: John Benjamins. Dressler, Wolfgang U. & Lavinia Merlini Barbaresi. a. Italian diminutives as non-prototypical word formation. In Livia Tonelli & Wolfgang U. Dressler (eds.), Natural morphology: Perspectives for the nineties, –. Padova: Unipress. Dressler, Wolfgang U. & Lavinia Merlini Barbaresi. b. Morphopragmatics: Diminutives and intensifiers in Italian, German, and other languages. Berlin: De Gruyter Mouton. Dressler, Wolfgang U. & Karlheinz Mörth. . Produktive und weniger produktive Komposition in ihrer Rolle im Text an Hand der Beziehungen zwischen Titel und Text. In Livio Gaeta & Barbara Schlücker (eds.), Das Deutsche als kompositionsfreudige Sprache. Strukturelle Eigenschaften und systembezogene Aspekte, –. Berlin: De Gruyter Mouton. Dressler, Wolfgang U. & Livia Tonelli (eds.). . Natural Phonology from Eisenstadt: Papers on Natural Phonology from the Fifth International Phonology Meeting, June –. Padova: CLESP. Drews, Etta & Pienie Zwitserlood. . Morphological and orthographic similarity in visual word recognition. Journal of Experimental Psychology: Human Perception and Performance . –. Dryer, Matthew. a. Prefixing versus suffixing in inflectional morphology. In Matthew Dryer, Martin Haspelmath, David Gil, & Bernard Comrie (eds.), World atlas of language structures, –. Oxford: Oxford University Press. Dryer, Matthew. b. Position of case affixes. In Matthew Dryer, Martin Haspelmath, David Gil, & Bernard Comrie (eds.), World atlas of language structures, –. Oxford: Oxford University Press. Dryer, Matthew. c. Position of pronominal possessive affixes. In Matthew Dryer, Martin Haspelmath, David Gil, & Bernard Comrie (eds.), World atlas of language structures, –. Oxford: Oxford University Press. Dryer, Matthew. d. Position of tense-aspect affixes. 
In Matthew Dryer, Martin Haspelmath, David Gil, & Bernard Comrie (eds.), World atlas of language structures, –. Oxford: Oxford University Press. Dulay, Heidi C. & Marina K. Burt. . Natural sequences in child second language acquisition. Language Learning (). –. Duñabeitia, Jon Andoni, Sachiko Kinoshita, Manuel Carreiras, & Dennis Norris. . Is morpho-orthographic decomposition purely orthographic? Evidence from masked priming in the same-different task. Language and Cognitive Processes . –. Duñabeitia, Jon Andoni, Itziar Laka, Manuel Perea, & Manuel Carreiras. . Is milkman a superhero like batman? Constituent morphological priming in compound words. European Journal of Cognitive Psychology . –. Duñabeitia, Jon Andoni, Manuel Perea, & Manuel Carreiras. . The role of the frequency of constituents in compound words: Evidence from Basque and Spanish. Psychonomic Bulletin & Review . –. Dziubalska-Kołaczyk, Katarzyna (ed.). . Intercomponential parallelisms in Natural Linguistics. Special issue of Sprachtypologie und Universalienforschung (). Dziubalska-Kołaczyk, Katarzyna. . Challenges for Natural Linguistics in the st century: A personal view. In Katarzyna Dziubalska-Kołaczyk & Jarosław Weckwerth (eds.), Future challenges for Natural Linguistics, –. Munich: LINCOM. Dziubalska-Kołaczyk, Katarzyna & Jarosław Weckwerth (eds.). . Future challenges for Natural Linguistics. Munich: LINCOM. Eggert, Elmar. . Morphological variation in the construction of French names for inhabitants. In Franz Rainer, Wolfgang U. Dressler, Dieter Kastovsky, & Hans Christian Luschützky (eds.), Variation and change in morphology, –. Amsterdam/Philadelphia: John Benjamins. Eisenberg, Peter. . Suffixreanalyse und Syllabierung. Zum Verhältnis von phonologischer und morphologischer Segmentierung. Folia Linguistica Historica (–). –.

El Yagoubi, Radouane, Valentina Chiarelli, Sara Mondini, Gelsomina Perrone, Morena Danieli, & Carlo Semenza. . Neural correlates of Italian nominal compounds and potential impact of headedness effect: An ERP study. Cognitive Neuropsychology . –. Elman, Jeffrey, Annette Karmiloff-Smith, Elizabeth Bates, Mark Johnson, Domenico Parisi, & Kim Plunkett. . Rethinking innateness: A connectionist perspective on development. Cambridge, MA: MIT Press. Elsen, Hilke. . Affixoide: Nur was benannt wird, kann auch verstanden werden. Deutsche Sprache (). –. Embick, David. . Features, syntax, and categories in the Latin perfect. Linguistic Inquiry (). –. Embick, David. . Linearization and Local Dislocation: Derivational mechanics and interactions. Linguistic Analysis (–). –. Embick, David. . Localism versus globalism in morphology and phonology. Cambridge, MA: MIT Press. Embick, David. . On the targets of phonological realization. Paper presented at the MSPI Workshop at Stanford University.  October . Embick, David & Alec Marantz. . Architecture and blocking. Linguistic Inquiry (). –. Embick, David & Rolf Noyer. . Distributed morphology and the syntax/morphology interface. In Gillian Ramchand & Charles Reiss (eds.), The Oxford handbook of linguistic interfaces, –. Oxford: Oxford University Press. Emmorey, Karen. . The confluence of space and language in signed languages. In Clayton Valli & Ceil Lucas (eds.), Linguistics of American Sign Language: An introduction, rd edn, –. Washington, DC: Gallaudet University Press. Emmorey, Karen. . The psycholinguistics of signed and spoken languages: How biology affects processing. In Gareth Gaskell (ed.), The Oxford handbook of psycholinguistics, –. Oxford: Oxford University Press. Emmorey, Karen, Thomas Grabowski, Stephen McCullough, Laura L. B. Ponto, Richard D. Hichwa, & Hanna Damasio. . 
The neural correlates of spatial language in English and American Sign Language: A PET study with hearing bilinguals. Neuroimage (). –. Engberg-Pedersen, Elisabeth. . Pragmatics of nonmanual behavior in Danish Sign Language. In William H. Edmondson & Fred Karlsson (eds.), SLR : Papers from the Fourth International Symposium on Sign Language Research, Lappeenranta, Finland, July –, , –. Hamburg: Signum. Engberg-Pedersen, Elisabeth. . Space in Danish Sign Language: The semantics and morphosyntax of the use of space in a visual language. Hamburg: Signum. Eppler, Eva. . Emigranto: The syntax of German-English code-switching. Vienna: Braunmüller. Epstein, Samuel D., Hisatsugu Kitahara, & T. Daniel Seely. . Labeling by minimal search: Implications for successive-cyclic A-movement and the conception of the postulate ‘Phase’. Linguistic Inquiry (). –. Epstein, Samuel D. & T. Daniel Seely. . Derivations in minimalism. Cambridge: Cambridge University Press. Erelt, Mati, Tiiu Erelt, & Kristiina Ross. . Eesti keele käsiraamat. Tallinn: Eesti Keele Sihtasutus. Erlenkamp, Sonja. . Syntaktische Kategorien und lexikalische Klassen. Typologische Aspekte der Deutschen Gebärdensprache. Munich: LINCOM. Evans, Nicholas. . A grammar of Kayardild. With historical-comparative notes on Tangkic. Berlin: De Gruyter Mouton. Evans, Nicholas. a. Bininj Gun-Wok. A pandialectal grammar of Mayali, Kunwinjku and Kune,  vols. Canberra: Australian National University. Evans, Nicholas. b. Typologies of agreement: Some problems from Kayardild. Transactions of the Philological Society (). –. Special issue on Agreement: A typological perspective edited by Dunstan Brown, Greville G. Corbett, & Carole Tiberius.

Evans, Nicholas. . Semantic typology. In Jae-Jung Song (ed.), The Oxford handbook of linguistic typology, –. Oxford: Oxford University Press. Evans, Nicholas. . Even more diverse than we had thought: The multiplicity of Trans-Fly languages. In Nicholas Evans & Marian Klamer (eds.), Melanesian languages on the edge of Asia: challenges for the st century, –. Honolulu: Hawai‘i University Press. Evans, Nicholas. . Some problems in the typology of quotation: A canonical approach. In Dunstan Brown, Marina Chumakina, & Greville G. Corbett (eds.), Canonical morphology and syntax, –. Oxford: Oxford University Press. Evans, Nicholas. . Inflection in Nen. In Matthew Baerman (ed.), The Oxford handbook of inflection, –. Oxford: Oxford University Press. Evans, Nicholas, Dunstan Brown, & Greville G. Corbett. . The semantics of gender in Mayali: partially parallel systems and formal implementation. Language . –. Evans, Roger & Gerald Gazdar. . DATR: A language for lexical knowledge representation. Computational Linguistics (). –. Everaert, Martin. . The criteria for reflexivization. In Dunstan Brown, Marina Chumakina, & Greville G. Corbett (eds.), Canonical morphology and syntax, –. Oxford: Oxford University Press. Eynde, Frank Van. . Auxiliaries and verbal affixes: A monostratal cross-linguistic analysis. Leuven: Katholieke Universiteit Leuven Habilitation thesis. Fabb, Nigel. . English suffixation is constrained only by selectional restrictions. Natural Language and Linguistic Theory (). –. Fabb, Nigel. . Compounding. In Andrew Spencer & Arnold M. Zwicky (eds.), The handbook of morphology, –. Oxford: Blackwell. Fábregas, Antonio. . An exhaustive lexicalization account of directional complements. Nordlyd: Tromsø University Working Papers on Language & Linguistics (). –. 
Special Issue on Space, Motion, and Result edited by Monika Bašić, Marina Pantcheva, Minjeong Son, & Peter Svenonius. Fábregas, Antonio & Francesca Masini. . Prominence in morphology: The notion of head. Lingue e Linguaggio XIV(). –. Falk, Yehuda. . The English auxiliary system: A lexical-functional analysis. Language (). –. Falk, Yehuda. . Lexical-functional grammar: An introduction to parallel constraint-based syntax. Stanford, CA: CSLI. Falk, Yehuda. . On the representation of case and agreement. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of the LFG Conference. Stanford, CA: CSLI. Falkenstein, Michael, Joachim Hohnsbein, Jörg Hoormann, & Ludger Blanke. . Effects of errors in choice reaction tasks on the ERP under focused and divided attention. In Cornelis Henri Marie Brunia, Anthony W. K. Gaillard, & Albert Kok (eds.), Psychophysiological Brain Research, vol. , –. Tilburg: Tilburg University Press. Faltz, Leonard. . Some notes on derivational relationships among Navajo verbs. In Andrew Carnie, Eloise Jelinek, & Mary Anne Willie (eds.), Papers in Honor of Ken Hale. Working Papers on Endangered and Less Familiar Languages . –. Cambridge, MA: MIT Working Papers in Linguistics. Fanciullo, Franco. . Per una interpretazione dei verbi italiani a ‘inserto’ velare. Archivio Glottologico Italiano LXXXIII(II). –. Farquharson, Joseph T. . Creole morphology revisited. In Umberto Ansaldo, Stephen Matthews, & Lisa Lim (eds.), Deconstructing creole, –. Amsterdam/Philadelphia: John Benjamins. Farrar, Scott. . An ontological approach to Canonical Typology: Laying the foundations for e-Linguistics. In Dunstan Brown, Marina Chumakina, & Greville G. Corbett (eds.), Canonical morphology and syntax, –. Oxford: Oxford University Press. Faurot, Karla, Dianne Dellinger, Andy Eatough, & Steve Parkhurst. . The identity of Mexican sign as a language. SIL Survey Report. 
Internet Publication [http://www.sil.org/mexico/lenguajes-designos/Gi-Identity-mfs.pdf, accessed  August ].

Fedden, Sebastian & Greville G. Corbett. . Gender and classifiers in concurrent systems: Refining the typology of nominal classification. Glossa: A Journal of General Linguistics (). Art. . –. Federmeier, Kara D. . Thinking ahead: The role and roots of prediction in language comprehension. Psychophysiology . –. Feldman, Jerome A. . From molecule to metaphor: A neural theory of language. Cambridge, MA and London: MIT Press/Bradford. Feldman, Laurie Beth. . Are morphological effects distinguishable from the effects of shared meaning and shared form? Journal of Experimental Psychology: Learning, Memory, and Cognition . –. Feldman, Laurie B., Emily G. Soltano, Matthew J. Pastizzo, & Sarah E. Francis. . What do graded effects of semantic transparency reveal about morphological processing? Brain and Language . –. Fernald, Theodore B. . Athabaskan Satellites and ASL Ion-morphs. In Sandy Chung, Jim McCloskey, & Nathan Sanders (eds.), Jorge Hankamer Webfest. [http://babel.ucsc.edu/Jorge/fernald.html, accessed  August ]. Fernald, Theodore B. & Donna Jo Napoli. . Exploitation of morphological possibilities in signed languages: Comparison of American Sign Language with English. Sign Language & Linguistics (). –. Figueiredo-Silva, Maria Cristina & Fabíola Ferreira Sucupira Sell. . Algumas notas sobre os compostos em português brasileiro e em LIBRAS. [http://www.linguistica.fflch.usp.br/sites/linguistica.fflch.usp.br/files/compostos_usp.pdf, accessed  August ]. Fillmore, Charles. . Berkeley construction grammar. In Thomas Hoffmann & Graeme Trousdale (eds.), The Oxford handbook of Construction Grammar, –. Oxford: Oxford University Press. Fillmore, Charles J., Paul Kay, & Mary Catherine O’Connor. . Regularity and idiomaticity in grammatical constructions: The case of let alone. Language (). –. Finin, Timothy W. . The semantic interpretation of compound nominals. 
Proceedings of the First National Conference on AI, –, Stanford, CA. Finkel, Raphael & Gregory Stump. . Principal parts and morphological typology. Morphology . –. Finkel, Raphael & Gregory Stump. . Principal parts and degrees of paradigmatic transparency. In James P. Blevins & Juliette Blevins (eds.), Analogy in grammar, –. Oxford: Oxford University Press. Fiorentino, Robert & David Poeppel. . Compound words and structure in the lexicon. Language and Cognitive Processes . –. Firth, John Rupert. . Speech. London: Ernest Benn. Firth, John Rupert (ed.). . Papers in linguistics –. London: Oxford University Press. Firth, John Rupert. . A synopsis of linguistic theory, –. In Frank R. Palmer (ed.), Selected Papers of J. R. Firth –, –. Bloomington: Indiana University Press. Fischer, Olga. . Cognitive iconic grounding of reduplication in language. Semblance and Signification: Iconicity in Language and Literature . –. Fischer, Susan D. . Two processes of reduplication in the American Sign Language. Foundations of Language (). –. Fischer, Susan D. . Sign languages East and West. In Piet van Sterkenburg (ed.), Unity and diversity of languages, –. Amsterdam/Philadelphia: John Benjamins. Fischer, Susan D., Yu Hung, & Shih-Kai Liu. . Numeral incorporation in Taiwan Sign Language. In Jung-hsing Chang & Jenny Yichun Kuo (eds.), Language and cognition: Festschrift in honor of James H-Y. Tai on his th birthday, –. Taipei: The Crane Publishing. Flach, Susanne, Kristin Kopf, & Anatol Stefanowitsch. . Skandale und Skandälchen kontrastiv: Das Konfix ‑gate im Deutschen und Englischen. In Rita Heuser & Mirjam Schmuck (eds.), Sonstige Namenarten: Stiefkinder der Onomastik, –. Berlin: De Gruyter Mouton.
OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi
Fleischer, Wolfgang & Irmhild Barz. . Wortbildung der deutschen Gegenwartssprache. . Auflage; völlig neu bearbeitet von Irmhild Barz unter Mitarbeit von Marianne Schröder (De Gruyter Studium). Berlin: De Gruyter Mouton. Flickinger, Dan. . Lexical rules in the hierarchical lexicon. Stanford: Stanford University PhD dissertation. Flickinger, Dan. . On building a more efficient grammar by exploiting types. Natural Language Engineering (). –. Folli, Raffaella & Heidi Harley. . Flavours of v. In Paula Kempchinsky & Roumyana Slabakova (eds.), Aspectual inquiries, –. Dordrecht: Springer. Folli, Raffaella & Heidi Harley. . On the licensing of causatives of directed motion: Waltzing Matilda all over. Studia Linguistica (). –. Forker, Diana. . A canonical approach to the argument/adjunct distinction. Linguistic Discovery (). –. Forker, Diana. . Conceptualization in current approaches of language typology. Acta Linguistica Hafniensia . –. Forster, Kenneth I., Christopher Davis, C. Schoknecht, & R. Carter. . Masked priming with graphemically related forms: Repetition or partial activation? The Quarterly Journal of Experimental Psychology . –. Fortin, Antonio. . The morphology and semantics of expressive affixes. Oxford: University of Oxford PhD dissertation. Fortune, Reo F. . Arapesh. Publications of the American Ethnological Society XIX. New York: J. J. Augustin. Fox, Barbara, Fay Wouk, Steven Fincke, Wilfredo Hernandez Flores, Makoto Hayashi, Minna Laakso, Yael Maschler, Abolghasem Mehrabi, Marja-Leena Sorjonen, Susanne Uhmann, & Hyun Jung Yang. . Morphological self-repair. Self-repair within the word. Studies in Language (). –. Fradin, Bernard. . On morphological entities and the Copy Principle. Acta linguistica hungarica (). –. Fradin, Bernard. . Adéquation terminologique et adéquation descriptive en linguistique: le terme de sous-catégorisation. 
In Bernard Colombat & Marie Savelli (eds.), Métalangage et terminologie linguistique, –. Leuven: Peeters. Franceschina, Florencia. . Morphological or syntactic deficits in near-native speakers? An assessment of some current proposals. Second Language Research (). –. Frank, Anette & Annie Zaenen. . Tense in LFG: Syntax and morphology. In Hans Kamp & Uwe Reyle (eds.), How we say WHEN it happens, –. Tübingen: Niemeyer. Frank, Wright J. . Nuer noun morphology. Buffalo: State University of New York MA thesis. Fraser, Norman M. & Greville G. Corbett. . Defaults in Arapesh. Lingua . –. Frauenfelder, Uli H. & Robert Schreuder. . Constraining psycholinguistic models of morphological processing and representation: The role of productivity. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. Freidin, Robert. . Syntactic structures redux. Syntax (). –. Freidin, Robert & Jean-Roger Vergnaud. . Exquisite connections: Some remarks on the evolution of linguistic theory. Lingua . –. Freudenthal, Daniel, Julian Pine, & Fernand Gobet. . Simulating the referential properties of Dutch, German and English root infinitives in MOSAIC. Language Learning and Development . –. Friederici, Angela D., Erdmut Pfeifer, & Anja Hahne. . Event-related brain potentials during natural speech processing: Effects of semantic, morphological and syntactic violations. Cognitive Brain Research . –. Friedline, Benjamin E. . Challenges in the second language acquisition of derivational morphology: From theory to practice. Pittsburgh: University of Pittsburgh PhD dissertation. Fries, Charles. . American English grammar. New York: D. Appleton-Century.
Fries, Charles. . Linguistics: The study of language. New York: Holt, Rinehart and Winston. Frishberg, Nancy. . Sharp and soft: Two aspects of movement in sign. Working paper, La Jolla, CA: Salk Institute for Biological Studies. Frishberg, Nancy. . Arbitrariness and iconicity: Historical change in American Sign Language. Language (). –. Frishberg, Nancy & Bonnie Gough. . Morphology in American Sign Language. Sign Language & Linguistics (). –. Fromkin, Victoria A. . Speech errors as linguistic evidence. The Hague, Paris: Mouton. Frost, Ram, Tamar Kugler, Avital Deutsch, & Kenneth I. Forster. . Orthographic structure versus morphological structure: Principles of lexical organization in a given language. Journal of Experimental Psychology: Learning, Memory, and Cognition . –. Fruchter, Joseph, Linnaea Stockall, & Alec Marantz. . MEG masked priming evidence for form-based decomposition of irregular verbs. Frontiers in Human Neuroscience, . DOI: http://dx.doi.org/./fnhum... Fuks, Orit & Yishai Tobin. . The signs B and B-bent in Israeli sign language according to the theory of Phonology as Human Behavior. Clinical Linguistics & Phonetics (–). –. Funnell, Elaine. . Morphological errors in acquired dyslexia: A case of mistaken identity. Quarterly Journal of Experimental Psychology . –. Gaeta, Livio. . Italian loan words in the inflexional noun system of Modern German. Folia Linguistica (–). –. Gaeta, Livio. a. Growth of symbols: The inexorable fate of diagrams. In Katarzyna Dziubalska-Kołaczyk & Jarosław Weckwerth (eds.), Future challenges for Natural Linguistics, –. Munich: LINCOM. Gaeta, Livio. b. Quando i verbi compaiono come nomi. Un saggio di morfologia naturale. Milano: Franco Angeli. Gaeta, Livio. c. Umlaut extension in German modals as natural change. Diachronica (). –. Gaeta, Livio. . Suffissi non produttivi. 
In Maria Grossmann & Franz Rainer (eds.), La formazione delle parole in italiano, –. Tübingen: Niemeyer. Gaeta, Livio. . How to live naturally and not be bothered by economy. Folia Linguistica (–). –. Gaeta, Livio. . Is analogy economic? In Fabio Montermini, Gilles Boyé, & Nabil Hathout (eds.), Selected Proceedings of the th Décembrettes: Morphology in Toulouse, –. Somerville, MA: Cascadilla Proceedings Project. Gaeta, Livio. . Die deutsche Pluralbildung zwischen deskriptiver Angemessenheit und Sprachtheorie. Zeitschrift für germanistische Linguistik (). –. Gaeta, Livio. . Analogical change. In Silvia Luraghi & Vit Bubenik (eds.), Continuum companion to historical linguistics, –. London/New York: Continuum. Gaeta, Livio. . Affix ordering and conversion: Looking for the place of zero. Lingue e Linguaggio (). –. Gaeta, Livio. a. Lexeme formation in a conscious approach to the lexicon. In Laurie Bauer, Lívia Körtvélyessy, & Pavol Štekauer (eds.), Semantics of complex words, –. Dordrecht: Springer. Gaeta, Livio. b. Restrictions in word formation. In Peter O. Müller, Ingeborg Ohnheiser, Susan Olsen, & Franz Rainer (eds.), Word-formation. An international handbook of the languages of Europe, vol. , –. Berlin: De Gruyter Mouton. Gaeta, Livio. c. Evaluative morphology and sociolinguistic variation. In Nicola Grandi & Lívia Körtvélyessy (eds.), Edinburgh handbook of evaluative morphology, –. Edinburgh: Edinburgh University Press. Gaeta, Livio. . Irregularität und Systemangemessenheit. In Andreas Bittner & Klaus-Michael Köpcke (eds.), Prozesse der Regularität und Irregularität in Phonologie und Morphologie, –. Berlin: De Gruyter Mouton. Gaeta, Livio & Davide Ricca. . Productivity in Italian word formation: A variable-corpus approach. Linguistics (). –.
Gaeta, Livio & Davide Ricca. . Composita solvantur. Compounds as lexical units or morphological objects? Italian Journal of Linguistics/Rivista di linguistica (). –. Gaeta, Livio & Davide Ricca. . Productivity. In Peter O. Müller, Ingeborg Ohnheiser, Susan Olsen, & Franz Rainer (eds.), Word-formation. An international handbook of the languages of Europe, vol. , –. Berlin: De Gruyter Mouton. Gagné, Christina L. . Lexical and relational influences on the processing of novel compounds. Brain and Language . –. Gagné, Christina L. & Edward J. Shoben. . Influence of thematic relations on the comprehension of modifier-noun compounds. Journal of Experimental Psychology: Learning, Memory and Cognition (). –. Gagné, Christina L. & Thomas L. Spalding. . Constituent integration during the processing of compound words: Does it involve the use of relational structures? Journal of Memory and Language . –. Gagné, Christina L., Thomas L. Spalding, Lauren Figueredo, & Allison C. Mullaly. . Does snow man prime plastic snow? The effect of constituent position in using relational information during the interpretation of modifier-noun phrases. The Mental Lexicon . –. Gallego, Ángel. . Phase theory. Amsterdam/Philadelphia: John Benjamins. Gallego, Ángel (ed.). . Phases: developing the framework. Berlin: Mouton. Gallistel, C. R. & Adam Philip King. . Memory and the computational brain. Why cognitive science will transform neuroscience. Malden & Oxford: Wiley/Blackwell. Ganushchak, Lesya Y., Ingrid K. Christoffels, & Niels O. Schiller. . The use of electroencephalography in language production research: A review. Frontiers in Psychology . . Ganushchak, Lesya Y. & Niels O. Schiller. . Effects of time pressure on verbal self-monitoring: An ERP study. Brain Research . –. Ganushchak, Lesya Y. & Niels O. Schiller. . 
Motivation and semantic context affect brain error-monitoring activity: An event-related brain potentials study. NeuroImage . –. Gardani, Francesco. . Borrowing of inflectional morphemes in language contact. Frankfurt am Main: Peter Lang. Gardani, Francesco, Peter Arkadiev, & Nino Amiridze (eds.). . Borrowed morphology. Berlin: De Gruyter Mouton. Gärtner, Hans-Martin. . Generalized transformations and beyond: Reflections on Minimalist syntax. Berlin: Akademie Verlag. Gaskell, Gareth (ed.). . The Oxford handbook of psycholinguistics. Oxford: Oxford University Press. Gawlitzek-Maiwald, Ira. . How do children cope with variation in the input? The case of German plurals and compounding. In Rosemarie Tracy & Elsa Lattey (eds.), How tolerant is universal grammar? Essays on language learnability and language variation, –. Tübingen: Niemeyer. Gay, Linda S. & W. Bruce Croft. . Interpreting nominal compounds for information retrieval. Information Processing & Management (). –. Gazdar, Gerald. . Paradigm function morphology in DATR. In Lynne Cahill & Richard Coates (eds.), Sussex Papers in General and Computational Linguistics (Cognitive Science Research Paper CSRP ), –. Brighton: University of Sussex. Gazdar, Gerald, Ewan Klein, Geoffrey K. Pullum, & Ivan Sag. . Generalized Phrase Structure Grammar. Cambridge, MA: Harvard University Press. Geertzen, Jeroen, James P. Blevins, & Petar Milin. . The informativeness of linguistic unit boundaries. Italian Journal of Linguistics (). –. Gernsbacher, Morton A. . Resolving  years of inconsistent interactions between lexical familiarity and orthography, concreteness, and polysemy. Journal of Experimental Psychology: General . –. Giegerich, Heinz. . Lexical strata in English. Cambridge: Cambridge University Press. Giegerich, Heinz. . Synonymy blocking and the elsewhere condition: Lexical morphology and the speaker. 
Transactions of the Philological Society (). –.
Giegerich, Heinz. . The English compound stress myth. Word Structure (). –. Gijn, Rik van & Fernando Zúñiga. . Word and the Americanist perspective. Morphology . –. Ginzburg, Jonathan & Ivan Sag. . Interrogative investigations. Stanford, CA: CSLI. Giraudo, Hélène & Jonathan Grainger. . Effects of prime word frequency and cumulative root frequency in masked morphological priming. Language and Cognitive Processes (/). –. Giraudo, Hélène & Jonathan Grainger. . Priming complex words: Evidence for supralexical representation of morphology. Psychonomic Bulletin & Review . –. Gisborne, Nikolas. . The event structure of perception verbs. Oxford: Oxford University Press. Gisborne, Nikolas. . Constructions, word grammar, and grammaticalization. Cognitive Linguistics /. –. Gisborne, Nikolas. . The semantics of definite expressions and the grammaticalization of THE. Studies in Language /. –. Gisborne, Nikolas. . Defaulting to the new Romance synthetic future. In Nikolas Gisborne & Andrew Hippisley (eds.), Defaults in morphological theory, –. Oxford: Oxford University Press. Gisborne, Nikolas & Andrew Hippisley (eds.). . Defaults in morphological theory. Oxford: Oxford University Press. Givón, T. . Isomorphism in the grammatical code: cognitive and biological considerations. Studies in Language (). –. Gleitman, Lila & Henry Gleitman. . Phrase and paraphrase: Some innovative uses of language. New York: W. W. Norton. Glück, Susanne & Roland Pfau. . A distributed morphology account of verbal inflection in German Sign Language. In Tina Cambier-Langeveld, Anikó Lipták, Michael Redford, & Erik Jan van der Torre (eds.), Proceedings of ConSole VII, –. Bergen: University of Bergen. Goad, Heather & Lydia White. . Ultimate attainment of L inflection: Effects of L prosodic structure. 
In Susan Foster-Cohen, Mike Sharwood Smith, Antonella Sorace, & Mits Ota (eds.), Eurosla Yearbook, vol. , –. Amsterdam/Philadelphia: John Benjamins. Goad, Heather, Lydia White, & Jeffrey Steele. . Missing surface inflection in L acquisition: a prosodic account. In Proceedings of BUCLD. Somerville, MA: Cascadilla Press. Goddard, Ives. . Primary and secondary stem derivation in Algonquian. International Journal of American Linguistics (). –. Göksel, Asli & Celia Kerslake. . Turkish: A comprehensive grammar. London and New York: Routledge. Goldberg, Adele. . Constructions. A Construction Grammar approach to argument structures. Chicago: The University of Chicago Press. Goldberg, Adele. . Constructions: A new theoretical approach to language. Trends in Cognitive Studies (). –. Goldberg, Adele. . Constructions at work: The nature of generalizations in language. Oxford: Oxford University Press. Goldberg, Adele. . Constructionist approaches. In Thomas Hoffmann & Graeme Trousdale (eds.), The Oxford handbook of Construction Grammar, –. Oxford: Oxford University Press. Goldschneider, Jennifer & Robert DeKeyser. . Explaining the ‘natural order of L morpheme acquisition’ in English: A meta-analysis of multiple determinants. Language Learning (). –. Goldsmith, John A. a. On information theory, entropy and phonology in the th century. Folia Linguistica . –. Goldsmith, John A. b. Unsupervised learning of the morphology of a natural language. Computational Linguistics (). –. Goldsmith, John. . An algorithm for the unsupervised learning of morphology. Natural Language Engineering . –.
Golston, Chris. . Direct Optimality Theory: Representation as pure markedness. Language . –. Gonnerman, Laura M., Mark S. Seidenberg, & Elaine S. Andersen. . Graded semantic and phonological similarity effects in priming: Evidence for a distributed connectionist approach to morphology. Journal of Experimental Psychology: General . –. Good, Jeff. . The linguistic typology of templates. Cambridge: Cambridge University Press. Goodglass, Harold. . Understanding aphasia. San Diego, CA: Academic Press. Gordon, Matthew. . Syllable weight: Phonetics, phonology, and typology. Los Angeles: UCLA PhD dissertation. Gouskova, Maria. . The reduplicative template in Tonkawa. Phonology . –. Grainger, Jonathan, Pascale Colé, & Juan Segui. . Masked morphological priming in visual word recognition. Journal of Memory and Language (). –. Grainger, Jonathan & Walter J. B. van Heuven. . Modeling letter position coding in printed word perception. In Patrick Bonin (ed.), The mental lexicon, –. New York: Nova Science Publishers. Grech, Roberta, Tracey Cassar, Joseph Muscat, Kenneth P. Camilleri, Simon G. Fabri, Michalis Zervakis, Petros Xanthopoulos, Vangelis Sakkalis, & Bart Vanrumste. . Review on solving the inverse problem in EEG source analysis. Journal of NeuroEngineering and Rehabilitation . . Green, David. . Mental control and the bilingual lexico-semantic system. Bilingualism: Language and Cognition (). –. Greenberg, Joseph H. . A quantitative approach to the morphological typology of language. In Robert F. Spencer (ed.), Method and perspective in anthropology, –. Minneapolis: Minnesota University Press. Greenberg, Joseph H. . A quantitative approach to the morphological typology of language. International Journal of American Linguistics . –. Greenberg, Joseph H. /. Some universals of grammar with particular reference to the order of meaningful elements. In Joseph H. 
Greenberg (ed.), Universals of language. nd edn. , –. Cambridge, MA: MIT Press. Greenberg, Joseph H. . Language universals, with special reference to feature hierarchies (Janua Linguarum, Series Minor ). The Hague: Mouton. Grimshaw, Jane. . On the lexical representation of Romance reflexive clitics. In Joan Bresnan (ed.), The mental representation of grammatical relations, –. Cambridge, MA: MIT Press. Grosjean, François. . A study of timing in a manual and a spoken language: American Sign Language and English. Journal of Psycholinguistic Research (). –. Grosjean, Francois. . An attempt to isolate, and then differentiate, transfer and interference. International Journal of Bilingualism . –. Guilfoyle, Eithne & Máire Noonan. . Functional categories and language acquisition. Canadian Journal of Linguistics (). –. Gumnior, Heidi, Jens Bölte, & Pienie Zwitserlood. . A chatterbox is a box: Morphology in German word production. Language and Cognitive Processes . –. Gunter, Thomas C., Laurie A. Stowe, & Gusbertus Mulder. . When syntax meets semantics. Psychophysiology . –. Gurevich, Olga I. . Construction morphology: The Georgian version. Berkeley: University of California-Berkeley PhD dissertation. Gussmann, Edmund. . Review of The theory of Lexical Phonology by K.P. Mohanan. Journal of Linguistics (). –. Hacken, Pius ten. . Chomskyan linguistics and its competitors. London: Equinox. Hacken, Pius ten. . Early generative approaches. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of compounding, –. Oxford: Oxford University Press. Hacken, Pius ten. . Delineating derivation and inflection. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of derivational morphology, –, Oxford: Oxford University Press.
Hacken, Pius ten. . Three analyses of compounding: A comparison. In Pius ten Hacken (ed.), The semantics of compounding, –. Cambridge: Cambridge University Press. Hagoort, Peter, Colin Brown, & Jolanda Groothusen. . The syntactic positive shift as an ERP measure of syntactic processing. Language and Cognitive Processing . –. Hahn, Ulrike & Ramin C. Nakisa. . German inflection: Single route or dual route? Cognitive Psychology (). –. Haiman, John. a. Dictionaries and encyclopedias. Lingua . –. Haiman, John. b. Hua: A Papuan language of the eastern highlands of New Guinea. Amsterdam/ Philadelphia: John Benjamins. Hale, Kenneth & Samuel Jay Keyser. . On argument structure and the lexical expression of syntactic relations. In Kenneth Hale & Samuel Jay Keyser (eds.), The view from building : Essays in linguistics in honor of Sylvain Bromberger (Current Studies in Linguistics ), –. Cambridge, MA: MIT Press. Hale, Kenneth & Samuel Jay Keyser. . Prolegomenon to a theory of argument structure. Cambridge, MA: MIT Press. Hale, Mark & Charles Reiss. . The phonological enterprise. Oxford: Oxford University Press. Hall, Christopher. . Integrating diachronic and processing principles in explaining the suffixing preference. In John A. Hawkins (ed.), Explaining language universals, –. Cambridge: Cambridge University Press. Halle, Morris. . Prolegomena to a theory of word formation. Linguistic Inquiry (). –. Halle, Morris. . Impoverishment and fission. In Benjamin Bruening, Yoonjung Kang, & Martha Jo McGinnis (eds.), PF: Papers at the interface. MIT Working Papers in Linguistics . –. Cambridge, MA: MIT Press. Halle, Morris & Alec Marantz. . Distributed morphology and the pieces of inflection. In Kenneth Hale & Samuel Jay Keyser (eds.), The view from building : Essays in linguistics in honor of Sylvain Bromberger, –. Cambridge, MA: MIT Press. Halle, Morris & Karuvannur P. 
Mohanan. . Segmental phonology of Modern English. Linguistic Inquiry . –. Halpern, Aaron. . Clitics. In Andrew Spencer & Arnold M. Zwicky (eds.), The handbook of morphology, –. Oxford: Blackwell. Halvorsen, Per-Kristian & Ronald M. Kaplan. . Projections and semantic description in Lexical-Functional Grammar. Proceedings of the International Conference on Fifth Generation Computer Systems (FGCS-), –. Tokyo. Hammarström, Harald & Lars Borin. . Unsupervised learning of morphology. Computational Linguistics (). –. Hammond, Michael & Michael Noonan (eds.). . Theoretical morphology. Approaches in modern linguistics. San Diego, CA: Academic Press. Harder, Peter. . Meaning in mind and society: A functional contribution to the social turn in cognitive linguistics. Berlin: De Gruyter Mouton. Harder, Rita. . Meervoud in de NGT. Manuscript, Nederlands Gebarencentrum. Harley, Heidi. . Subjects, events, and licensing. Cambridge, MA: MIT PhD dissertation. Harley, Heidi. . Compounding in Distributed Morphology. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of compounding, –. Oxford: Oxford University Press. Harley, Heidi. . On the identity of roots. Theoretical Linguistics (–). –. Harley, Heidi & Rolf Noyer. . State-of-the-article: Distributed Morphology. Glot International (). –. Harley, Heidi & Rolf Noyer. . Formal versus encyclopedic properties of vocabulary: Evidence from nominalisations. In Bert Peeters (ed.), The lexicon–encyclopedia interface, –. Oxford: Elsevier. Harley, Heidi & Elizabeth Ritter. . Person and number in pronouns: A feature-geometric analysis. Language (). –.
Harley, Heidi & Mercedes Tubino Blanco. . Cycles, vocabulary items, and stem forms in Hiaki. In Ora Matushansky & Alec Marantz (eds.), Distributed Morphology today. Cambridge, MA: MIT Press. Harm, Michael W. & Mark S. Seidenberg. . Phonology, reading acquisition, and dyslexia: Insights from connectionist models. Psychological Review (). –. Harnisch, Rüdiger. . Grundform- und Stamm-Prinzip in der Substantivmorphologie des Deutschen. Heidelberg: Winter. Harris, Alice. . Revisiting anaphoric islands. Language . –. Harris, Alice. . Multiple exponence. Oxford: Oxford University Press. Harris, Zellig. . Morpheme alternants in linguistic analysis. Language . –. Harris, Zellig. . Discontinuous morphemes. Language . –. Harris, Zellig. . From morpheme to utterance. Language . –. Harris, Zellig. . Methods in structural linguistics. th impression , entitled Structural linguistics. Chicago/London: The University of Chicago Press. Harris, Zellig. . From phoneme to morpheme. Language . –. Harris, Zellig. . From morpheme to utterance. In Martin Joos (ed.), Readings in Linguistics I, th edn, –. Chicago: The University of Chicago Press [reprint of Harris ]. Harris, Zellig. . A theory of language and information. Oxford: Clarendon Press. Hartmann, Stefan. a. The rise and fall of word-formation patterns: A historical cognitive-linguistic approach to word-formation change. In Paula Rodríguez-Puente, Teresa Fanego, Evelyn Gandón-Chapela, Sara María Riveiro-Outeiral, & María Luisa Roca-Varela (eds.), Current research in applied linguistics: Issues on language and cognition, –. Newcastle: Cambridge Scholars Publishing. Hartmann, Stefan. b. Wortbildungswandel im Spiegel der Sprachtheorie: Paradigmen, Konzepte, Methoden. In Vilmos Ágel & Andreas Gardt (eds.), Paradigmen der aktuellen Sprachgeschichtsforschung, –. (Jahrbuch für Germanistische Sprachgeschichte ). 
Berlin: De Gruyter Mouton. Haspelmath, Martin. . Schemas in Hausa plural formation: Product-orientation, and motivation vs. source-orientation and generation. Buffalo Working Papers in Linguistics . –. Haspelmath, Martin. . Grammaticalization theory and heads in morphology. In Mark Aronoff (ed.), Morphology now, –. Albany, NY: State University of New York Press. Haspelmath, Martin. . Word-class-changing inflection and morphological theory. In Geert Booij and Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. Haspelmath, Martin. . Understanding morphology. London: Arnold. Haspelmath, Martin. . The geometry of grammatical meaning: Semantic maps and crosslinguistic comparison. In Michael Tomasello (ed.), The new psychology of language, vol. II, –. Mahwah, NJ: Erlbaum. Haspelmath, Martin. . Pre-established categories don’t exist: Consequences for language description and typology. Linguistic Typology (). –. Haspelmath, Martin. . An empirical test of the Agglutination Hypothesis. In Sergio Scalise, Elisabetta Magni, & Antonietta Bisetto (eds.), Universals of language today, –. Dordrecht: Springer. Haspelmath, Martin. . Framework-free grammatical theory. In Bernd Heine & Heiko Narrog (eds.), The Oxford handbook of linguistic analysis, –. Oxford: Oxford University Press. Haspelmath, Martin. . Comparative concepts and descriptive categories in crosslinguistic studies. Language (). –. Haspelmath, Martin. . The indeterminacy of word segmentation and the nature of morphology and syntax. Folia Linguistica (). –. Haspelmath, Martin. . How to compare major word-classes across the world’s languages. In Thomas Graf, Denis Paperno, Anna Szabolcsi, & Jos Tellings (eds.), Theories of everything: in honor of Edward Keenan, – (UCLA Working Papers in Linguistics ). Los Angeles: UCLA. Haspelmath, Martin. . Defining vs. 
diagnosing linguistic categories: A case study of clitic phenomena. In Joanna Błaszczak, Dorota Klimek-Jankowska, & Krzysztof Migdalski (eds.), How categorical are categories?, –. Berlin: De Gruyter Mouton.
Haspelmath, Martin & Andrea Sims. . Understanding morphology, nd edn. London: Hodder Education. Hathout, Nabil, Fabio Montermini, & Ludovic Tanguy. . Extensive data for morphology: Using the World Wide Web. Journal of French Language Studies (). –. Hathout, Nabil & Fiammetta Namer. . Discrepancy between form and meaning in word-formation: The case of over- and under-marking in French. In Franz Rainer, Francesco Gardani, Hans Christian Luschützky, & Wolfgang U. Dressler (eds.), Morphology and meaning: Selected papers from the th International Morphology Meeting, Vienna, February , –. Amsterdam/Philadelphia: John Benjamins. Haugen, Einar. . The analysis of linguistic borrowing. Language . –. Haugen, J. . Morphology at the interfaces: Reduplication and noun incorporation in Uto-Aztecan. Amsterdam/Philadelphia: John Benjamins. Haugen, Jason. . Reduplication in Distributed Morphology. Proceedings of the th Arizona Linguistics Circle Conference (ALC ). Coyote Papers vol. . Tucson, AZ: Department of Linguistics, University of Arizona. Haugen, Jason. . Readjustment rejected? In Daniel Siddiqi & Heidi Harley (eds.), Morphological metatheory, –. Amsterdam/Philadelphia: John Benjamins. Haugen, Jason & Daniel Siddiqi. . Roots and the derivation. Linguistic Inquiry (). –. Haugen, Jason & Daniel Siddiqi. . Towards a restricted realization theory: Multimorphemic monolistemicity, portmanteaux, and post-linearization spanning. In Daniel Siddiqi & Heidi Harley (eds.), Morphological metatheory, –. Amsterdam/Philadelphia: John Benjamins. Hauser, Marc D., Noam Chomsky, & W. Tecumseh Fitch. . The faculty of language: What is it, who has it, and how did it evolve? Science . –. Hawkins, John A. & Anne Cutler. . Psycholinguistic factors in morphological asymmetry. In John A. Hawkins (ed.), Explaining language universals, –. Cambridge: Cambridge University Press. 
Hawkins, Roger & Cecilia Yuet-hung Chan. . The partial availability of Universal Grammar in second language acquisition: The ‘failed functional features hypothesis’. Second Language Research . –. Hay, Jennifer. . Lexical frequency in morphology: Is everything relative? Linguistics (). –. Hay, Jennifer. . From speech perception to morphology: Affix-ordering revisited. Language . –. Hay, Jennifer. . Causes and consequences of word structure. London/New York: Routledge. Hay, Jennifer B. & Harald Baayen. . Parsing and productivity. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. Hay, Jennifer B. & Harald Baayen. . Shifting paradigms: Gradient structure in morphology. Trends in Cognitive Sciences (). –. Hay, Jennifer & Ingo Plag. . What constrains possible suffix combinations? On the interaction of grammatical and processing restrictions in derivational morphology. Natural Language and Linguistic Theory (). –. Hayes, Bruce. . Metrical stress theory: Principles and case studies. Chicago: The University of Chicago Press. Haynie, Hannah, Claire Bowern, & Hannah LaPalombara. . Sound symbolism in the languages of Australia. PLoS ONE (). e. DOI: https://doi.org/./journal.pone.. Heine, Bernd, Ulrike Claudi, & Friederike Hünnemeyer. . Grammaticalization: A conceptual framework. Chicago: The University of Chicago Press. Heine, Bernd & Heiko Narrog. (eds.). . The Oxford handbook of linguistic analysis, nd edn. Oxford: Oxford University Press. Hempel, Carl G. . Aspects of scientific explanation and other essays in the philosophy of science. New York: Free Press.
Henri, Fabiola. . A constraint-based approach to verbal constructions in Mauritian. Paris: University of Mauritius and Université Paris Diderot PhD dissertation. Henson, Richard N. . Short-term memory for serial order: The start-end model. Cognitive Psychology . –. Henzen, Walter. . Deutsche Wortbildung. Dritte, durchgesehene und ergänzte Auflage. (Sammlung kurzer Grammatiken germanischer Dialekte). Tübingen: Niemeyer. Herlofsky, William J. . What you see is what you get: Iconicity and metaphor in the visual language of written and signed poetry: A cognitive poetic approach. In Wolfgang G. Müller & Olga Fischer (eds.), From sign to signing, –. Amsterdam/Philadelphia: John Benjamins. Herrmann, Annika & Markus Steinbach (eds.). . Nonmanuals in sign language. vol. . Amsterdam/Philadelphia: John Benjamins. Hill, Jane H. . A grammar of Cupeño. Berkeley: University of California Press. Hilpert, Martin. . Constructional change in English: Developments in allomorphy, word formation, and syntax. Cambridge: Cambridge University Press. Hilpert, Martin. . Construction grammar and its application to English. Edinburgh: Edinburgh University Press. Hilpert, Martin. . From hand-carved to computer-based. Noun-participle compounding and the upward strengthening hypothesis. Cognitive Linguistics (). –. Himmelmann, Nikolaus P. . Lexicalization and grammaticization: Opposite or orthogonal? In Walter Bisang, Nikolaus Himmelmann, & Björn Wiemer (eds.), What makes grammaticalization? A look from its fringes and its components, –. Berlin: De Gruyter Mouton. Himmelmann, Nikolaus P. . Asymmetries in the prosodic phrasing of function words: Another look at the suffixing preference. Language (). –. Hinton, Leanne, Johanna Nichols, & John J. Ohala (eds.). . Sound symbolism. Cambridge: Cambridge University Press. Hippisley, Andrew. . The word as a universal category. In John R. 
Taylor (ed.), The Oxford handbook of the word, –. Oxford: Oxford University Press. Hittmair-Delazer, Margarete, Barbara Andree, Carlo Semenza, Ria De Bleser, & Thomas Benke (). Naming by German compounds. Journal of Neurolinguistics . –. Hjelmslev, Louis. . La structure morphologique (Types de système). Rapports du Ve Congrès international des linguistes, –. Hjelmslev, Louis.  []. Prolegomena to a theory of language. Translated by Francis J. Whitfield. Madison, WI: University of Wisconsin Press. Hockett, Charles F. . Problems of morphemic analysis. Language (). –. Hockett, Charles F. . A formal statement of morphemic analysis. Studies in Linguistics (). –. Hockett, Charles F. . Review of The mathematical theory of communication by Claude L. Shannon and Warren Weaver. Language . –. Hockett, Charles F. . Two models of grammatical description. Word (–). –. Hockett, Charles F. . A manual of phonology. Bloomington: Indiana University Publications in Anthropology and Linguistics, Memoir . Hockett, Charles F. . A course in modern linguistics. New York: Macmillan. Hockett, Charles F. a. Problems of morphemic analysis. In Martin Joos (ed.), Readings in linguistics I, th edn, –. Chicago: The University of Chicago Press [reprint; original version: Hockett ]. Hockett, Charles F. b. Two models of grammatical description. In Martin Joos (ed.), Readings in Linguistics I, th edn, –. Chicago: The University of Chicago Press. [reprint; original version: Hockett ]. Hockett, Charles F. . The Yawelmani basic verb. Language . –. Hockett, Charles F. (ed.). . A Leonard Bloomfield anthology. Chicago: The University of Chicago Press.






Hockett, Charles F. . Refurbishing our foundations: Elementary linguistics from an advanced point of view. Amsterdam/Philadelphia: John Benjamins. Hoeksema, Jack. . Head types in morpho-syntax. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer. Hoekstra, Teun & Nina Hyams. . Aspects of root infinitives. Lingua . –. Hoffmann, Thomas & Graeme Trousdale (eds.). . The Oxford handbook of Construction Grammar. Oxford: Oxford University Press. Hohenberger, Annette, Daniela Happ, & Helen Leuninger. . Modality-dependent aspects of sign language production: Evidence from slips of the hands and their repairs in German sign language. In Richard P. Meier, Kearsy Cormier, & David Quinto-Pozos (eds.), Modality and structure in signed and spoken languages, –. Cambridge: Cambridge University Press. Holm, J. . Pidgins and Creoles. Theory and structure, vol. . Cambridge: Cambridge University Press. Holmes, Virginia M. & J. Kevin O’Regan. . Reading derivationally affixed French words. Language and Cognitive Processes . –. Hoppe, Gabriele. . Das Präfix ex-. Beiträge zur Lehn-Wortbildung: Mit einer Einführung in den Gegenstandsbereich von Gabriele Hoppe und Elisabeth Link. Tübingen: Gunter Narr. Hopper, Paul J. & Elizabeth Closs Traugott. . Grammaticalization. nd edn. Cambridge: Cambridge University Press. Hornstein, Norbert. . A theory of syntax: Basic operations and UG. Cambridge: Cambridge University Press. Hornstein, Norbert, Jairo Nunes, & Kleanthes Grohmann. . Understanding Minimalism. Cambridge: Cambridge University Press. Hruschka, Daniel J., Morten H. Christiansen, Richard A. Blythe, William Croft, Paul Heggarty, Salikoko S. Mufwene, Janet B. Pierrehumbert, & Shana Poplack. . Building social cognitive models of language change. Trends in Cognitive Sciences (). –. Hsieh, Li, Laurence Leonard, & Lori Swanson. . 
Some differences between English plural noun inflections and third singular verb inflections in the input: The contributions of frequency, sentence position, and duration. Journal of Child Language . –. Huck, Geoffrey J. & John A. Goldsmith. . Ideology and linguistic theory: Noam Chomsky and the Deep Structure debates. London: Routledge. Hudson, Richard. . Word Grammar. Oxford: Blackwell. Hudson, Richard. . Zwicky on heads. Journal of Linguistics . –. Hudson, Richard. . English word grammar. Oxford: Blackwell. Hudson, Richard. . English subject–verb agreement. English Language and Linguistics (). –. Hudson, Richard. . *I amn’t. Language . –. Hudson, Richard. . Gerunds without phrase structure. Natural Language and Linguistic Theory. . –. Hudson, Richard. . Language networks: Towards a new Word Grammar. Oxford: Oxford University Press. Hudson, Richard. . An introduction to Word Grammar. Oxford: Oxford University Press. Hudson, Richard. . French pronouns in cognition. In Nikolas Gisborne & Andrew Hippisley (eds.), Defaults in morphological theory, –. Oxford: Oxford University Press. Hulst, Harry van der. . On the parallel organization of linguistic components. Lingua . –. Hulstijn, Jan, Rod Ellis, & Søren Eskildsen. . Orders and sequences in the acquisition of L morphosyntax,  years on: An introduction to the Special Issue. Language Learning (). –. Humboldt, Wilhelm von.  []. On language: On the diversity of human language construction and its influence on the mental development of the human species [orig. Über die Verschiedenheit des menschlichen Sprachbaus und seinen Einfluss auf die geistige Entwicklung des

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





Menschengeschlechts], edited by Michael Losonsky, translated by Peter Heath. Cambridge: Cambridge University Press. Hüning, Matthias. . Het tegaan van een morfologische categorie: over het Middelnederlandse verbaalprefix te-. In Ariane van Santen & Marijke van der Wal (eds.), Taal in tijd en ruimte: Voor Cor van Bree bij zijn afscheid als hoogleraar Historische Taalkunde en Taalvariatie aan de Vakgroep Nederlands van de Rijksuniversiteit Leiden, –. Leiden: Stichting Neerlandistiek Leiden. Hüning, Matthias. . Woordensmederij. De geschiedenis van het suffix -erij. (LOT International Series ). The Hague: Holland Academic Graphics. Hüning, Matthias. . Monica en andere gates. Het ontstaan van een morfologisch procédé. Nederlandse taalkunde (). –. Hüning, Matthias. . Semantic niches and analogy in word formation: Evidence from contrastive linguistics. Languages in Contrast (). –. Hüning, Matthias & Geert Booij. . From compounding to derivation: The emergence of derivational affixes through “constructionalization”. Folia Linguistica (). –. Special issue on Refining grammaticalization edited by Ferdinand von Mengden & Horst Simon. Hüning, Matthias & Barbara Schlücker. . Konvergenz und Divergenz in der Wortbildung. Komposition im Niederländischen und im Deutschen. In Antje Dammel, Sebastian Kürschner, & Damaris Nübling (eds.), Kontrastive Germanistische Linguistik, vol. , –. Hildesheim, Zürich, New York: Georg Olms Verlag. Hüning, Matthias & Barbara Schlücker. . Multi-word expressions. In Peter O. Müller, Ingeborg Ohnheiser, Susan Olsen, & Franz Rainer (eds.), Word-formation. An international handbook of the languages of Europe, vol. , –. Berlin: De Gruyter Mouton. Hyman, Larry M. . Suffix ordering in Bantu: A morphocentric approach. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer. Hyman, Larry M. a. 
The natural history of verb stem reduplication in Bantu. Morphology . –. Hyman, Larry M. b. How (not) to do phonological typology: The case of pitch-accent. Language Sciences (–). –. Special issue on Data and theory: Papers in phonology in celebration of Charles W. Kisseberth edited by Michael J. Kenstowicz. Hyman, Larry M. . In defense of Prosodic Typology: A response to Beckman & Venditti. Linguistic Typology (). –. Hyman, Larry M., Sharon Inkelas, & Galen Sibanda. . Morpho-syntactic correspondence in Bantu reduplication. In Kristin Hanson & Sharon Inkelas (eds.), The nature of the word: Essays in honor of Paul Kiparsky, –. Cambridge, MA: MIT Press. Hymes, Dell & John Fought. . American Structuralism. The Hague: Mouton. Iacobini, Claudio & Francesca Masini. . The emergence of verb-particle constructions in Italian: Locative and actional meanings. Morphology (). –. Igartua, Iván. . From cumulative to separative exponence in inflection: Reversing the morphological cycle. Language (). –. Indefrey, Peter. . The spatial and temporal signatures of word production components: A critical update. Frontiers in Psychology . . Indefrey, Peter & Willem J. M. Levelt. . The spatial and temporal signatures of word production components. Cognition . –. Ingram, John C. L. . Neurolinguistics. An introduction to spoken language processing and its disorders. Cambridge: Cambridge University Press. Inhoff, Albrecht W., Deborah Briihl, & Jill Schwartz. . Compound word effects differ in reading, on-line naming, and delayed naming tasks. Memory & Cognition . –. Inkelas, Sharon. . The theoretical status of morphologically conditioned phonology: A case study of dominance effects. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer.






Inkelas, Sharon. . Morphological Doubling Theory: Evidence for morphological doubling in reduplication. In Bernhard Hurch (ed.), with editorial assistance of Veronika Mattes, Studies on reduplication, –. Berlin: De Gruyter Mouton. Inkelas, Sharon. . A dual theory of reduplication. Linguistics . –. Inkelas, Sharon. . Reduplication. In Jochen Trommer (ed.), The morphology and phonology of exponence, –. Oxford: Oxford University Press. Inkelas, Sharon. . Non-concatenative derivation: Reduplication. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of derivational morphology, –. Oxford: Oxford University Press. Inkelas, Sharon & Laura J. Downing. . What is reduplication? Typology and analysis part /: The typology of reduplication. Language and Linguistics Compass (). –. Inkelas, Sharon & Cemil Orgun. . Level (non)-ordering in recursive morphology: Evidence from Turkish. In Steven Lapointe, Diane Brentari, & Patrick Farrell (eds.), Morphology and its relation to phonology and syntax, –. Stanford, CA: CSLI. Inkelas, Sharon & Cheryl Zoll. . Reduplication: Doubling in morphology. Cambridge: Cambridge University Press. Inkelas, Sharon & Cheryl Zoll. . Is grammar dependence real? A comparison between cophonological and indexed constraint approaches to morphologically conditioned phonology. Linguistics . –. Isel, Frédéric, Thomas C. Gunter, & Angela D. Friederici. . Prosody-assisted head-driven access to spoken German compounds. Journal of Experimental Psychology: Learning, Memory, and Cognition . –. Ito, Junko. . Prosodic minimality in Japanese. Proceedings of CLS , volume : Parasession on the Syllable in Phonetics and Phonology, –. Ito, Junko & Armin Mester. . Japanese phonology. In John Goldsmith (ed.), The handbook of phonological theory, –. Oxford: Blackwell. Iverson, Gregory & Joseph C. Salmons. . 
Aspiration and laryngeal representation in Germanic. Phonology . –. Jackendoff, Ray. . Morphological and semantic regularities in the lexicon. Language (). – [reprinted in Jackendoff b, –]. Jackendoff, Ray. . Semantics and cognition. Cambridge, MA: MIT Press. Jackendoff, Ray. . Semantic structures. Cambridge, MA: MIT Press. Jackendoff, Ray. . The architecture of the language faculty. Cambridge, MA: MIT Press. Jackendoff, Ray. . Foundations of language: Brain, meaning, grammar, evolution. Oxford: Oxford University Press. Jackendoff, Ray. . A Parallel Architecture perspective on language processing. Brain Research . –. Jackendoff, Ray. . Construction after construction and its theoretical challenges. Language (). –. Jackendoff, Ray. a. Compounding in the Parallel Architecture and Conceptual Semantics. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of compounding, –. Oxford: Oxford University Press. Jackendoff, Ray. b. The Parallel Architecture and its place in cognitive science. In Bernd Heine & Heiko Narrog (eds.), The Oxford handbook of linguistic analysis, –. Oxford: Oxford University Press. Jackendoff, Ray. a. The ecology of English noun-noun compounds. In Ray Jackendoff (b), Meaning and the lexicon: The Parallel Architecture –, –. Oxford: Oxford University Press. Jackendoff, Ray. b. Meaning and the lexicon: The Parallel Architecture –. Oxford: Oxford University Press.






Jackendoff, Ray. a. Conceptual semantics. In Claudia Maienborn, Klaus von Heusinger, & Paul Portner (eds.), Semantics: An international handbook of natural language meaning, vol. , –. Berlin: De Gruyter Mouton. Jackendoff, Ray. b. What is the human language faculty? Two views. Language (). –. Jackendoff, Ray. . Constructions in the Parallel Architecture. In Thomas Hoffmann & Graeme Trousdale (eds.), The Oxford handbook of Construction Grammar, –. Oxford: Oxford University Press. Jackendoff, Ray & Jenny Audring. Forthcoming. The Texture of the Lexicon: Relational Morphology in the Parallel Architecture. Oxford: Oxford University Press. Jacques, Guillaume & Anton Antonov. . Direct/inverse systems. Language and Linguistics Compass (). –. Jaeger, Jeri J., Alan H. Lockwood, David L. Kemmerer, Robert D. Van Valin Jr, Brian W. Murphy, & Hanif G. Khalak. . A positron emission tomography study of regular and irregular verb morphology in English. Language . –. Jakobson, Roman. . Quest for the essence of language. Diogenes . – [reprinted in Selected writings, The Hague: Mouton, , vol. , –]. Jakobson, Roman. . Selected writings II: Word and language. The Hague: Mouton. Jakobson, Roman. . On language. Cambridge, MA: Harvard University Press. Janda, Richard D. . ‘Morphemes’ aren’t something that grows on trees: Morphology as more the phonology than the syntax of words. In John F. Richardson, Mitchell Marks, & Amy Chukerman (eds.), Papers from the parasession on the interplay of phonology, morphology, and syntax, –. Chicago: Chicago Linguistic Society. Janse, Mark. . Animacy, definiteness and case in Cappadocian and other Asia Minor dialects. Journal of Greek Linguistics . –. Janse, Mark. Forthcoming. Cappadocian. In Christos Tzitzilis (ed.), The Greek language and its dialects. Thessaloniki: Institute of Modern Greek Studies. Janssen, Niels, Yanchao Bi, & Alfonso Caramazza. . 
A tale of two frequencies: Determining the speed of lexical access in Mandarin Chinese and English compounds. Language and Cognitive Processes . –. Jarema, Gonia, Céline Busson, Rossitza Nikolova, Kyrana Tsapkini, & Gary Libben. . Processing compounds: A cross-linguistic study. Brain and Language . –. Jarema, Gonia & Gary Libben (eds.). . The mental lexicon: Core perspectives. Amsterdam: Elsevier. Jelinek, Frederick. . Statistical methods for speech recognition. Cambridge, MA: MIT Press. Jerde, Thomas E., John F. Soechting, & Martha Flanders. . Coarticulation in fluent fingerspelling. The Journal of Neuroscience (). –. Jescheniak, Jörg D., Herbert Schriefers, Merrill F. Garrett, & Angela D. Friederici. . Exploring the activation of semantic and phonological codes during speech planning with event-related brain potentials. Journal of Cognitive Neuroscience . –. Ji, Hongbo, Christina L. Gagné, & Thomas L. Spalding. . Benefits and costs of lexical decomposition and semantic integration during the processing of transparent and opaque English compounds. Journal of Memory and Language . –. Joanisse, Marc F. & Mark S. Seidenberg. . Imaging the past: neural activation in frontal and temporal regions during regular and irregular past-tense processing. Cognitive, Affective and Behavioral Neuroscience . –. Job, Remo & Giuseppe Sartori. . Morphological decomposition: Evidence from crossed phonological dyslexia. The Quarterly Journal of Experimental Psychology . –. Johanson, Lars & Martine Robbeets (eds.). . Copies versus Cognates in Bound Morphology. Leiden: Brill. Johnson, C. Douglas. . Formal aspects of phonological description. The Hague: Mouton.






Johnston, Trevor A. . Formational and functional characteristics of pointing signs in a corpus of Auslan (Australian Sign Language): Are the data sufficient to posit a grammatical class of ‘pronouns’ in Auslan?, Corpus Linguistics and Linguistic Theory (). –. Johnston, Trevor A. & Adam Schembri. . On defining lexeme in a signed language. Sign Language and Linguistics (). –. Jong, Nivja H. de, Laurie B. Feldman, Robert Schreuder, Matthew Pastizzo, & Harald Baayen. . The processing and representation of Dutch and English compounds: Peripheral morphological and central orthographic effects. Brain and Language . –. Joos, Martin (ed.). . Readings in linguistics I: The development of descriptive linguistics in America –, th edn. Chicago: The University of Chicago Press. Joseph, Brian D. . Diachronic morphology. In Andrew Spencer & Arnold M. Zwicky (eds.), The handbook of morphology, –. Oxford: Blackwell. Joseph, Brian D. . A localistic approach to universals and variation. In Peter Siemund (ed.), Linguistic universals and language variation, –. Berlin: De Gruyter Mouton. Joseph, John E. . Saussure. Oxford: Oxford University Press. Joshi, Aravind K. . An introduction to Tree-Adjoining Grammars. In Alexis Manaster-Ramer (ed.), Mathematics of language, –. Amsterdam/Philadelphia: John Benjamins. Juhasz, Barbara J. . The processing of compound words in English: Effects of word length on eye movements during reading. Language and Cognitive Processes . –. Juhasz, Barbara J., Matthew S. Starr, Albrecht W. Inhoff, & Lars Placke. . The effects of morphology on the processing of compound words: Evidence from naming, lexical decisions and eye fixations. British Journal of Psychology . –. Julien, Marit. . Syntactic word formation in Northern Sámi. Tromsø: Novus Press. Julien, Marit. . Syntactic heads and word formation. Oxford: Oxford University Press. Julien, Marit. . Word. 
In Keith Brown (ed.), Encyclopedia of language and linguistics, nd edn, –. Oxford: Elsevier. Juola, Patrick. . Measuring linguistic complexity: The morphological tier. Journal of Quantitative Linguistics . –. Justus, Timothy, Jary Larsen, Paul de Mornay Davies, & Diane Swick. . Interpreting dissociations between regular and irregular past-tense morphology: Evidence from event-related potentials. Cognitive, Affective, & Behavioral Neuroscience . –. Kaan, Edith, Anthony Harris, Edward Gibson, & Phillip Holcomb. . The P as an index of syntactic integration difficulty. Language and Cognitive Processes . –. Kaczer, Laura, Kalinka Timmer, Luz Bavassi, & Niels O. Schiller. . Long-lag priming effects of novel and existing compounds on naming familiar objects reflect memory consolidation processes: a combined behavioral and ERP study. Brain Research . –. Kager, René. . On foot templates and root templates. In Marcel den Dikken & Kees Hengeveld (eds.), Linguistics in the Netherlands , –. Amsterdam/Philadelphia: John Benjamins. Kager, René. . Optimality Theory. Cambridge: Cambridge University Press. Kager, René, Harry van der Hulst, & Wim Zonneveld (eds.). . The Prosody–Morphology Interface. Cambridge: Cambridge University Press. Kaisse, Ellen. . Word-formation and phonology. In Pavol Štekauer & Rochelle Lieber (eds.), Handbook of word-formation. –. Dordrecht: Springer. Kaisse, Ellen & Sharon Hargus. . Introduction. In Sharon Hargus & Ellen Kaisse (eds.), Phonetics and Phonology vol. : Studies in Lexical Phonology, –. San Diego, CA: Academic Press. Kan, Irene P. & Sharon L. Thompson-Schill. . Effect of name agreement on prefrontal activity during overt and covert picture naming. Cognitive, affective and behavioral neuroscience . –. Kapatsinski, Vsevolod. . Conspiring to mean: Experimental and computational evidence for a usage-based harmonic approach to morphophonology. 
Language (). –. Kaplan, Ronald M. & Joan Bresnan. . Lexical Functional Grammar: A formal system for grammatical representation. In Joan Bresnan (ed.), The mental representation of grammatical relations, –. Cambridge, MA: MIT Press.






Kaplan, Ronald & Miriam Butt. . The morphology–syntax interface in LFG. Paper presented at the LFG Conference, Athens, Greece.
Kaplan, Ronald M. & Martin Kay. . Phonological rules and finite-state transducers. Paper presented to the Winter Meeting of the Linguistic Society of America, New York.
Kaplan, Ronald M. & Martin Kay. . Regular models of phonological rule systems. Computational Linguistics (). –.
Karanastasis, Anastasios. . Grammatiki ton Ellinikon Idiomaton tis Kato Italias [Grammar of the Greek Dialects of South Italy]. Athens: Academy of Athens.
Karatsareas, Petros. . The loss of grammatical gender in Cappadocian Greek. Transactions of the Philological Society (). –.
Karatsareas, Petros. . A study of Cappadocian Greek nominal morphology from a diachronic and dialectological perspective. Cambridge: University of Cambridge PhD dissertation.
Karttunen, Lauri. . Computing with Realizational Morphology. In Alexander Gelbukh (ed.), Computational linguistics and intelligent text processing: Proceedings of the th International Conference CICLing , –. Berlin: Springer.
Karttunen, Lauri, Ronald M. Kaplan, & Annie Zaenen. . Two-level morphology with composition. Proceedings of the th conference on Computational Linguistics, volume , –. Association for Computational Linguistics.
Kastovsky, Dieter. . Word-formation: A functional view. Folia Linguistica . –.
Kathol, Andreas. . Linearization-based German syntax. Columbus: Ohio State University PhD dissertation.
Kathol, Andreas. . Agreement and the syntax–morphology interface in HPSG. In Robert Levine & Georgia Green (eds.), Studies in contemporary phrase structure grammar, –. Cambridge: Cambridge University Press.
Kathol, Andreas. . Linear Syntax. Oxford: Oxford University Press.
Kathol, Andreas & Carl J. Pollard. . Extraposition via complex domain formation. Proceedings of the rd Annual Meeting of the ACL, –. Association for Computational Linguistics.
Kathol, Andreas, Adam Przepiórkowski, & Jesse Tseng. . Advanced topics in Head-Driven Phrase Structure Grammar. In Robert D. Borsley & Kersti Börjars (eds.), Non-transformational syntax: Formal and explicit models of grammar, –. Oxford: Wiley-Blackwell.
Katz, Jerrold J. & Paul M. Postal. . An integrated theory of linguistic descriptions. Cambridge, MA: MIT Press.
Kay, Paul. . Words and the grammar of context. Stanford, CA: CSLI.
Kay, Paul. . An informal sketch of a formal architecture for Construction Grammar. Grammars . –.
Kay, Paul & Charles J. Fillmore. . Grammatical Constructions and linguistic generalizations: The What’s X doing Y? construction. Language (). –.
Kayne, Richard S. . Movement and silence. New York: Oxford University Press.
Keane, Jon, Diane Brentari, & Jason Riggle. . Coarticulation in ASL fingerspelling. In Stefan Keine & Shayne Sloggett (eds.), Proceedings of the North East Linguistic Society (NELS) , vol. , –.
Kel’makov, Valentin & Sara Hännikäinen. . Udmurtin kielioppia ja harjoituksia. Helsinki: Suomalais-Ugrilainen Seura.
Keller, Jörg. . Aspekte der Raumnutzung in der Deutschen Gebärdensprache. Hamburg: Signum.
Keller, Rudi. . Sprachwandel. (UTB für Wissenschaft, Uni-Taschenbücher ). Tübingen: Francke Verlag.
Kerge, Krista. . The Estonian agent nouns: Grammar versus lexicon. Sprachtypologie und Universalienforschung (). –.
Keuleers, Emmanuel & Walter Daelemans. . Memory-based learning models of inflectional morphology: A methodological case study. Lingue e Linguaggio VI(). –. Special issue on Psycho-computational issues in morphology and processing, edited by Vito Pirrelli.






Keuleers, Emmanuel, Dominiek Sandra, Walter Daelemans, Steven Gillis, Gert Durieux, & Evelyn Martens. . Dutch plural inflection: The exception that proves the analogy. Cognitive Psychology (). –.
Kibort, Anna & Greville G. Corbett. . Grammatical features inventory. University of Surrey. DOI: ./SMG./.
Kielar, Aneta & Marc F. Joanisse. . The role of semantic and phonological factors in word recognition: An ERP cross-modal priming study of derivational morphology. Neuropsychologia . –.
Kihm, Alain. . Kriyol syntax: The Portuguese-based creole language of Guinea-Bissau. Amsterdam/Philadelphia: John Benjamins.
Kihm, Alain. . Inflectional categories in creole languages. In Ingo Plag (ed.), Phonology and morphology in creole languages, –. Tübingen: Niemeyer.
Kilani-Schoch, Marianne. . Introduction à la morphologie naturelle. Berne: Peter Lang.
Kilani-Schoch, Marianne & Wolfgang U. Dressler. . Morphologie naturelle et flexion du verbe français. Tübingen: Gunter Narr.
King, Tracy Holloway. . Configuring topic and focus in Russian. Stanford, CA: CSLI.
Kinoshita, Sachiko & Dennis Norris. . Letter order is not coded by open bigrams. Journal of Memory and Language (). –.
Kiparsky, Paul. . ‘Elsewhere’ in phonology. In Stephen R. Anderson & Paul Kiparsky (eds.), A festschrift for Morris Halle, –. New York: Harper & Row.
Kiparsky, Paul. a. From cyclic phonology to lexical phonology. In Harry van der Hulst & Norval Smith (eds.), The structure of phonological representations (Part I), –. Dordrecht: Foris.
Kiparsky, Paul. b. Lexical morphology and phonology. In In-Seok Yang (ed.), Linguistics in the morning calm, –. Seoul: Hanshin Publishing.
Kiparsky, Paul. . Word formation and the lexicon. Proceedings of the  Mid-America Linguistics Conference, –.
Kiparsky, Paul. . Some consequences of lexical phonology. Phonology Yearbook . –.
Klamer, Marian. . Spelling out clitics in Kambera. Linguistics . –.
Klamer, Marian. . A grammar of Kambera. Berlin: De Gruyter Mouton.
Klein, Wolfgang & Clive Perdue. . The basic variety (or: Couldn’t natural languages be much simpler?). Second Language Research . –.
Klima, Edward S. & Ursula Bellugi. . The signs of language. Cambridge, MA: Harvard University Press.
Koda, Keiko. . Cross-linguistic variations in L morphological awareness. Applied Psycholinguistics . –.
Koefoed, Geert & Jaap van Marle. . Fundamental concepts. In Geert Booij, Christian Lehmann, Joachim Mugdan, & Stavros Skopeteas (eds.), Morphologie/Morphology. Ein internationales Handbuch zur Flexion und Wortbildung/An international handbook on inflection and word-formation, vol. , –. Berlin: De Gruyter Mouton.
Koenig, Jean-Pierre. . Lexical relations. Stanford, CA: CSLI.
Koenig, Jean-Pierre & Daniel Jurafsky. . Type underspecification and on-line type construction in the lexicon. In Raul Aranovich, William Byrne, Susanne Preuss, & Martha Senturia (eds.), Proceedings of WCCFL , –. Stanford, CA: CSLI.
Koester, Dirk, Thomas C. Gunter, & Susanne Wagner. . The morphosyntactic decomposition and semantic composition of German compound words investigated by ERPs. Brain and Language . –.
Koester, Dirk, Thomas C. Gunter, Susanne Wagner, & Angela D. Friederici. . Morphosyntax, prosody, and linking elements: The auditory processing of German nominal compounds. Journal of Cognitive Neuroscience . –.
Koester, Dirk & Niels O. Schiller. . Morphological priming in overt language production: Electrophysiological evidence from Dutch. NeuroImage . –.
Koester, Dirk & Niels O. Schiller. . The functional neuroanatomy of morphology in language production. NeuroImage . –.






Kohonen, Teuvo. . Self-organizing maps. Berlin: Springer.
Kooij, Els van der. . Phonological categories in Sign Language of the Netherlands: The role of phonetic implementation and iconicity. Utrecht: LOT.
Koopman, Hilda. . The syntax of verbs. Dordrecht: Foris.
Köpcke, Klaus-Michael. . Schemas in German plural formation. Lingua . –.
Koptjevskaja-Tamm, Maria. . A Mozart sonata and the Palme murder: The structure and uses of proper-name compounds in Swedish. In Kersti Börjars, David Denison, & Alan K. Scott (eds.), Morphosyntactic categories and the expression of possession, –. Amsterdam/Philadelphia: John Benjamins.
Korotkova, Natalia & Yury Lander. . Deriving suffix ordering in polysynthesis: Evidence from Adyghe. Morphology . –.
Koskenniemi, Kimmo. . Two-level Morphology: A general computational model for word-form recognition and production. Publication , Department of General Linguistics, University of Helsinki.
Kossmann, Maarten. . Contact-induced change. In Matthew Baerman (ed.), The Oxford handbook of inflection, –. Oxford, New York: Oxford University Press.
Kostić, Aleksandar, Tanja Marković, & Aleksandar Baucal. . Inflectional morphology and word meaning: Orthogonal or co-implicative domains? In Harald Baayen & Robert Schreuder (eds.), Morphological structure in language processing, –. Berlin: De Gruyter Mouton.
Kourbetis, Vassilis & Robert J. Hoffmeister. . Name signs in Greek Sign Language. American Annals of the Deaf (). –.
Koutsoukos, Nikos & Maria Pavlakou. . A construction morphology account of agent nouns in Modern Greek. Patras Working Papers in Linguistics . –.
Kouwenberg, Silvia. . A grammar of Berbice Dutch Creole. Berlin: De Gruyter Mouton.
Kouwenberg, Silvia. . Twice as meaningful: Reduplication in pidgins, creoles and other contact languages. London: Battlebridge.
Kouwenberg, Silvia. . Early morphology in Berbice Dutch and source language access in creolisation. Word Structure (). –.
Kouwenberg, Silvia & Darlene LaCharité. . The meanings of ‘more of the same’: Iconicity in reduplication and the evidence for substrate transfer in the genesis of Caribbean Creole languages. In Silvia Kouwenberg (ed.), Twice as meaningful: Reduplication in pidgins, creoles and other contact languages, –. London: Battlebridge.
Kouwenberg, Silvia & Darlene LaCharité. . Echoes of Africa: Reduplication in Caribbean Creole and Niger-Congo languages. Journal of Pidgin and Creole Languages . –.
Kouwenberg, Silvia & Darlene LaCharité. . The typology of Caribbean creole reduplication. Journal of Pidgin and Creole Languages (). –.
Koziol, Herbert. . Handbuch der englischen Wortbildungslehre. Zweite, neubearbeitete Auflage. (Germanische Bibliothek. Erste Reihe: Sprachwissenschaftliche Lehr- und Elementarbücher). Heidelberg: Carl Winter Universitätsverlag.
Krashen, Stephen. . Some issues relating to the Monitor Model. In H. Douglas Brown, Carlos Alfredo Yorio, & Ruth H. Crymes (eds.), On TESOL’, –. Washington, DC: TESOL.
Kratzer, Angelika. . Severing the external argument from the verb. In Johan Rooryck & Laurie Zaring (eds.), Phrase structure and the lexicon, –. Dordrecht: Kluwer.
Krieger, Hans-Ulrich. . Derivation without lexical rules. Research Report RR--, DFKI Saarbrücken.
Krieger, Hans-Ulrich & John Nerbonne. . Feature-based inheritance networks for computational lexicons. Research Report RR--, DFKI Saarbrücken.
Krieger, Hans-Ulrich & John Nerbonne. . Feature-based inheritance networks for computational lexicons. In Ted Briscoe, Valeria de Paiva, & Ann Copestake (eds.), Inheritance, defaults and the lexicon, –. Cambridge: Cambridge University Press.
Kroeger, Paul. . Analyzing grammar: An introduction. Cambridge: Cambridge University Press.
Ktejik, Mish. . Numeral incorporation in Japanese Sign Language. Sign Language Studies (). –.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





Kubuş, Okan. . An analysis of Turkish Sign Language (TİD) phonology and morphology. Ankara: Middle East Technical University MA thesis. Kuperman, Victor. . Accentuate the positive: Semantic access in English compounds. Frontiers in Psychology . . Kuperman, Victor, Raymond Bertram, & Harald Baayen. . Morphological dynamics in compound processing. Language and Cognitive Processes . –. Kuperman, Victor, Raymond Bertram, & Harald Baayen. . Processing trade-offs in the reading of Dutch derived words. Journal of Memory and Language . –. Kuperman, Victor, Robert Schreuder, Raymond Bertram, & Harald Baayen. . Reading polymorphemic Dutch compounds: Toward a multiple route model of lexical processing. Journal of Experimental Psychology: Human Perception and Performance (). –. Kuperman, Victor & Julie Van Dyke. . Individual differences in visual comprehension of morphological complexity. In Laura Carlson, Christoph Hoelscher, & Thomas F. Shipley (eds.), Proceedings of the rd Annual Meeting of the Cognitive Science Society, –. Austin, TX: Cognitive Science Society. Kuperman, Victor & Julie Van Dyke. . Reassessing word frequency as a determinant of word recognition for skilled and unskilled readers. Journal of Experimental Psychology: Human Perception and Performance (). –. Kupść, Anna. . An HPSG grammar of Polish clitics. Warszawa: Polish Academy of Sciences and Université Paris  PhD dissertation. Kupść, Anna & Jesse Tseng. . A new HPSG approach to Polish auxiliary constructions. In Stefan Müller (ed.), Proceedings of the th International Conference on Head-Driven Phrase Structure Grammar, –. Stanford, CA: CSLI. Kuryłowicz, Jerzy. . La nature des procès dits ‘analogiques’. Acta Linguistica . –. Kutas, Marta & Steven A. Hillyard. . Reading senseless sentences: brain potentials reflect semantic incongruity. Science . –. Kutas, Marta & Steven A. Hillyard. . 
Brain potentials during reading reflect word expectancy and semantic association. Nature . –. Kwon, Nahyun. . Total reduplication in Japanese ideophones: An exercise in Localized Canonical Typology. Glossa: A Journal of General Linguistics (). Art. . –. Kwon, Nahyun & Erich R. Round. . Phonaesthemes in morphological theory. Morphology . –. Labov, William. . Language in the inner city: Studies in the Black English Vernacular. Philadelphia: University of Pennsylvania Press. Laca, Brenda. . Derivation. In Martin Haspelmath, Ekkehard König, Wulf Oesterreicher, & Wolfgang Raible (eds.), Language typology and language universals. An international handbook, vol. , –. Berlin: De Gruyter Mouton. Lakatos, Imre. . Falsification and the methodology of scientific research programmes. In Imre Lakatos & Alan Musgrave (eds.), Criticism and the growth of knowledge, –. Cambridge: Cambridge University Press. Lakoff, George. . Irregularity in syntax. New York: Holt, Rinehart & Winston. Lakoff, George. . On generative semantics. In Danny D. Steinberg & Leon A. Jakobovits (eds.), Semantics: An interdisciplinary reader in philosophy, linguistics and psychology, –. Cambridge: Cambridge University Press. Lakoff, George. . Women, fire, and dangerous things: What categories reveal about the mind. Chicago/London: The University of Chicago Press. Lamb, Sydney. . Outline of Stratificational Grammar. Washington, DC: Georgetown University Press. Lander, Yury. . Word formation in Adyghe. In Peter O. Müller, Ingeborg Ohnheiser, Susan Olsen, & Franz Rainer (eds.), Word-formation. An international handbook of the languages of Europe, vol. , –. Berlin: De Gruyter Mouton.

Lander, Yury. . Nominal complex in West Circassian: Between morphology and syntax. Studies in Language (). –. Lander, Yury & Alexander Letuchiy. . Decreasing valency-changing operations in a valencyincreasing language? In Albert Álvarez González & Ia Navarro (eds.), Verb valency changes: theoretical and typological perspectives, –. Amsterdam/Philadelphia: John Benjamins. Lang, Jürgen. . Das Verbalsystem des kapverdischen Kreols (Variante von Santiago). In Matthias Perl, Axel Schönberger, & Petra Thiele (eds.), Portugiesisch-basierte Kreolsprachen, –. Frankfurt: TFM. Langacker, Ronald W. . Foundations of Cognitive Grammar, vol. : Theoretical prerequisites. Stanford, CA: Stanford University Press. Langacker, Ronald W. . Concept, image, and symbol: The cognitive basis of grammar. Berlin: De Gruyter Mouton. Langacker, Ronald W. . Foundations of Cognitive Grammar, vol. : Descriptive application. Stanford, CA: Stanford University Press. Langacker, Ronald W. . Grammar and conceptualization. Berlin: De Gruyter Mouton. Langacker, Ronald W. . Construction grammars: Cognitive, radical, and less so. In Francisco J. Ruiz de Mendoza Ibáñez & M. Sandra Peña Cervel (eds.), Cognitive linguistics: Internal dynamics and interdisciplinary interaction, –. Berlin: De Gruyter Mouton. Langacker, Ronald W. . Cognitive grammar: A basic introduction. New York: Oxford University Press. Langacker, Ronald W. a. Investigations in Cognitive Grammar. Berlin: De Gruyter Mouton. Langacker, Ronald W. b. Constructions and constructional meaning. In Vyvyan Evans & Stéphanie Pourcel (eds.), New directions in cognitive linguistics, –. Amsterdam/Philadelphia: John Benjamins. Langacker, Ronald W. c. A dynamic view of usage and language acquisition. Cognitive Linguistics . –. Langacker, Ronald W. . How not to disagree: The emergence of structure from usage. 
In Kasper Boye & Elisabeth Engberg-Pedersen (eds.), Language usage and language structure, –. Berlin: De Gruyter Mouton. Langacker, Ronald W. . Metaphor in linguistic thought and theory. Cognitive Semantics (). –. Lapointe, Steven. . A theory of grammatical agreement. Amherst: University of Massachusetts Amherst PhD dissertation. Lapointe, Steven. . Constraints on autolexical analyses. Linguistic Analysis . –. Lappe, Sabine. . English Prosodic Morphology. Dordrecht: Springer. Lardiere, Donna. . Dissociating syntax from morphology in a divergent L end-state grammar. Second Language Research (). –. Lardiere, Donna. . Ultimate attainment in Second Language Acquisition: A case study. Mahwah, NJ: Erlbaum. Larsen-Freeman, Diane. . The acquisition of grammatical morphemes by adult ESL students. TESOL Quarterly (). –. Larsen-Freeman, Diane & Michael H. Long. . An introduction to second language acquisition research. New York: Longman. Lasnik, Howard. . Verbal morphology: Syntactic structures meets the Minimalist program. In Hector Campos & Paula Kempchinsky (eds.), Evolution and revolution in linguistic theory: Essays in honor of Carlos Otero, –. Washington, DC: Georgetown University Press. Lasnik, Howard & Terje Lohndal. . Government-binding/principles and parameters theory. Wiley Interdisciplinary Reviews: Cognitive Science . –. Lasnik, Howard & Terje Lohndal. . Brief overview of the history of generative grammar. In Marcel den Dikken (ed.), The Cambridge handbook of generative syntax, –. Cambridge: Cambridge University Press.

Laudanna, Alessandro & Cristina Burani. . Address mechanisms to decomposed lexical entries. Linguistics . –. Laudanna, Alessandro & Cristina Burani. . Distributional properties of derivational affixes: Implications for processing. In Laurie B. Feldman (ed.), Morphological aspects of language processing, –. Hillsdale, NJ: Erlbaum. Lauer, Mark. . Designing statistical language learners: Experiments on noun compounds. Sydney: Macquarie University PhD dissertation. Lavric, Aureliu, Heike Elchlepp, & Kathleen Rastle. . Tracking hierarchical processing in morphological decomposition with brain potentials. Journal of Experimental Psychology: Human Perception & Performance . –. Lees, Robert. . The grammar of English nominalizations. Bloomington: Indiana University Press and The Hague: Mouton. Lees, Robert. . Problems in the grammatical analysis of English nominal compounds. In Manfred Bierwisch & Karl Erich Heidolph (eds.), Progress in linguistics, –. The Hague: Mouton. Lefebvre, Claire. . Creole genesis and the acquisition of grammar: The case of Haitian Creole. Cambridge: Cambridge University Press. Lefebvre, Claire. . The emergence of productive morphology in creole languages: The case of Haitian Creole. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. Léglise, Isabelle & Claudine Chamoreau. . Variation and change in contact settings. In Isabelle Léglise & Claudine Chamoreau (eds.), The interplay of variation and change in contact settings, –. Amsterdam/Philadelphia: John Benjamins. Lehmann, Christian. . Grammatikalisierung und Lexikalisierung. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung (). –. Lehtonen, Minna, Toni Cunillera, Antoni Rodriguez-Fornells, Annika Hulten, Jyrki Tuomainen, & Matti Laine. . Recognition of morphologically complex words in Finnish: Evidence from event-related potentials. Brain Research . 
–. Lehtonen, Minna, Philip Monahan, & David Poeppel. . Evidence for early morphological decomposition: Combining masked priming with magnetoencephalography. Journal of Cognitive Neuroscience . –. Lemhöfer, Kristin, Dirk Koester, & Robert Schreuder. . When bicycle pump is harder to read than bicycle bell: Effects of parsing cues in first and second language compound reading. Psychonomic Bulletin & Review (). –. Leminen, Alina, Miika Leminen, Minna Lehtonen, Päivi Nevalainen, Sari Ylinen, Lilli Kimppa, Christian Sannemann, Jyrki P. Mäkelä, & Teija Kujala. . Spatiotemporal dynamics of the processing of spoken inflected and derived words: a combined EEG and MEG study. Frontiers in Human Neuroscience . . Lenci, Alessandro. . Distributional semantics in linguistic and cognitive research. Rivista di Linguistica (). –. Lensink, Saskia E., Rinus G. Verdonschot, & Niels O. Schiller. . Morphological priming during language switching: an ERP study. Frontiers in Human Neuroscience . . Léonard, Jean-Léo & Alain Kihm. . Verb inflection in Chiquihuitlán Mazatec: A fragment and a PFM approach. In Stefan Müller (ed.), Proceedings of the th International Conference on Head-Driven Phrase Structure Grammar, Université Paris Diderot, Paris , France, –. Stanford, CA: CSLI. Leonard, Laurence. . Children with Specific Language Impairment. Cambridge, MA: MIT Press. Leonard, Laurence, Jennifer Davis, & Patricia Deevy. . Phonotactic probability and past tense use by children with specific language impairment and their typically developing peers. Clinical Linguistics & Phonetics . –. Lepic, Ryan. . Lexical blends and lexical patterns in English and in American Sign Language. In Jenny Audring, Francesca Masini, & Wendy Sandler (eds.), Quo vadis morphology?—MMM Online Proceedings, –. Universities of Leiden, Bologna, and Haifa.

Lepic, Ryan, Carl Börstell, Gal Belsitzman, & Wendy Sandler. . Taking meaning in hand: Iconic motivations in two-handed signs. Sign Language & Linguistics (). –. Lepschy, Giulio C. . A survey of structural linguistics. London: Faber and Faber. LeSourd, Phil. . On the analytic expression of predicates in Meskwaki. In Donna B. Gerdts, John Moore, & Maria Polinsky (eds.), Hypothesis A/Hypothesis B: Linguistic explorations in honor of David M. Perlmutter, –. Cambridge, MA: MIT Press. Levelt, Willem J. M., Ardi Roelofs, & Antje S. Meyer. . A theory of lexical access in speech production. Behavioral and Brain Sciences . –. Levi, Judith N. . The syntax and semantics of complex nominals. New York: Academic Press. Levin, Beth & Malka Rappaport Hovav. . Unaccusativity: At the syntax–lexical semantics interface. Cambridge, MA: MIT Press. Levy, Yonata. . Other children, other languages: Issues in theory of language acquisition. Hillsdale, NJ: Erlbaum. Li, Yafei. . X-zero: A theory of the morphology–syntax interface. Cambridge, MA: MIT Press. Libben, Gary. . Representation and processing in the second language lexicon: The homogeneity hypothesis. In John A. Archibald (ed.), Second language acquisition and linguistic theory, –. Boston: Blackwell. Libben, Gary. . Everything is psycholinguistics: Material and methodological considerations in the study of compound processing. Canadian Journal of Linguistics . –. Libben, Gary. . Why study compound processing? An overview of the issues. In Gary Libben & Gonia Jarema (eds.), The representation and processing of compound words, –. Oxford: Oxford University Press. Libben, Gary. . Compound words, semantic transparency, and morphological transcendence. Linguistische Berichte, Sonderheft . –. Libben, Gary. . Morphological assessment in bilingual aphasia: Compounding and the language nexus. In Martin R. Gitterman, Mira Goral, & Loraine K. 
Obler (eds.), Aspects of multilingual aphasia, –. Bristol: Multilingual Matters. Libben, Gary, Martha Gibson, Yeo Bom Yoon, & Dominiek Sandra. . Compound fracture: The role of semantic transparency and morphological headedness. Brain and Language (). –. Libben, Gary & Mira Goral. . How bilingualism shapes the mental lexicon. In John Schweiter (ed.), Cambridge handbook of bilingual processing, –. Cambridge: Cambridge University Press. Libben, Gary & Silke Weber. . Semantic transparency, compounding, and the nature of independent variables. In Franz Rainer, Wolfgang U. Dressler, Francesco Gardani, & Hans Christian Luschützky (eds.), Morphology and meaning, –. Amsterdam/Philadelphia: John Benjamins. Libben, Gary, Chris Westbury, & Gonia Jarema. . The challenge of embracing complexity. In Gary Libben, Gonia Jarema, & Chris Westbury (eds.), Methodological and analytic frontiers in lexical research, –. Amsterdam/Philadelphia: John Benjamins. Liceras, Juana M. & Lourdes Diaz. . Triggers in L acquisition: The case of Spanish N–N compounds. Studia Linguistica . –. Liddell, Scott K. . American Sign Language syntax. The Hague: Mouton. Liddell, Scott K. . Numeral incorporating roots & non-incorporating prefixes in American Sign Language. Sign Language Studies . –. Liddell, Scott K. . Indicating verbs and pronouns: Pointing away from agreement. In Karen Emmorey & Harlan Lane (eds.), The signs of language revisited: An anthology to honor Ursula Bellugi and Edward Klima, –. Mahwah, NJ: Erlbaum. Liddell, Scott K. . Grammar, gesture, and meaning in American Sign Language. Cambridge: Cambridge University Press. Liddell, Scott K. & Robert E. Johnson. . American Sign Language compound formation processes, lexicalization and phonological remnants. Natural Language and Linguistic Theory . –. Liddell, Scott K. & Robert E. Johnson. . American Sign Language: The phonological base. 
Sign Language Studies . –. Lieber, Rochelle. . On the organization of the lexicon. Cambridge, MA: MIT PhD dissertation.

Lieber, Rochelle. . On the organization of the lexicon. Bloomington: Indiana University Linguistics Club. Lieber, Rochelle. . Argument linking and compounds in English. Linguistic Inquiry . –. Lieber, Rochelle. . Consonant gradation in Fula: An autosegmental approach. In Mark Aronoff & Richard Oehrle (eds.), Language sound structure, –. Cambridge, MA: MIT Press. Lieber, Rochelle. . An integrated theory of autosegmental processes. Albany: SUNY Press. Lieber, Rochelle. . Phrasal compounds in English and the morphology–syntax interface. In Diane Brentari, Gary Larson, & Lynn MacLeod (eds.), CLS -II, Papers from the Parasession on Agreement in grammatical theory, –. Chicago: Chicago Linguistic Society. Lieber, Rochelle. . Deconstructing morphology: Word formation in syntactic theory. Chicago: The University of Chicago Press. Lieber, Rochelle. . Morphology and lexical semantics. Cambridge: Cambridge University Press. Lieber, Rochelle. . The category of roots and the roots of categories: What we learn from selection in derivation. Morphology . –. Lieber, Rochelle. a. A lexical semantic approach to compounding. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of compounding, –. Oxford: Oxford University Press. Lieber, Rochelle. b. IE, Germanic: English. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of compounding, –. Oxford: Oxford University Press. Lieber, Rochelle. . On the lexical semantics of compounds: Non-affixal (de)verbal compounds. In Sergio Scalise & Irene Vogel (eds.), Cross-disciplinary issues in compounding, –. Amsterdam/Philadelphia: John Benjamins. Lieber, Rochelle. a. Theoretical approaches to derivation. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of derivational morphology, –. Oxford: Oxford University Press. Lieber, Rochelle. b. Methodological issues in studying derivation. 
In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of derivational morphology, –. Oxford: Oxford University Press. Lieber, Rochelle. . Word-formation in generative grammar. In Peter O. Müller, Ingeborg Ohnheiser, Susan Olsen, & Franz Rainer (eds.), Word-formation. An international handbook of the languages of Europe, vol. , –. Berlin: De Gruyter Mouton. Lieber, Rochelle & Sergio Scalise. . The Lexical Integrity Hypothesis in a new theoretical universe. Lingue e Linguaggio (I). –. Lieber, Rochelle & Pavol Štekauer. a. Introduction: Status and definition of compounding. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of compounding, –. Oxford: Oxford University Press. Lieber, Rochelle & Pavol Štekauer (eds.). b. The Oxford handbook of compounding. Oxford: Oxford University Press. Lieber, Rochelle & Pavol Štekauer (eds.). . The Oxford handbook of derivational morphology. Oxford: Oxford University Press. Lieberman, Moti. . The moderation of L morphological comprehension by transferred prosodic structures. Paper presented at New Sounds , Concordia University, Montreal, QC, May . Lieven, Elena. . Input and first language acquisition: Evaluating the role of frequency. Lingua . –. Lightbown, Patsy M. & Nina Spada. . Focus-on-form and corrective feedback in communicative language teaching: effects on second language learning. Studies in Second Language Acquisition (). –. Lillo-Martin, Diane & Richard P. Meier. . On the linguistic status of ‘agreement’ in sign languages. Theoretical Linguistics (–). –. Lima, Susan D. & Alexander Pollatsek. . Lexical access via an orthographic code? The basic orthographic syllabic structure (BOSS) reconsidered. Journal of Verbal Learning and Verbal Behavior . –.

Lin, Yuh-Huey. . Syllable simplification strategies: A stylistic perspective. Language Learning (). –. Lindsay, Mark & Mark Aronoff. . Natural selection in self-organizing morphological systems. In Nabil Hathout, Fabio Montermini, & Jesse Tseng (eds.), Selected Proceedings of the th Décembrettes: Morphology in Toulouse, –. München: LINCOM Europa. Longtin, Catherine-Marie & Fanny Meunier. . Morphological decomposition in early visual word processing. Journal of Memory and Language . –. Longtin, Catherine-Marie, Juan Segui, & Pierre A. Hallé. . Morphological priming without morphological relationship. Language and Cognitive Processes . –. Lounsbury, Floyd. . Oneida verb morphology (Yale University Publications in Anthropology ). New Haven: Yale University Press. Lowe, John. . English possessive ’s: Clitic and affix. Natural Language and Linguistic Theory (). –. Łozińska (née Fabisiak), Sylwia & Paweł Rutkowski. . Iconicity in Polish Sign Language. In Teresa Dobrzyńska & Raya Kuncheva (eds.), Vision and cognition in language, literature and culture, –. Sofia: Bulgarian Academy of Sciences. Lucas, Ceil, Robert Bayley, & Clayton Valli. . What’s your sign for pizza?: An introduction to variation in American Sign Language. Washington, DC: Gallaudet University Press. Luís, Ana R. . Clitics as morphology. Colchester: University of Essex PhD dissertation. Luís, Ana R. . Tense marking and inflectional morphology in Indo-Portuguese creoles. In Susanne Michaelis (ed.), Roots of creole structures: Weighing the contribution of substrates and superstrates, –. Amsterdam/Philadelphia: John Benjamins. Luís, Ana R. . The loss and survival of inflectional morphology: Contextual vs. inherent inflection in creoles. In Sonia Colina, Antxon Olarrea, & Ana Carvalho (eds.), Romance Linguistics , –. Amsterdam/Philadelphia: John Benjamins. Luís, Ana R. . 
Morphomic structure and loan-verb integration: Evidence from Lusophone creoles. In Martin Maiden, John Charles Smith, Maria Goldbach, & Marc-Olivier Hinzelin (eds.), Morphological autonomy: Perspectives from Romance inflectional morphology, –. Oxford: Oxford University Press. Luís, Ana R. . The layering of form and meaning in creole word-formation: A view from construction morphology. In Franz Rainer, Francesco Gardani, Hans Christian Luschützky, & Wolfgang U. Dressler (eds.), Morphology and meaning, –. Amsterdam/Philadelphia: John Benjamins. Luís, Ana R. (ed.). . Rethinking creole morphology. Special Issue of Word Structure (). Luís, Ana R. & Ricardo Bermúdez-Otero (eds.). . The morphome debate. Oxford: Oxford University Press. Luís, Ana R. & Ryo Otoguro. . Proclitic contexts in European Portuguese and their effect on clitic placement. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of LFG. Stanford, CA: CSLI. Luís, Ana R. & Ryo Otoguro. . Morphological and syntactic well-formedness: The case of European Portuguese proclitics. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of LFG. Stanford, CA: CSLI. Luís, Ana R. & Ryo Otoguro. . Inflectional morphology and syntax in correspondence: Evidence from European Portuguese. In Alexandra Galani, Glyn Hicks, & George Tsoulas (eds.), Morphology and its Interfaces, –. Amsterdam/Philadelphia: John Benjamins. Luís, Ana R. & Louisa Sadler. . Object clitics and marked morphology. In Claire Beyssade, Olivier Bonami, Patricia Cabredo Hofherr, & Francis Corblin (eds.), Empirical issues in formal syntax and semantics , –. Paris: Presses de l’Université de Paris, Sorbonne. Luís, Ana R. & Andrew Spencer. a. A Paradigm Function account of ‘mesoclisis’ in European Portuguese (EP). In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer.

Luís, Ana R. & Andrew Spencer. b. Udi clitics: A Generalized Paradigm Function Morphology approach. In Ryo Otoguro, Gergana Popova, & Andrew Spencer (eds.), Proceedings of the YorkEssex Morphology Meeting , Essex Research Reports in Linguistics, –. Lukatela, Georgije, Claudia Carello, & Michael T. Turvey. . Lexical representation of regular and irregular inflected nouns. Language and Cognitive Processes . –. Lundquist, Björn. . Nominalizations and participles in Swedish. Tromsø: University of Tromsø PhD dissertation. Luraghi, Silvia. . From non-canonical to canonical agreement. In Hans Amstutz, Andreas Dorn, Matthias Müller, Miriam Ronsdorf, & Sami Uljas (eds.), Fuzzy boundaries. Festschrift für Antonio Loprieno, –. Hamburg: Widmaier. Luschützky, Hans Christian. . Word-formation in natural morphology. In Peter O. Müller, Ingeborg Ohnheiser, Susan Olsen, & Franz Rainer (eds.), Word-formation. An international handbook of the languages of Europe, vol. , –. Berlin: De Gruyter Mouton. Luutonen, Jorma. . The variation of morpheme order in Mari declension. Helsinki: SuomalaisUgrilainen Seura. Luzzatti, Claudio, Klaus Willmes, & Ria De Bleser. . Aachener Aphasie Test: versione italiana, nd edn. Firenze: Organizzazioni Speciali. Lyons, John. . Structural semantics. London: Blackwell. Ma, Wei Ji, Masud Husain, & Paul M. Bays. . Changing concepts of working memory. Nature neuroscience (). –. MacDonald, Jonathan E. . The syntactic nature of inner aspect. A Minimalist perspective. Amsterdam/Philadelphia: John Benjamins. MacFarlane, James & Jill Patterson Morford. . Frequency characteristics of American Sign Language. Sign Language Studies (). –. MacWhinney, Brian. . Rules, rote, and analogy in morphological formations by Hungarian children. Journal of Child Language . –. Magnuson, James S., Daniel Mirman, & Harlan D. Harris. . Computational models of spoken word recognition. 
In Michael Spivey, Ken McRae, & Marc Joanisse (eds.), The Cambridge handbook of psycholinguistics, –. Cambridge: Cambridge University Press. Mah, Jennifer. . Segmental representations in interlanguage grammars: The case of francophones and English /h/. Montreal: McGill University PhD dissertation. Mahowald, Kyle. . An LFG approach to word order freezing. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of LFG , –. Stanford, CA: CSLI. Maiden, Martin. . Morphological autonomy and diachrony. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Springer. Maiden, Martin. . Where does heteroclisis come from? Evidence from Romanian dialects. Morphology . –. Malouf, Robert. . A head-driven account of long-distance case assignment. In Ronnie Cann, Claire Grover, & Philip Miller (eds.), Grammatical interfaces in HPSG, –. Stanford, CA: CSLI. Manelis, Leon & David A. Tharp. . The processing of affixed words. Memory & Cognition . –. Manova, Stela. . Suffix combinations in Bulgarian: parsability and hierarchy-based ordering. Morphology . –. Manova, Stela (ed.). . Affix ordering across languages and frameworks. Oxford: Oxford University Press. Manova, Stela & Mark Aronoff. . Modelling affix order. Morphology . –. Marantz, Alec. . Re reduplication. Linguistic Inquiry (). –. Marantz, Alec. . Clitics, morphological merger, and the mapping to phonological structure. In Michael Hammond & Michael Noonan (eds.), Theoretical morphology: Approaches in modern linguistics, –. San Diego, CA: Academic Press.

Marantz, Alec. . Case and licensing. In Proceedings of the Eastern States Conference on Linguistics, vol. , –. Cambridge, MA: MIT. Marantz, Alec. . No escape from syntax: Don’t try morphological analysis in the privacy of your own lexicon. In Alexis Dimitriadis & Laura Siegel (eds.), Proceedings of the st Annual Penn Linguistics Colloquium. University of Pennsylvania Working Papers in Linguistics (). –. Philadelphia: University of Pennsylvania. Marantz, Alec. . Words. Manuscript, MIT. Marantz, Alec. . Generative linguistics within the cognitive neuroscience of language. The Linguistic Review . –. Marantz, Alec. . Phases and words. In Sook-Hee Choe (ed.), Phases in the theory of grammar, –. Soeul: Dong-In Publishing Co. Marantz, Alec. . No escape from morphemes in morphological processing. Language and Cognitive Processes (). –. Maratsos, Michael P. . More overregularizations after all. Journal of Child Language . –. Marchand, Hans. . The categories and types of present-day English word-formation: A synchronic– diachronic approach. nd edn. München: Beck. Marchman, Virginia & Elizabeth Bates. . Continuity in lexical and morphological development: A test of the Critical Mass Hypothesis. Journal of Child Language . –. Marcus, Gary. . Children’s overregularization of English plurals: a quantitative analysis. Journal of Child Language . –. Marcus, Gary, Ursula Brinkmann, Harald Clahsen, Richard Wiese, & Steven Pinker. . German inflection: The exception that proves the rule. Cognitive Psychology . –. Marcus, Gary, Steven Pinker, Michael Ullman, Michelle Hollander, T. John Rosen, Fei Xu, & Harald Clahsen. . Overregularization in language acquisition. Monographs of the Society for Research in Child Development (). –. Marelli, Marco, Silvia Aggujaro, Franco Molteni, & Claudio Luzzatti. . 
Understanding the mental lexicon through neglect dyslexia: A study on compound noun reading. Neurocase . –. Marle, Jaap van. . On the paradigmatic dimension of morphological creativity. Dordrecht: Foris. Marshall, Chloe & Heather van der Lely. . A challenge to current models of past tense inflection: The impact of phonotactics. Cognition . –. Marshall, Chloe & Heather van der Lely. . The impact of phonological complexity on past tense inflection in children with grammatical-SLI. Advances in Speech Language Pathology . –. Marslen-Wilson, William, Lorraine Komisarjevsky Tyler, Rachelle Waksler, & Lianne Older. . Morphology and meaning in the English mental lexicon. Psychological Review . –. Martin, Jack. . Subtractive morphology as dissociation. Proceedings of the West Coast Conference on Formal Linguistics . –. Martinet, André.  []. Elements of General Linguistics. Translated by Elisabeth Palmer. Chicago: The University of Chicago Press. Marušič, Franc Lanko, Andrew Ira Nevins, & William Badecker. . The grammars of conjunction agreement in Slovenian. Manuscript, University of Nova Gorica, University College London, University of Arizona. Marvin, Tatjana. . English syllabification and schwa-insertion: From the sound pattern of English to the notion of phase. Linguistica XLV. –. Marzi, Claudia, Marcello Ferro, Franco Alberto Cardillo, & Vito Pirrelli. . Effects of frequency and regularity in an integrative model of word storage and processing. Italian Journal of Linguistics (). –. Marzi, Claudia, Marcello Ferro, & Vito Pirrelli. . Morphological structure through lexical parsability. Lingue e Linguaggio XIII(). –. Masini, Francesca. . Multi-word expressions between syntax and the lexicon: The case of Italian verb–particle constructions. SKY Journal of Linguistics . –.

Masini, Francesca. . Phrasal lexemes, compounds and phrases: A constructionist perspective. Word Structure (). –. Masini, Francesca. . Parole sintagmatiche in italiano. Roma: Caissa Italia. Masini, Francesca. Forthcoming. Competition between morphological words and multiword expressions. In Franz Rainer, Francesco Gardani, Wolfgang U. Dressler & Hans Christian Luschützky (eds.), Competition in inflection and wordformation. Cham: Springer. Masini, Francesca & Valentina Benigni. . Phrasal lexemes and shortening strategies in Russian: The case for constructions. Morphology (). –. Masini, Francesca & Claudio Iacobini. . Schemas and discontinuity in Italian: The view from Construction Morphology. In Geert Booij (ed.), The construction of words. Advances in Construction Morphology. Dordrecht: Springer. Maslen, Robert J. C., Anna L. Theakston, Elena V. Lieven, & Michael Tomasello. . A dense corpus study of past tense and plural overregularization in English. Journal of Speech, Language and Hearing Research . –. Mateu, Jaume. . Argument structure: Relational construal at the syntax interface. Barcelona: Universitat Autónoma de Barcelona PhD dissertation. Mathur, Gaurav & Christian Rathmann. . Two types of nonconcatenative morphology in signed languages. In Gaurav Mathur & Donna Jo Napoli (eds.), Deaf around the world: The impact of language, –. Oxford: Oxford University Press. Matras, Yaron. . Language contact. Cambridge: Cambridge University Press. Matthews, Peter H. . The inflectional component of a word-and-paradigm grammar. Journal of Linguistics . –. Matthews, Peter H. . Recent developments in morphology. In John Lyons (ed.), New horizons in linguistics, –. Harmondsworth: Penguin. Matthews, Peter H. . Morphology. Cambridge: Cambridge University Press. Matthews, Peter H. . Inflectional morphology: A theoretical study based on aspects of Latin verb conjugation, nd edn. 
Cambridge: Cambridge University Press. Matthews, Peter H. . Morphology. nd edn. Cambridge: Cambridge University Press. Matthews, Peter H. . Grammatical theory in the United States from Bloomfield to Chomsky. Cambridge: Cambridge University Press. Matthews, Peter H. . A short history of structural linguistics. Cambridge: Cambridge University Press. Mayerthaler, Willi. . Studien zur theoretischen und französischen Morphologie. Tübingen: Niemeyer. Mayerthaler, Willi. . Morphologische Natürlichkeit. Wiesbaden: Athenäum [English translation: Morphological naturalness. Ann Arbor: Karoma Press, ]. Mayerthaler, Willi. . System-independent morphological naturalness. In Wolfgang U. Dressler, Willi Mayerthaler, Oskar Panagl, & Wolfgang U. Wurzel (eds.), Leitmotifs in Natural Morphology, –. Amsterdam/Philadelphia: John Benjamins. Mayfield, James & Paul McNamee. . Single n-gram stemming. In SIGIR ’: Proceedings of the th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, –. New York. McCarthy, John J. . Formal problems in Semitic phonology and morphology. Cambridge, MA: MIT PhD dissertation. McCarthy, John J. . A prosodic theory of non-concatenative morphology. Linguistic Inquiry (). –. McCarthy, John J. . Prosodic structure and expletive infixation. Language . –. McCarthy, John J. . Prosodic organization in morphology. In Mark Aronoff & Richard T. Oehrle (eds.), Language and sound structure, –. Cambridge, MA: MIT Press. McCarthy, John J. . The prosody of phase in Rotuman. Natural Language and Linguistic Theory . –.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





McCarthy, John J. . A thematic guide to Optimality Theory. Cambridge: Cambridge University Press. McCarthy, John J. & Alan S. Prince. . Prosodic morphology . Report no. RuCCS-TR-. New Brunswick, NJ: Rutgers University Center for Cognitive Science. McCarthy, John J. & Alan S. Prince. . Prosodic morphology and templatic morphology. In Mushira Eid & John J. McCarthy (eds.), Perspectives on Arabic Linguistics II: Papers from the Second Annual Symposium on Arabic Linguistics, –. Amsterdam/Philadelphia: John Benjamins. McCarthy, John J. & Alan S. Prince. a. Prosodic morphology: Constraint interaction and satisfaction. Report no. RuCCS-TR-. New Brunswick, NJ: Rutgers University Center for Cognitive Science [Rutgers Optimality Archive #–]. McCarthy, John J. & Alan S. Prince. b. Generalized alignment. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. McCarthy, John J. & Alan S. Prince. a. The emergence of the unmarked: Optimality in Prosodic Morphology. Proceedings of the North East Linguistic Society . –. McCarthy, John J. & Alan S. Prince. b. Two lectures on Prosodic Morphology. Presented at the University of Utrecht,  &  June . (ROA no. –). McCarthy, John J. & Alan S. Prince. a. Faithfulness and reduplicative identity. Papers in Optimality Theory. UMOP . –. McCarthy, John J. & Alan S. Prince. b. Prosodic morphology. In John A. Goldsmith (ed.), The handbook of phonological theory, –. Cambridge, MA: Blackwell. McCarthy, John J. & Alan S. Prince. . Faithfulness and identity in Prosodic Morphology. In René Kager, Harry van der Hulst, & Wim Zonneveld (eds.), The prosody–morphology interface, –. Cambridge: Cambridge University Press. McCawley, James D. . Lexical Insertion in a Transformational Grammar without Deep Structure. Papers from the fourth regional meeting, Chicago Linguistic Society, pp. 
– [reprinted as McCawley , –]. McCawley, James D. . Grammar and meaning: papers on syntactic and semantic topics. New York: Academic Press. McClelland, James L. & David E. Rumelhart. . An interactive activation model of context effects in letter perception: Part . An account of basic findings. Psychological Review . –. McCormick, Samantha F., Kathleen Rastle, & Matthew H. Davis. . Is there a ‘fete’ in ‘fetish’? Effects of orthographic opacity on morpho-orthographic segmentation in visual word recognition. Journal of Memory and Language . –. McDonald, D. B. . Understanding noun compounds. CMU Technical Report CS-–. McKee, David & Graeme D. Kennedy. . The distribution of signs in New Zealand Sign Language. Sign Language Studies (). –. McKinnon, Richard, Mark Allen, & Lee Osterhout. . Morphological decomposition involving non-productive morphemes: ERP evidence. Neuroreport . –. McMahon, April M. S. . Understanding language change. Cambridge: Cambridge University Press. McNamee, Paul & James Mayfield. . N-gram morphemes for retrieval. In Working Notes for the CLEF  Workshop, Budapest. McWhorter, John. . Identifying the creole prototype: Vindicating a typological class. Language (). –. McWhorter, John. . Defining ‘creole’ as a synchronic term. In Ingrid Neumann-Holzschuh & Edgar W. Schneider (eds.), Degrees of restructuring in creole languages, –. Amsterdam/ Philadelphia: John Benjamins. Megerdoomian, Karine. . The status of the nominal in Persian complex predicates. Natural Language and Linguistic Theory (). –. Meibauer, Jörg. . How marginal are phrasal compounds? Generalized insertion, expressivity, and I/Q-interaction. Morphology . –.






Meier, Richard P. . A psycholinguistic perspective on phonological segmentation in sign and speech. In Geoffrey R. Coulter (ed.), Phonetics and Phonology Volume : Current issues in American Sign Language phonology, –. San Diego, CA: Academic Press.
Meier, Richard P. . The acquisition of verb agreement. In Gary Morgan & Bencie Woll (eds.), Directions in Sign Language Acquisition, –. Amsterdam/Philadelphia: John Benjamins.
Meier, Richard P. & Raquel Willerman. . Prelinguistic gesture in deaf and hearing children. In Karen Emmorey & Judy Reilly (eds.), Language, gesture, and space, –. Hillsdale, NJ: Erlbaum.
Meir, Irit. . A cross-modality perspective on verb agreement. Natural Language & Linguistic Theory (). –.
Meir, Irit. . Question and negation in Israeli Sign Language. Sign Language & Linguistics (). –.
Meir, Irit. . Iconicity and metaphor: Constraints on metaphorical extension of iconic forms. Language (). –.
Meir, Irit. . Word classes and word formation. In Roland Pfau, Markus Steinbach, & Bencie Woll (eds.), Handbook on sign language linguistics, –. Berlin: De Gruyter Mouton.
Meir, Irit. . Grammaticalization is not the full story: A non-grammaticalization account of the emergence of sign language agreement morphemes. In Jenny Audring, Francesca Masini, & Wendy Sandler (eds.), Quo vadis morphology?—MMM On-line Proceedings, –. Universities of Leiden, Bologna, and Haifa.
Meir, Irit, Mark Aronoff, Wendy Sandler, & Carol Padden. . Sign languages and compounding. In Sergio Scalise & Irene Vogel (eds.), Cross-disciplinary issues in compounding, –. Amsterdam/Philadelphia: John Benjamins.
Meir, Irit, Carol Padden, Mark Aronoff, & Wendy Sandler. . Body as subject. Journal of Linguistics (). –.
Meir, Irit & Wendy Sandler. . A language in space: The story of Israeli Sign Language. New York: Erlbaum.
Meisel, Jürgen, Harald Clahsen, & Manfred Pienemann. . On determining developmental stages in natural second language acquisition. Studies in Second Language Acquisition . –.
Melnar, Lynette. . Caddo verb morphology. Lincoln, NE: University of Nebraska Press.
Méndez Dosuna, Julián & Carmen Pensado (eds.). . Naturalists at Krems. Papers from the Workshop on Natural Phonology and Natural Morphology (Krems – July ). Salamanca: Ediciones Universidad de Salamanca.
Merlini Barbaresi, Lavinia. . Evaluative morphology and pragmatics. In Nicola Grandi & Lívia Körtvélyessy (eds.), Edinburgh handbook of evaluative morphology, –. Edinburgh: Edinburgh University Press.
Meunier, Fanny & Juan Segui. . Morphological priming effect: The role of surface frequency. Brain and Language . –.
Meurers, Walt Detmar. . Towards a semantics for lexical rules as used in HPSG. Proceedings of the Acquilex II Workshop on the Formalisation and Use of Lexical Rules, –. Cambridge, UK.
Meurers, Walt Detmar. . On expressing lexical generalizations in HPSG. Nordic Journal of Linguistics (). –.
Michael, Lev. . The Nanti reality status system: Implications for the typological validity of the realis/irrealis contrast. Linguistic Typology (). –.
Michaelis, Laura. . Type-shifting in Construction Grammar: An integrated approach to aspectual coercion. Cognitive Linguistics (). –.
Michaelis, Laura. . Making the case for Construction Grammar. In Hans Boas & Ivan Sag (eds.), Sign-based Construction Grammar, –. Stanford, CA: CSLI.
Michaelis, Laura A. . Sign-Based Construction Grammar. In Thomas Hoffmann & Graeme Trousdale (eds.), The Oxford handbook of Construction Grammar, –. Oxford: Oxford University Press.






Michelucci, Pascal, Olga Fischer, & Christina Ljungberg (eds.). . Semblance and signification. Amsterdam/Philadelphia: John Benjamins.
Mikolov, Tomas, Kai Chen, Greg Corrado, & Jeffrey Dean. . Efficient estimation of word representations in vector space. arXiv preprint arXiv. . .
Milin, Petar, Dušica Filipović Đurdjević, & Fermín Moscoso del Prado Martín. . The simultaneous effects of inflectional paradigms and classes on lexical recognition: Evidence from Serbian. Journal of Memory and Language . –.
Milin, Petar, Victor Kuperman, Aleksandar Kostić, & Harald Baayen. . Words and paradigms bit by bit: An information-theoretic approach to the processing of inflection and derivation. In James P. Blevins & Juliette Blevins (eds.), Analogy in grammar: Form and acquisition, –. Oxford: Oxford University Press.
Miljan, Merilin. . Number in Estonian Sign Language. Trames . –.
Miller, Christopher. . Phonologie de la langue des signes québécoise: Structure simultanée et axe temporel. Montreal: University of Quebec PhD dissertation.
Miller, Gary. . Complex verb formation. Amsterdam/Philadelphia: John Benjamins.
Miller, Philip. . Clitics and constituents in Phrase Structure Grammar. New York: Garland.
Miller, Philip & Ivan Sag. . French clitic movement without clitics or movement. Natural Language and Linguistic Theory (). –.
Mirus, Gene, Jami Fisher, & Donna Jo Napoli. . Taboo expressions in American Sign Language. Lingua . –.
Mitchell, Erika. . Evidence from Finnish for Pollock’s theory of IP. Linguistic Inquiry (). –.
Mithun, Marianne. . The evolution of noun incorporation. Language . –.
Mithun, Marianne. a. Incorporation. In Geert Booij, Christian Lehmann, & Joachim Mugdan (eds.), Morphologie/Morphology. Ein internationales Handbuch zur Flexion und Wortbildung/An international handbook on inflection and word-formation, vol. , –. Berlin: De Gruyter Mouton.
Mithun, Marianne. 
b. The reordering of morphemes. In Spike Gildea (ed.), Reconstructing grammar. Comparative linguistics and grammaticalization, –. Amsterdam/Philadelphia: John Benjamins. Mohanan, Karuvannur P. . The theory of Lexical Phonology. Dordrecht: Springer. Mohanan, Karuvannur P. & Tara Mohanan. . On representations in grammatical semantics. In Tara Mohanan & Lionel Wee (eds.), Grammatical semantics: Evidence for structure in meaning, –. Stanford, CA: CSLI. Mohanan, Tara. . Wordhood and lexicality: Noun incorporation in Hindi. Natural Language and Linguistic Theory (). –. Monachesi, Paola. . Decomposing Italian clitics. In Sergio Balari & Luca Dini (eds.), Romance in HPSG, –. Stanford, CA: CSLI. Monsell, Stephen. . Repetition and the lexicon. In Andrew W. Ellis (ed.), Progress in the psychology of language, vol. , –. Hillsdale, NJ: Erlbaum. Montermini, Fabio. . A new look on word-internal anaphora on the basis of Italian data. Lingue e Linguaggio V(). –. Montrul, Silvina. . Incomplete acquisition and attrition of Spanish tense/aspect distinctions in adult bilinguals. Bilingualism: Language and Cognition . –. Moravcsik, Edith A. . Reduplicative constructions. In Joseph H. Greenberg (ed.), Universals of human language. Volume : Word structure, –. Stanford, CA: Stanford University Press. Morgan, Hope. . ‘FIFTH’ but not ‘FIVE-DAYS-AGO’: Numeral incorporation in Kenyan Sign Language. Paper presented at TISLR , London , – July . Morgan-Short, Kara, Karsten Steinhauer, Cristina Sanz, & Michael T. Ullman. . Explicit and implicit second language training differentially affect the achievement of native-like brain activation patterns. Journal of Cognitive Neuroscience (). –. Morpurgo Davies, Anna. . Nineteenth-century linguistics, vol. IV. London: Longman.






Morris, Joanna, Tiffany Frank, Jonathan Grainger, & Phillip J. Holcomb. . Semantic transparency and masked morphological priming: An ERP investigation. Psychophysiology . –. Mörth, Karlheinz & Wolfgang U. Dressler. . German plural doublets with and without meaning differentiation. In Franz Rainer, Francesco Gardani, Hans Christian Luschützky, & Wolfgang U. Dressler (eds.), Morphology and meaning: Selected papers from the th International Morphology Meeting, Vienna, February , –. Amsterdam/Philadelphia: John Benjamins. Morton, John & Karalyn Patterson. . A new attempt at an interpretation, or, an attempt at a new interpretation. In Max Coltheart, Karalyn Patterson, & John C. Marshall (eds.), Deep dyslexia, –. London: Routledge & Kegan Paul. Mos, Maria. . Complex lexical items. Tilburg: Tilburg University PhD dissertation. Moscoso del Prado Martín, Fermín. . Co-occurrence and the effect of inflectional paradigms. Lingue e linguaggio (). –. Moscoso del Prado Martín, Fermín, Avital Deutsch, Ram Frost, Robert Schreuder, Nivja H. de Jong, & Harald Baayen. . Changing places: A cross-language perspective on frequency and family size in Dutch and Hebrew. Journal of Memory and Language . –. Moscoso del Prado Martín, Fermín, Aleksandar Kostić, & Harald Baayen. . Putting the bits together: An information-theoretical perspective on morphological processing. Cognition (). –. Mugdan, Joachim. . Units of word-formation. In Peter O. Müller, Ingeborg Ohnheiser, Susan Olsen, & Franz Rainer (eds.), Word-formation. An international handbook of the languages of Europe, vol. , –. Berlin: De Gruyter Mouton. Mulder, Kimberley, Ton Dijkstra, Robert Schreuder, & Harald Baayen. . Effects of primary and secondary morphological family size in monolingual and bilingual word processing. Journal of Memory and Language . –. Müller, Peter O., Ingeborg Ohnheiser, Susan Olsen, & Franz Rainer (eds.). . 
Word-formation. An international handbook of the languages of Europe, vol. . Berlin: De Gruyter Mouton. Müller, Stefan. . The Babel-System. An HPSG fragment for German, a parser, and a dialogue component. Proceedings of the Fourth International Conference on the Practical Application of Prolog, –. London. Müller, Stefan. . Complex predicates: Verbal complexes, resultative constructions, and particle verbs in German. Stanford, CA: CSLI. Müller, Stefan. . Solving the bracketing paradox: An analysis of the morphology of German particle verbs. Journal of Linguistics . –. Müller, Stefan. a. HPSG—A synopsis. In Artemis Alexiadou & Tibor Kiss (eds.), Syntax: Theory and analysis. An international handbook, vol. , –. Berlin: De Gruyter Mouton. Müller, Stefan. b. The Core-Gram Project: Theoretical linguistics, theory development and verification. Journal of Language Modelling (). –. Müller, Stefan. . Default inheritance and derivational morphology. In Martijn Wieling, Martin Kroon, Gertjan van Noord, & Gosse Bouma (eds.), From semantics to dialectometry: Festschrift in Honor of John Nerbonne, –. UK: College Publications. Müller, Stefan & Stephen Wechsler. . Lexical approaches to argument structure. Theoretical Linguistics . –. Munske, Horst Haider. . Wortbildungswandel. In Mechthild Habermann, Peter O. Müller, & Horst Haider Munske (eds.), Historische Wortbildung des Deutschen, –. Tübingen: Niemeyer. Murphy, Gregory. . The big book of concepts. Cambridge, MA: MIT Press. Murrell, Graham A. & John Morton. . Word recognition and morphemic structure. Journal of Experimental Psychology . –. Mutaka, Ngessimo M. & Larry M. Hyman. . Syllables and morpheme integrity in Kinande reduplication. Phonology . –. Muysken, Pieter. . Approaches to affix order. Linguistics . –.






Myers-Scotton, Carol. . Contact linguistics: Bilingual encounters and grammatical outcomes. Oxford: Oxford University Press. Nakipoglu, Mine & Ketrez, Nihan. . Children’s overregularizations and irregularizations of the Turkish aorist. In David Bamman, Tatiana Magnitskaia, & Colleen Zaller (eds.), Proceedings of the th Annual Boston University Conference on Language Development, –. Somerville, MA: Cascadilla Press. Napoli, Donna Jo, Nathan Sanders, & Rebecca Wright. . On the linguistic effects of articulatory ease, with a focus on sign languages. Language (). –. Napoli, Donna Jo & Rachel Sutton-Spence. . Limitations on simultaneity in sign language. Language (). –. Napoli, Donna Jo & Rachel Sutton-Spence. . Order of the major constituents in sign languages: Implications for all language. Frontiers in Psychology . . Special issue on Language by mouth and by hand edited by Iris Berent & Susan Goldin-Meadow. Napoli, Donna Jo & Jeff Wu. . Morpheme structure constraints on two-handed signs in American Sign Language: notions of symmetry. Sign Language Studies (). –. Nathan, Geoffrey S. . Phonology: A Cognitive Grammar introduction. Amsterdam/Philadelphia: John Benjamins. Nau, Nicole. . Inflection vs. derivation: How split is Latvian morphology? Sprachtypologie und Universalienforschung (). –. Nespor, Marina & Wendy Sandler. . Prosody in Israeli Sign Language. Language and Speech (–). –. Nespor, Marina & Irene Vogel. . Prosodic phonology. Dordrecht: Foris. Nesset, Tore. . Russian conjugation revisited: A cognitive approach to aspects of Russian verb inflection [Tromsø Studies in Linguistics ]. Oslo: Novus Press. Nesset, Tore. . Allomorphy in the usage-based model: The Russian past passive participle. Cognitive Linguistics . –. Nesset, Tore. . Abstract phonology in a concrete model: Cognitive linguistics and the morphology– phonology interface. 
Berlin: De Gruyter Mouton.
Nesset, Tore & Laura A. Janda. . Paradigm structure: Evidence from Russian suffix shift. Cognitive Linguistics . –.
Neumann, Ingrid. . Le créole de Breaux Bridge, Louisiane: étude morphosyntaxique, textes, vocabulaire. Hamburg: Helmut Buske Verlag.
Neville, Helen, Janet L. Nicol, Andrew Barss, Kenneth I. Forster, & Merrill F. Garrett. . Syntactically based sentence processing classes: Evidence from event-related brain potentials. Journal of Cognitive Neuroscience . –.
Nevins, Andrew I. . Haplological dissimilation at distinct stages of exponence. In Jochen Trommer (ed.), The morphology and phonology of exponence, –. Oxford: Oxford University Press.
Newmeyer, Frederick J. . Linguistic theory in America, 2nd edn. New York: Academic Press.
Newmeyer, Frederick J. . Generative linguistics: A historical perspective. London: Routledge.
Nichols, Johanna. . Head-marking and dependent-marking grammar. Language (). –.
Nichols, Johanna. . Linguistic diversity in space and time. Chicago: The University of Chicago Press.
Nichols, Johanna. . Linguistic complexity: A comprehensive definition and survey. In Geoffrey Sampson, David Gil, & Peter Trudgill (eds.), Language complexity as an evolving variable, –. Oxford: Oxford University Press.
Nichols, Johanna. . Canonical head marking: Morphology in the relational parts of grammar. Paper presented at Surrey Linguistics Circle, University of Surrey, Guildford,  November .
Nichols, Johanna. To appear. Canonical complexity. In Peter Arkadiev & Francesco Gardani (eds.), The complexity of morphology.
Nida, Eugene A. . The identification of morphemes. Language . –.






Nida, Eugene A. . Morphology: The descriptive analysis of words, nd edn. Ann Arbor: University of Michigan Press. Nida, Eugene A. . A system for the description of semantic elements. Word . –. Niepokuj, Mary K. . The historical development of reduplication, with special reference to IndoEuropean. Berkeley: University of California-Berkeley PhD dissertation. Nijhof, Sibylla & Inge Zwitserlood. . Pluralization in sign language of the Netherlands (NGT). In Jan Don & Ted Sanders (eds.), OTS Yearbook –, –. Utrecht: Utrechts Instituut voor Linguistiek OTS. Nikolaeva, Irina. . Unpacking finiteness. In Dunstan Brown, Marina Chumakina, & Greville G. Corbett (eds.), Canonical morphology and syntax, –. Oxford: Oxford University Press. Nikolaeva, Irina & Andrew Spencer. . Nouns as adjectives and adjectives as nouns. Manuscript, SOAS & University of Essex. Nikolaeva, Irina & Andrew Spencer. . Possession and modification: A perspective from Canonical Typology. In Dunstan Brown, Marina Chumakina, & Greville G. Corbett (eds.), Canonical morphology and syntax, –. Oxford: Oxford University Press. Niño, María-Eugenia. . The multiple expression of inflectional information. In Francis Corblin, Daniele Godard, & Jean-Marie Marandin (eds.), Empirical Issues in Formal Syntax and Semantics, –. Berne: Peter Lang. Niswander, Elizabeth, Alexander Pollatsek, & Keith Rayner. . The processing of derived and inflected suffixed words during reading. Language and Cognitive Processes . –. Niswander-Klement, Elizabeth & Alexander Pollatsek. . The effects of root frequency, word frequency, and length on the processing of prefixed English words during reading. Memory & Cognition . –. Noccetti, Sabrina, Anna De Marco, Livia Tonelli, & Wolfgang U. Dressler. . The role of diminutives in the acquisition of Italian morphology. In Ineta Savickienė & Wolfgang U. Dressler (eds.), The acquisition of diminutives. 
A cross-linguistic perspective, –. Amsterdam/Philadelphia: John Benjamins. Nooteboom, Sieb, Fred Weerman, & Frank Wijnen (eds.). . Storage and computation in the language faculty. Dordrecht: Kluwer. Nordlinger, Rachel. . Constructive case: Evidence from Australian languages. Stanford, CA: CSLI. Nordlinger, Rachel. . Australian case systems: Towards a constructive solution. In Miriam Butt & Tracy Holloway King (eds.), Argument realization, –. Stanford, CA: CSLI. Nordlinger, Rachel & Joan Bresnan. . Nonconfigurational tense in Wambaya. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of LFG. Stanford, CA: CSLI. Nordlinger, Rachel & Joan Bresnan. . Lexical-Functional Grammar: Interactions between morphology and syntax. In Robert D. Borsley & Kersti Börjars (eds.), Non-transformational syntax: Formal and explicit models of grammar, –. Oxford: Wiley-Blackwell. Nordlinger, Rachel & Louisa Sadler. . Tense beyond the verb: Encoding clausal tense/aspect/ mood on nominal dependents. Natural Language and Linguistics Theory . –. Nordlinger, Rachel & Louisa Sadler. . Case stacking in Realizational Morphology. Linguistics (). –. Norris, Dennis. . Models of visual word recognition. Trends in Cognitive Sciences (). –. Noyer, Rolf. . Features, positions and affixes in autonomous morphological structure. Cambridge, MA: MIT PhD dissertation. Noyer, Rolf. . Features, positions and affixes in autonomous morphological structure. New York: Garland. Nübling, Damaris. . How do exceptions arise? On different paths to morphological irregularity. In Horst J. Simon & Heike Wiese (eds.), Expecting the unexpected: Exceptions in grammar, –. Berlin: De Gruyter Mouton. Nyst, Victoria Anna Sophie. . A descriptive analysis of Adamorobe Sign Language (Ghana). Utrecht: LOT. Ó Séaghdha, Diarmuid. . Annotating and learning compound noun semantics. 
In Proceedings of the ACL- Student Research Workshop, –. Prague, Czech Republic.






Ó Séaghdha, Diarmuid. . Learning compound noun semantics. Cambridge: University of Cambridge PhD dissertation. O’Connor, Rob. . Clitics and phrasal affixation in Constructive Morphology. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of LFG, –. Stanford, CA: CSLI. O’Connor, Rob. . Information structure in Lexical-Functional Grammar: The discourse–prosody correspondence in English and Serbo-Croation. Manchester: University of Manchester PhD dissertation. O’Donnell, Timothy. . Productivity and reuse in language: A theory of linguistic computation and storage. Cambridge, MA: MIT Press. O’Neill, Paul. a. The morphome and morphosyntactic/semantic features. In Silvio Cruschina, Martin Maiden, & John Charles Smith (eds.), The boundaries of pure morphology: Diachronic and synchronic perspectives, –. Oxford: Oxford University Press. O’Neill, Paul. b. The notion of the morphome. In Martin Maiden, John Charles Smith, Maria Goldbach, & Marc-Olivier Hinzelin (eds.), Morphological autonomy: Perspectives from Romance inflectional morphology, –. Oxford: Oxford University Press. O’Neill, Paul. . The morphome in constructive and abstractive theories of morphology. Morphology . –. O’Neill, Paul. . Lexicalism, the principle of morphology-free syntax and the principle of syntaxfree morphology. In Andrew Hippisley & Gregory Stump (eds.), The Cambridge handbook of morphology, –. Cambridge: Cambridge University Press. Oetting, Janna & Janice Horohov. . Past tense marking by children with and without specific language impairment. Journal of Speech, Language, and Hearing Research . –. Ogawa, Yuko. . Vertical scale metaphors in Japanese and Japanese Sign Language. Washington, DC: Gallaudet University MA thesis. Onvlee, Louis. . Kamberaas-Nederlands woordenboek. Dordrecht: Foris. Orie, Olanike-Ola. . Benue-Congo Prosodic phonology and morphology in Optimality Theory. Munich: LINCOM. 
Östman, Jan-Ola & Mirjam Fried (eds.). . Construction grammars. Cognitive grounding and theoretical extensions. Amsterdam/Philadelphia: John Benjamins. Otero, Carlos. . The dictionary in generative grammar. Manuscript, UCLA. Otoguro, Ryo. . Constructional paradigm in constraint-based morphosyntax: A case of Japanese verb inflection. In Kayla Carpenter, David Oana, Florian Lionnet, Christine Sheil, Tammy Stark, & Vivian Wauters (eds.), Proceedings of the th Annual Meeting of the Berkeley Linguistics Society, –. Berkeley: Berkeley Linguistics Society. Owen, Amanda J. . Factors affecting accuracy of past tense production in children with specific language impairment and their typically-developing peers: The influence of verb transitivity, clause location, and sentence type. Journal of Speech, Language, and Hearing Research . –. Paciaroni, Tania. . Noun inflection classes in Maceratese. In Sascha Gaglia & Marc-Olivier Hinzelin (eds.), Inflection and word-formation in Romance languages, –. Amsterdam/Philadelphia: John Benjamins. Padden, Carol. . Interaction of morphology and syntax in American Sign Language. New York, NY: Garland Publishing. Padden, Carol. . The ASL lexicon. Sign Language & Linguistics . –. Padden, Carol & Darline Clark Gunsauls. . How the alphabet came to be used in a sign language. Sign Language Studies (). –. Padden, Carol, Irit Meir, So-One Hwang, Ryan Lepic, Tory Sampson, & Sharon Seegers. . Patterned iconicity in sign language lexicons. Gesture (). –. Padden, Carol & David Perlmutter. . American Sign Language and the architecture of phonological theory. Natural Language & Linguistic Theory , –. Padó, Sebastian & Mirella Lapata. . Dependency-based construction of semantic space models. Computational Linguistics (). –. Pagy, Fabiane Elias. . Reduplicação na língua brasileira de sinais (LIBRAS). Brasília: Universidade de Brasília MA thesis.






Palancar, Enrique L. . The conjugation classes of Tilapa Otomi: An approach from canonical typology. Linguistics (). –.
Palancar, Enrique L. . A mixed system of agreement in the suffix classes of Lealao Chinantec. Morphology (). –.
Pantcheva, Marina. . Decomposing path: The nanosyntax of directional expressions. Tromsø: University of Tromsø PhD dissertation.
Paster, Mary. . Phonological Conditions on Affixation. Berkeley: University of California-Berkeley PhD dissertation.
Paster, Mary. . Optional plural marking. In Franz Rainer, Wolfgang U. Dressler, Dieter Kastovsky, & Hans Christian Luschützky (eds.), Variation and change in morphology, –. Amsterdam/Philadelphia: John Benjamins.
Paster, Mary. . Explaining phonological conditions on affixation: Evidence from suppletive allomorphy and affix ordering. Word Structure (). –.
Paterson, Kevin B., Alison Alcock, & Simon P. Liversedge. . Morphological priming during reading: Evidence from eye-movements. Language and Cognitive Processes . –.
Paul, Hermann. . Principien der Sprachgeschichte, 2nd edn. Halle: Niemeyer.
Paul, Hermann. . Prinzipien der Sprachgeschichte, th edn. Halle: Niemeyer.
Paul, Peter V. . Language and deafness, th edn. Sudbury, MA: Jones & Bartlett Publishers.
Pawley, Andrew & Frances H. Syder. . Two puzzles for linguistic theory: Nativelike selection and nativelike fluency. In Jack Richards & Richard W. Schmidt (eds.), Language and communication, –. London/New York: Longman.
Peirce, Charles S. . Collected papers, edited by Charles Hartshorne & Paul Weiss. Cambridge, MA: The Belknap Press of Harvard University Press.
Penke, Martina, Helga Weyerts, Matthias Gross, Elke Zander, Thomas F. Münte, & Harald Clahsen. . How the brain processes complex words: An ERP-study of German verb inflections. Cognitive Brain Research . –.
Perea, Manuel & Manuel Carreiras. . Do transposed-letter effects occur across lexeme boundaries? Psychonomic Bulletin & Review . –.
Pereira, Fernando. . Formal grammar and information theory: Together again? In Bruce E. Nevin & Stephen B. Johnson (eds.), The legacy of Zellig Harris: Language and information into the th century. Vol. : Mathematics and computability of language, –. Amsterdam/Philadelphia: John Benjamins.
Perlmutter, David. . The split morphology hypothesis: Evidence from Yiddish. In Michael Hammond & Michael Noonan (eds.), Theoretical morphology: Approaches in modern linguistics, –. San Diego, CA: Academic Press.
Perniss, Pamela, Robin L. Thompson, & Gabriella Vigliocco. . Iconicity as a general property of language: Evidence from spoken and signed languages. Frontiers in Psychology . doi: ./ fpsyg...
Pesetsky, David. . Morphology and logical form. Linguistic Inquiry (). –.
Pesetsky, David & Esther Torrego. . T-to-C movement: causes and consequences. In Michael Kenstowicz (ed.), Ken Hale: A life in language, –. Cambridge, MA: MIT Press.
Pesetsky, David & Esther Torrego. . The syntax of valuation and the interpretability of features. In Simin Karimi, Vida Samiian, & Wendy K. Wilkins (eds.), Phrasal and clausal architecture: Syntactic derivation and interpretation, –. Amsterdam/Philadelphia: John Benjamins.
Pfau, Roland. . Features and categories in language production. Frankfurt: University of Frankfurt PhD dissertation.
Pfau, Roland. . Grammar as processor: a Distributed Morphology account of spontaneous speech errors. Amsterdam/Philadelphia: John Benjamins.
Pfau, Roland & Josep Quer. . Nonmanuals: Their grammatical and prosodic roles. In Diane Brentari (ed.), Sign languages, –. Cambridge: Cambridge University Press.
Pfau, Roland & Markus Steinbach. . Backward and sideward reduplication in German Sign Language. In Bernhard Hurch (ed.), Studies on reduplication, –. 
Berlin: De Gruyter Mouton.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





Pfau, Roland & Markus Steinbach. . Pluralization in sign and in speech: A cross-modal typological study. Linguistic Typology (). –. Pietrandrea, Paola. . Iconicity and arbitrariness in Italian Sign Language. Sign Language Studies /. –. Pinker, Steven. . Language learnability and language development. Cambridge, MA: Harvard University Press. Pinker, Steven. . Words and rules. Lingua . –. Pinker, Steven. . Words and rules: The ingredients of language. New York: Basic Books. Pinker, Steven & Alan Prince. . On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition (–). –. Pinker, Steven & Alan Prince. . Regular and irregular morphology and the psychological status of rules of grammar. Proceedings of the annual meeting of the Berkeley Linguistics Society , –. Berkeley: Berkeley Linguistics Society. Pinker, Steven & Michael T. Ullman. . The past and future of the past tense. Trends in Cognitive Sciences . –. Pirrelli, Vito. . Paradigmi in morfologia. Un approccio interdisciplinare alla flessione verbale dell’italiano. Pisa: Istituti Editoriali e Poligrafici Internazionali. Pirrelli, Vito & Marco Battista. . The paradigmatic dimension of stem allomorphy in Italian verb inflection. Italian Journal of Linguistics (). –. Pirrelli, Vito & Marco Battista. . Syntagmatic and paradigmatic issues in computational morphology. Linguistica Computazionale XVIII–XIX. –. Pirrelli, Vito, Marcello Ferro, & Basilio Calderone. . Learning paradigms in time and space. Computational evidence from Romance languages. In Martin Maiden, John Charles Smith, Maria Goldbach, & Marc-Olivier Hinzelin (eds.), Morphological autonomy: Perspectives from Romance inflectional morphology, –. Oxford: Oxford University Press. Pirrelli, Vito, Marcello Ferro, & Claudia Marzi. . Computational complexity of abstractive morphology. 
In Matthew Baerman, Dunstan Brown, & Greville G. Corbett (eds.), Understanding and measuring morphological complexity, –. Oxford: Oxford University Press. Pizzuto, Elena, Emanuela Cameracanna, Serena Corazza, & Virginia Volterra. . Terms for spatiotemporal relations in Italian Sign Language. In Raffaele Simone (ed.), Iconicity in language, –. Amsterdam/Philadelphia: John Benjamins. Pizzuto, Elena & M. Cristina Caselli. . The acquisition of Italian morphology: Implications for models of language development. Journal of Child Language . –. Pizzuto, Elena & Serena Corazza. . Noun morphology in Italian Sign Language (LIS). Lingua . –. Plag, Ingo. . Morphological productivity. Structural constraints in English derivation. Berlin: De Gruyter Mouton. Plag, Ingo. . The nature of derivational morphology in creoles and non-creoles. Journal of Pidgin and Creole Languages (). –. Plag, Ingo. . The role of selectional restrictions, phonotactics and parsing in constraining suffix order in English. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. Plag, Ingo (ed.). . Phonology and morphology in Creole languages. Tübingen: Niemeyer. Plag, Ingo. . Syntactic category information and the semantics of derivational morphological rules. Folia Linguistica (–). –. Plag, Ingo. . Morphology in pidgins and creoles. In Keith Brown (ed.), Encyclopedia of Language and Linguistics, vol. , –. Oxford: Elsevier. Plag, Ingo. . Creoles as interlanguages: Inflectional morphology. Journal of Pidgin and Creole Languages (). –. Plag, Ingo & Harald Baayen. . Suffix ordering and morphological processing. Language . –. Plank, Frans. . Paradigm size, morphological typology, and universal economy. Folia Linguistica . –.

Plank, Frans (ed.). . Paradigms. The economy of inflection. Berlin: De Gruyter Mouton. Plank, Frans. . Inflection and derivation. In Ron E. Asher (ed.), The encyclopedia of language and linguistics, vol. , –. Oxford: Pergamon Press. Plank, Frans. . Split morphology: How agglutination and flexion mix. Linguistic Typology (). –. Plank, Frans. (no date). Introductory notes. Das grammatische Raritätenkabinett, University of Konstanz [http://typo.uni-konstanz.de/rara/intro/index.php?pt=, accessed  January ]. Plaut, David C., James L. McClelland, Mark S. Seidenberg, & Karalyn Patterson. . Understanding normal and impaired word reading: Computational principles in quasi-regular domains. Psychological Review (). –. Plungian, Vladimir A. . Agglutination and flexion. In Martin Haspelmath, Ekkehard König, Wulf Oesterreicher, & Wolfgang Raible (eds.), Language typology and language universals. An international handbook, vol. , –. Berlin: De Gruyter Mouton. Plungian, Vladimir A. . Vvedenie v grammatičeskuju semantiku: Grammatičeskie značenija i grammatičeskie sistemy jazykov mira [An introduction to grammatical semantics: Grammatical meanings and grammatical systems in the languages of the world]. Moscow: Russian State University for the Humanities [Lithuanian translation: Gramatinių kategorijų tipologija [Typology of grammatical categories.] vols. I–II. Vilnius: Vilnius University; Academia Salensis, , ]. Plunkett, Kim & Patrick Juola. . A connectionist model of English past tense and plural morphology. Cognitive Science . –. Plunkett, Kim & Virginia Marchman. . From rote learning to system building: Acquiring verb morphology in children and connectionist nets. Cognition . –. Podlesskaya, Vera I. . A corpus-based study of self-repairs in Russian spoken monologues. Russian Linguistics . –. Poizner, Howard, Ursula Bellugi, & Venita Lutes-Driscoll. . 
Perception of American Sign Language in dynamic point-light displays. Journal of Experimental Psychology: Human Perception and Performance . –. Polinsky, Maria. . Non-canonical agreement is canonical. Transactions of the Philological Society (). –. Special issue on Agreement: A typological perspective edited by Dunstan Brown, Greville G. Corbett, & Carole Tiberius. Pollard, Carl & Ivan Sag. . Information-based syntax and semantics. Stanford, CA: CSLI. Pollard, Carl & Ivan Sag. . Head-Driven Phrase Structure Grammar. Chicago: The University of Chicago Press. Post, Brechtje, William Marslen-Wilson, Billi Randall, & Lorraine K. Tyler. . The processing of English regular inflections: Phonological cues to morphological structure. Cognition . –. Postal, Paul M. . Anaphoric islands. In Robert I. Binnick, Alice Davison, Georgia M. Green, & Jerry L. Morgan (eds.), Papers from the Fifth Regional Meeting of the Chicago Linguistic Society, –. Chicago: The University of Chicago. Prasada, Sandeep & Steven Pinker. . Generalization of regular and irregular morphological patterns. Language and Cognitive Processes . –. Preminger, Omer. . Breaking agreements: Distinguishing agreement and clitic-doubling by their failures. Linguistic Inquiry (). –. Prévost, Philippe & Lydia White. . Missing surface inflection or impairment in second language acquisition? Evidence from tense and agreement. Second Language Research (). –. Prillwitz, Siegmund & Regina Leven. . Skizzen zu einer Grammatik der deutschen Gebärdensprache. Hamburg: Forschungsstelle Deutsche Gebärdensprache. Prince, Alan S. & Paul Smolensky. . Optimality Theory: constraint interaction in Generative Grammar. Malden, MA: Blackwell. Pulleyblank, Douglas. . Patterns of reduplication in Yoruba. In Kristin Hanson & Sharon Inkelas (eds.), The nature of the word: Essays in honor of Paul Kiparsky, –. Cambridge, MA: MIT Press.

Pullum, Geoffrey. . The central question in comparative syntactic metatheory. Mind and Language /. –. Pullum, Geoffrey & Arnold M. Zwicky. . Phonological resolution of syntactic feature conflict. Language . –. Pümpel-Mader, Maria, Elsbeth Gassner-Koch, Hans Wellmann, & Lorelies Ortner. . Deutsche Wortbildung. Typen und Tendenzen in der Gegenwartssprache: Fünfter Hauptteil: Adjektivkomposita und Partizipialbildungen. (Sprache der Gegenwart ). Berlin: De Gruyter Mouton. Punske, Jeffrey. . Cyclicity versus movement: English nominalization and syntactic approaches to morpho-phonological regularity. Canadian Journal of Linguistics (). –. Pustejovsky, James. . The syntax of event structure. Cognition . –. Pustejovsky, James. . The Generative Lexicon. Cambridge, MA: MIT Press. Putnam, Michael T. & Antonio Fábregas. . On the need for formal features in the narrow syntax. In Peter Kosta, Steven L. Franks, Teodora Radeva-Bork, & Lilia Schürcks (eds.), Minimalism and beyond, –. Amsterdam/Philadelphia: John Benjamins. Pylkkänen, Liina. . Introducing arguments. Cambridge, MA: MIT PhD dissertation. Rácz-Engelhardt, Szilárd. . Morphological properties of mouthing in Hungarian Sign Language: Contact phenomena between a sign language and a Finno-Ugric spoken language. Paper presented at TISLR , London, – July . Radden, Günther & Klaus-Uwe Panther (eds.). . Studies in linguistic motivation. Berlin: De Gruyter Mouton. Radford, Andrew. . Syntactic theory and the acquisition of English syntax. Oxford: Blackwell. Radkevich, Nina. . On location: The structure of case and adpositions. Storrs: University of Connecticut PhD dissertation. Rainer, Franz. . Towards a theory of blocking: The case of Italian and German quality nouns. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer. Rainer, Franz. . I nomi di qualità nell’italiano contemporaneo. 
Vienna: Braumüller. Rainer, Franz. . Morphological metaphysics: Virtual, potential and actual words. Word Structure (). –. Rainer, Franz, Wolfgang U. Dressler, Francesco Gardani, & Hans Christian Luschützky (eds.). . Morphology and meaning. Amsterdam/Philadelphia: John Benjamins. Rainer, Franz, Wolfgang U. Dressler, Dieter Kastovsky, & Hans Christian Luschützky. . Editors’ introduction. In Franz Rainer, Wolfgang U. Dressler, Dieter Kastovsky, & Hans Christian Luschützky (eds.), Variation and change in morphology, –. Amsterdam/Philadelphia: John Benjamins. Ralli, Angela. . Morfologia [Morphology]. Athens: Patakis. Ralli, Angela. . On the role of allomorphy in inflectional morphology: Evidence from dialectal variation. In Giandomenico Sica (ed.), Open problems in linguistics and lexicography, –. Milano: Polimetrica. Ralli, Angela. . Morphology meets dialectology: Insights from Modern Greek dialects. Morphology (). –. Ralli, Angela. a. Verbal loanblend formation in Asia Minor Greek (Aivaliot). In Martine Vanhove, Thomas Stolz, Aina Urdze, & Hitomi Otsuka (eds.), Morphologies in contact. Studia Typologica (STUF), –. Berlin: Akademie Verlag. Ralli, Angela. b. Verbal loanblends in Griko and Heptanesian: A case study of contact morphology. L’Italia Dialettale LXXIII. –. Ralli, Angela. a. Compounding in Modern Greek. Dordrecht: Springer. Ralli, Angela. b. Compounding and its locus of realization: Evidence from Greek and Turkish. Word Structure (). –. Ralli, Angela. . Strategies and patterns of loan verb integration in Modern Greek varieties. In Angela Ralli (ed.), Contact morphology in Modern Greek dialects, –. Newcastle upon Tyne: Cambridge Scholars Publishing.

Ralli, Angela. . Dictionary of the dialectal varieties of Kydonies, Moschonisia and North-Eastern Lesbos. Athens: Foundation of Historical Studies. Ralli Angela & Marios Andreou. . Revisiting exocentricity in compounding: Evidence from Greek and Cypriot. In Ferenc Kiefer, Mária Ladányi, & Péter Siptár (eds.), Current issues in morphological theory, –. Amsterdam/Philadelphia: John Benjamins. Ralli, Angela, Dimitra Melissaropoulou, & Thanasis Tsiamas. . Restructuring in the nominal paradigm of Aivaliot and Moschonisiot. Studies in Greek Linguistics . –. Ramchand, Gillian. . Verb meaning and the lexicon: A first phase syntax. Cambridge: Cambridge University Press. Ramchand, Gillian & Peter Svenonius. . Deriving the Functional Hierarchy. Language Sciences . –. Ramscar, Michael & Melody Dye. . Learning language from the input: Why innate constraints can’t explain noun compounding. Cognitive Psychology . –. Ramscar, Michael, Melody Dye, James P. Blevins, & Harald Baayen. . Morphological development. In Amalia Bar-On & Dorit Ravid (eds.), Handbook of communications disorders: Theoretical, empirical, and applied linguistic perspectives. Berlin: De Gruyter Mouton. Ramscar, Michael & Daniel Yarlett. . Linguistic self-correction in the absence of feedback: A new approach to the logical problem of language acquisition. Cognitive Science . –. Rappaport Hovav, Malka & Beth Levin. . Two types of derived accomplishments. In Miriam Butt & Tracy King (eds.), Proceedings of the First LFG Conference, –. Grenoble: RANK Xerox. Rappaport Hovav, Malka & Beth Levin. . Building verb meaning. In Miriam Butt & Wilhelm Geuder (eds.), The projection of arguments: Lexical and compositional factors, –. Stanford, CA: CSLI. Rastle, Kathleen & Matthew H. Davis. . Morphological decomposition based on the analysis of orthography. Language and Cognitive Processes . –. Rastle, Kathleen, Matthew H. 
Davis, William D. Marslen-Wilson, & Lorraine K. Tyler. . Morphological and semantic effects in visual word recognition: A time-course study. Language and Cognitive Processes . –. Rastle, Kathleen, Matthew H. Davis, & Boris New. . The broth in my brother’s brothel: Morpho-orthographic segmentation in visual word recognition. Psychonomic Bulletin & Review (). –. Rastle, Kathleen, Lorraine K. Tyler, & William Marslen-Wilson. . New evidence for morphological errors in deep dyslexia. Brain and Language . –. Rathmann, Christian. . Event structure in American Sign Language. Austin: University of Texas PhD dissertation [http://www.doc.com/p-.html, accessed  May ]. Rathmann, Christian & Gaurav Mathur. . Is verb agreement the same cross-modally? In Richard P. Meier, Kearsy Cormier, & David Quinto-Pozos (eds.), Modality and structure in signed and spoken languages, –. Cambridge: Cambridge University Press. Ratnaparkhi, A. . Maximum entropy models for natural language ambiguity resolution. Philadelphia: University of Pennsylvania PhD dissertation. Raveh, Michal. . The contribution of frequency and semantic similarity to morphological processing. Brain and Language . –. Reape, Mike. . A formal theory of word order: A case study in West Germanic. Edinburgh: Edinburgh University PhD dissertation. Reape, Mike. . Domain union and word order variation in German. In John Nerbonne, Klaus Netter, & Carl Pollard (eds.), German in Head-Driven Phrase Structure Grammar, –. Stanford, CA: CSLI. Reape, Mike. . Getting things in order. In Harry Bunt & Arthur van Horck (eds.), Discontinuous constituency, –. Berlin: De Gruyter Mouton. Rescorla, Robert A. & Allan R. Wagner. . A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In Abraham H. Black & William F. Prokasy (eds.), Classical conditioning II, –. 
New York: Appleton-Century-Crofts.

Reuse, Willem J. de. . Polysynthesis as a typological feature. An attempt at a characterization from Eskimo and Athabaskan perspectives. In Marc-Antoine Mahieu & Nicole Tersis (eds.), Variations on polysynthesis: The Eskaleut languages, –. Amsterdam/Philadelphia: John Benjamins. Ricca, Davide. . Sintassi e semantica degli avverbi in -mente. In Maria Grossmann & Franz Rainer (eds.), La formazione delle parole in italiano, –. Tübingen: Niemeyer. Rice, Keren. . On the placement of inflection. Linguistic Inquiry . –. Rice, Keren. . Morpheme order and semantic scope: Word formation in the Athapaskan verb. Cambridge: Cambridge University Press. Rice, Keren. . Principles of affix ordering: an overview. Word Structure (). –. Richards, Ivor A. . The philosophy of rhetoric. Oxford: Oxford University Press. Richards, Norvin. . Uttering trees. Cambridge, MA: MIT Press. Richardson, John F., Mitchell Marks, & Amy Chukerman (eds.). . Papers from the Parasession on the Interplay of Phonology, Morphology and Syntax. Chicago: Chicago Linguistic Society. Riddle, Elizabeth. . A historical perspective on the productivity of the suffixes ‑ness and ‑ity. In Jacek Fisiak (ed.), Historical semantics - historical word-formation, –. Berlin: De Gruyter Mouton. Riehemann, Susanne Z. . Type-based derivational morphology. The Journal of Comparative Germanic Linguistics (). –. Riehemann, Susanne Z. . A constructional approach to idioms and word formation. Stanford: Stanford University PhD dissertation. Rissanen, Jorma. . Stochastic complexity in statistical inquiry. Singapore: World Scientific. Rizzi, Luigi. . Relativized minimality. Cambridge, MA: MIT Press. Rizzi, Luigi. . Some notes on linguistic theory and language development: The case of root infinitives. Language Acquisition . –. Roark, Brian & Richard Sproat. . Computational approaches to morphology and syntax. 
Oxford: Oxford University Press. Robins, Robert H. . In defence of WP. Transactions of the Philological Society . –. Roché, Michel & Marc Plénat. . Le jeu des contraintes dans la sélection du thème présuffixal. In Franck Neveu, Peter Blumenthal, Linda Hriba, Annette Gerstenberg, Judith Meinschaefer, & Sophie Prévost (eds.), e Congrès Mondial de Linguistique Française. Berlin, – juillet , –. Paris: Institut de Linguistique Française. Rochet, Bernard. . Perception and production of second-language speech sounds by adults. In Winifred Strange (ed.), Speech perception and linguistic experience: Issues in cross language research. Timonium, MD: York Press. Rodero Takahira, Aline Garcia & Rafael Dias Minussi. . Word formation patterns in Brazilian Sign Language. Abstract of paper presented at RALFe: Rencontres d’Automne de Linguistique formelle: Langage, Langues et Cognition, Université Paris , – November . Rodriguez-Fornells, Antoni, Harald Clahsen, Conxita Lleó, Wanda Zaake, & Thomas F. Münte. . Event-related brain responses to morphological violations in Catalan. Cognitive Brain Research . –. Rodriguez-Fornells, Antoni, Thomas F. Münte, & Harald Clahsen. . Morphological priming in Spanish verb forms: An ERP repetition priming study. Journal of Cognitive Neuroscience . –. Roelofs, Ardi. . Serial order in planning the production of successive morphemes of a word. Journal of Memory and Language . –. Roelofs, Ardi & Harald Baayen. . Morphology by itself in planning the production of spoken words. Psychonomic Bulletin & Review . –. Roeper, Thomas. . Compound syntax and head movement. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Kluwer. Roeper, Thomas. . Chomsky’s Remarks and the transformationalist hypothesis. In Pavol Štekauer & Rochelle Lieber (eds.), Handbook of word-formation, –. Dordrecht: Springer. 
Roeper, Thomas & Muffy Siegel. . A lexical transformation for verbal compounds. Linguistic Inquiry . –.

Rohlfs, Gerhard. . Studi e ricerche su lingue e dialetti d’Italia. Milano: Sansoni (st edn. Firenze ). Romaine, Suzanne. . On the productivity of word-formation rules and limits of variability in the lexicon. Australian Journal of Linguistics . –. Romaine, Suzanne. . Change in productivity. In Geert Booij, Christian Lehmann, Joachim Mugdan, & Stavros Skopeteas (eds.), Morphologie/Morphology. Ein internationales Handbuch zur Flexion und Wortbildung/An international handbook on inflection and word-formation, vol. , –. Berlin: De Gruyter Mouton. Ronneberger-Sibold, Elke. . Word creation: Definition-function-typology. In Franz Rainer, Wolfgang U. Dressler, Dieter Kastovsky, & Hans Ch. Luschützky (eds.), Variation and change in morphology: Selected papers from the th International Morphology Meeting, Vienna, February , –. Amsterdam/Philadelphia: John Benjamins. Rose, Sharon & Rachel Walker. . A typology of consonant agreement as correspondence. Language . –. Rosta, Andrew. . English syntax and Word Grammar theory. London: University College London PhD dissertation. Rottet, Kevin. . Functional categories and verb raising in Louisiana Creole. Probus . –. Round, Erich R. . Kayardild morphology and syntax. Oxford: Oxford University Press. Round, Erich R. & Greville G. Corbett. . The theory of feature systems: One feature versus two for Kayardild tense-aspect-mood. Morphology (). –. Rousseau, Jean. . La classification des langues au début du XIXe siècle. In Sylvain Auroux, E. F. K. Koerner, Hans-Josef Niederehe, & Kees Versteegh (eds.), History of the language sciences: An international handbook on the evolution of the study of language from the beginnings to the present, vol. , –. Berlin: De Gruyter Mouton. Rubach, Jerzy. . Cyclic and lexical phonology: The structure of Polish. Dordrecht: Foris. Rubba, Johanna E. . Discontinuous morphology in Modern Aramaic. 
San Diego: University of California-San Diego PhD dissertation. Rubenstein, Herbert & Irwin Pollack. . Word predictability and intelligibility. Journal of Verbal Learning and Verbal Behavior –. Rubino, Carl. . Reduplication. In Matthew Dryer, Martin Haspelmath, David Gil, & Bernard Comrie (eds.), World atlas of language structures, –. Oxford: Oxford University Press. Rueckl, Jay G. & Karen A. Aicher. . Are CORNER and BROTHER morphologically complex? Not in the long term. Language and Cognitive Processes (–). –. Rueckl, Jay G. & Anurag Rimzhim. . On the interaction of letter transpositions and morphemic boundaries. Language and Cognitive Processes . –. Rueter, Jack. . Adnominal person in the morphological system of Erzya (Suomalais-Ugrilaisen Seuran Toimituksia ). Helsinki: Suomalais-Ugrilainen Seura. Rumelhart, David E. & James L. McClelland. . On learning the past tense of English verbs. In David E. Rumelhart & James L. McClelland (eds.), Parallel distributed processing. Volume : Psychological and biological models, –. Cambridge, MA: MIT Press. Ruppenhofer, Josef & Laura Michaelis. . A constructional account of genre-based argument omissions. Constructions and Frames (). –. Russell, Kevin. . Optimality theory and morphology. In Diana Archangeli & D. Terence Langendoen (eds.), Optimality theory: An overview, –. Malden, MA: Blackwell. Russo, Tommaso. . Iconicity and productivity in sign language discourse: An analysis of three LIS discourse registers. Sign Language Studies (). –. Russo, Tommaso, Rosaria Giuranna, & Elena Pizzuto. . Italian Sign Language (LIS) poetry: Iconic properties and structural regularities. Sign Language Studies (). –. Saarinen, Pauliina & Jennifer Hay. . Affix ordering in derivation. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of derivational morphology, –. Oxford: Oxford University Press.

Saba Kirchner, Jesse. . Minimal reduplication. Santa Cruz: University of California-Santa Cruz PhD dissertation. Saba Kirchner, Jesse. . Minimal reduplication and reduplicative exponence. Morphology . –. Sadler, Louisa. . Clitics and the structure-function mapping. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of LFG. Stanford, CA: CSLI. Sadler, Louisa. . English auxiliaries as tense inflections. Special Issue of Essex Research Reports Produced on the Occasion of the Retirement of Keith Brown. Sadler, Louisa & Rachel Nordlinger. . Case stacking in realizational morphology. Linguistics . –. Sadler, Louisa & Andrew Spencer. . Syntax as an exponent of morphological features. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer. Sadler, Louisa & Andrew Spencer (eds.). . Projecting morphology. Stanford, CA: CSLI. Sadock, Jerrold M. . Noun incorporation in Greenlandic: A case of syntactic word-formation. Language . –. Sadock, Jerrold M. . Some notes on noun incorporation. Language . –. Sadock, Jerrold M. . Autolexical syntax: A theory of parallel grammatical representations. Chicago: The University of Chicago Press. Sag, Ivan. . Taking performance seriously. In Carlos Martin-Vide (ed.), Lenguajes naturales y lenguajes formales. Actas del VII Congreso de lenguajes naturales y lenguajes formales (Vic-Barcelona, – de septiembre de ), –. Barcelona: Promociones y Publicaciones Universitarias. Sag, Ivan. . English relative clause constructions. Journal of Linguistics (). –. Sag, Ivan. . Sign-Based Construction Grammar: An informal synopsis. In Hans Boas & Ivan Sag (eds.), Sign-Based Construction Grammar, –. Stanford, CA: CSLI. Sag, Ivan, Thomas Wasow, & Emily Bender. . Syntactic theory: A formal introduction. Stanford, CA: CSLI. Sagot, Benoît & Géraldine Walther. . 
Non-canonical inflection: Data, formalisation and complexity measures. In Cerstin Mahlow & Michael Piotrowski (eds.), SFCM : Communications in computer and information science, vol. , –. Berlin: Springer. Sahlgren, Magnus. . The Word Space Model. Stockholm: Stockholm University PhD dissertation. Salvioni, Carlo. . Fonetica e morfologia del dialetto milanese. L’Italia Dialettale . –. Samuels, Bridget D. . Phonological architecture. A biolinguistic perspective. Oxford: Oxford University Press. Samvelian, Pollet & Jesse Tseng. . Persian object clitics and the syntax–morphology interface. In Stefan Müller (ed.), Proceedings of the th International Conference on Head-Driven Phrase Structure Grammar, –. Stanford, CA: CSLI. Sanders, Nathan & Donna Jo Napoli. a. Reactive effort as a factor that shapes sign language lexicons. Language (). –. Sanders, Nathan & Donna Jo Napoli. b. A cross-linguistic preference for torso stability in the lexicon: Evidence from  sign languages. Sign Language & Linguistics (). –. Sandler, Wendy. . Phonological representation of the sign: Linearity and nonlinearity in American Sign Language. Berlin: De Gruyter Mouton. Sandler, Wendy. . Temporal aspects and ASL phonology. In Susan D. Fischer & Patricia Siple (eds.), Theoretical issues in sign language research, Volume : Linguistics, –. Chicago: University of Chicago Press. Sandler, Wendy. . Cliticization and prosodic words in a sign language. In Tracy Hall & Ursula Kleinhenz (eds.), Studies on the phonological word, –. Amsterdam/Philadelphia: John Benjamins. Sandler, Wendy. . The phonological organization of sign languages. Language and Linguistics Compass (). –.

Sandler, Wendy & Diane Lillo-Martin. . Sign language and linguistic universals. Cambridge: Cambridge University Press. Sandra, Dominiek. . On the representation and processing of compound words: Automatic access to constituent morphemes does not occur. The Quarterly Journal of Experimental Psychology A. –. Sandra, Dominiek. . The morphology of the mental lexicon: Internal word structure viewed from a psycholinguistic perspective. Language and Cognitive Processes . –. Sandra, Dominiek & Michel Fayol. . Spelling errors as another way into the mental lexicon. In Harald Baayen & Robert Schreuder (eds.), Morphological structure in language processing, –. Berlin: De Gruyter Mouton. Sano, Tetsuya & Nina Hyams. . Agreement, finiteness, and the development of null arguments. In Proceedings of NELS . –. Sapir, Edward. . Language: An introduction to the study of speech. New York: Harcourt, Brace & World. Sapir, Edward. . Sound patterns in language. Language . –. Sapir, Edward. . The status of linguistics as a science. Language . –. Sapir, Edward.  []. Language: An introduction to the study of speech. New York: Harcourt, Brace and World. Saussure, Ferdinand de. . Cours de linguistique générale. Paris: Payot. Saussure, Ferdinand de.  []. Course in general linguistics. English translation by Wade Baskin. New York: Philosophical Library. Saussure, Ferdinand de.  []. Cours de linguistique générale. Critical edition prepared by Tullio de Mauro. Paris: Payot. Saussure, Ferdinand de.  []. Course in general linguistics. English translation and editorial matter by Roy Harris. LaSalle, IL: Open Court. Saussure, René de. . Principes logiques de la formation de mots. Geneva: Librairie Kündig. Saussure, René de. . La structure logique des mots dans les langues naturelles, considérée au point de vue de son application aux langues artificielles. Berne: Librairie A. Lefilleul. 
Savini, Marina. . Phrasal compounds in Afrikaans. Stellenbosch, Republic of South Africa: University of Stellenbosch MA thesis. Say, Sergey. . Antipassive -sja verbs in Russian: Between inflection and derivation. In Wolfgang U. Dressler, Dieter Kastovsky, Oskar Pfeiffer, & Franz Rainer (eds.), Morphology and its demarcations, –. Amsterdam/Philadelphia: John Benjamins. Scalise, Sergio. . Morfologia lessicale. Padova: Clesp. Scalise, Sergio. . Generative morphology. Dordrecht: Foris. Scalise, Sergio. . Inflection and derivation. Linguistics (). –. Scalise, Sergio. . Constraints on the Italian suffix -mente. In Wolfgang U. Dressler, Hans C. Luschützky, Oskar E. Pfeiffer, & John R. Rennison (eds.), Contemporary morphology, –. Berlin: De Gruyter Mouton. Scalise, Sergio. . Compounding in Italian. Rivista di Linguistica (). –. Scalise, Sergio & Antonietta Bisetto. . The classification of compounds. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of compounding, –. Oxford: Oxford University Press. Scalise, Sergio & Antonio Fábregas. . The head in compounding. In Sergio Scalise & Irene Vogel (eds.), Cross-disciplinary issues in compounding, –. Amsterdam/Philadelphia: John Benjamins. Scalise, Sergio & Emiliano Guevara. . The lexicalist approach to word-formation and the notion of the lexicon. In Pavol Štekauer & Rochelle Lieber (eds.), Handbook of word-formation, –. Dordrecht: Springer. Scalise, Sergio & Emiliano Guevara. . Exocentric compounding in a typological framework. Lingue e Linguaggio V(). –.

Scarborough, Don L., Charles Cortese, & Hollis S. Scarborough. . Frequency and repetition effects in lexical memory. Journal of Experimental Psychology: Human Perception and Performance . –.
Schembri, Adam. . Rethinking ‘classifiers’ in signed languages. In Karen Emmorey (ed.), Perspectives on classifier constructions in sign languages, –. New York: Psychology Press.
Schembri, Adam & Trevor A. Johnston. . Sociolinguistic variation in the use of fingerspelling in Australian Sign Language: A pilot study. Sign Language Studies (). –.
Schembri, Adam, Caroline Jones, & Denis Burnham. . Comparing action gestures and classifier verbs of motion: Evidence from Australian Sign Language, Taiwan Sign Language, and nonsigners’ gestures without speech. Journal of Deaf Studies and Deaf Education (). –.
Scherer, Carmen. . Wortbildungswandel und Produktivität. Eine empirische Studie zur nominalen ‘-er’-Derivation im Deutschen. Tübingen: Niemeyer.
Scherer, Carmen. . Was ist Wortbildungswandel? Linguistische Berichte . –.
Schiller, Niels O. . Review of ‘Neurolinguistics. An Introduction to Spoken Language Processing and its Disorders’, John Ingram. Cambridge University Press, Cambridge (Cambridge Textbooks in Linguistics) (). xxi +  pp., ISBN ---- (pb). Lingua . –.
Schiller, Niels O. & Rinus G. Verdonschot. . Accessing words from the mental lexicon. In John R. Taylor (ed.), The Oxford handbook of the word, –. Oxford: Oxford University Press.
Schmidt, Günter Dietrich. . Das Affixoid. Zur Notwendigkeit und Brauchbarkeit eines beliebten Zwischenbegriffs der Wortbildung. In Gabriele Hoppe, Alan Kirkness, Elisabeth Link, Isolde Nortmeyer, Wolfgang Rettig, & Günter Dietrich Schmidt (eds.), Deutsche Lehnwortbildung. Beiträge zur Erforschung der Wortbildung mit entlehnten WB-Einheiten im Deutschen, –. Tübingen: Gunter Narr.
Schmidt, Richard. . The role of consciousness in second language learning. Applied Linguistics . –.
Schreuder, Robert & Harald Baayen. . Modeling morphological processing. In Laurie Beth Feldman (ed.), Morphological aspects of language processing, –. Hillsdale, NJ: Erlbaum.
Schreuder, Robert & Harald Baayen. . How complex simplex words can be. Journal of Memory and Language . –.
Schreuder, Robert, Cristina Burani, & Harald Baayen. . Parsing and semantic opacity. In Egbert Assink & Dominiek Sandra (eds.), Reading complex words: Cross-language studies, –. Dordrecht: Kluwer.
Schütze, Carson & Kenneth Wexler. . Subject case licensing and English root infinitives. In Andy Stringfellow, Dalia Cahana-Amitay, Elizabeth Hughes, & Andrea Zukowski (eds.), Proceedings of the th Annual Boston University Conference on Language Development, –. Somerville, MA: Cascadilla Press.
Schwager, Waldemar & Ulrike Zeshan. . Word classes in sign languages: Criteria and classifications. Studies in Language (). –.
Schwartz, Bonnie D. & Rex A. Sprouse. . L cognitive states and the Full Transfer/Full Access model. Second Language Research (). –.
Schwartz, Myrna F. . What the classical aphasia categories can’t do for us, and why. Brain and Language . –.
Seidenberg, Mark S. & Laura M. Gonnerman. . Explaining derivational morphology as the convergence of codes. Trends in Cognitive Sciences (). –.
Seifart, Frank. . The structure and use of shape-based noun classes in Miraña (North West Amazon). Nijmegen: Radboud University PhD dissertation.
Seiss, Melanie. . Implementing the morphology–syntax interface: Challenges from Murrinh-Patha verbs. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of LFG , –. Stanford, CA: CSLI.

Seiss, Melanie & Rachel Nordlinger. . Applicativizing complex predicates: A case study from Murrinh-Patha. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of LFG , –. Stanford, CA: CSLI.
Selkirk, Elisabeth O. . The syntax of words. Cambridge, MA: MIT Press.
Selkirk, Elisabeth. . On derived domains in sentence phonology. Phonology Yearbook . –.
Sells, Peter. . Syntactic information and its morphological expression. In Louisa Sadler & Andrew Spencer (eds.), Projecting morphology, –. Stanford, CA: CSLI.
Semenza, Carlo, Claudio Luzzatti, & Simona Carabelli. . Morphological representation of compound nouns: A study on Italian aphasic patients. Journal of Neurolinguistics . –.
Sereno, Joan A. & Allard Jongman. . Processing of English inflectional morphology. Memory and Cognition . –.
Seuren, Pieter & Herman Wekker. . Semantic transparency as a factor in creole genesis. In Peter Muysken & Norval Smith (eds.), Substrata versus universals in creole genesis, –. Amsterdam/Philadelphia: John Benjamins.
Seyfarth, Scott, Farrell Ackerman, & Robert Malouf. . Implicative organization facilitates morphological learning. In Herman Leung, Zachary O’Hagan, Sarah Bakst, Auburn Lutzross, Jonathan Manker, Nicholas Rolle, & Katie Sardinha (eds.), Proceedings of the th Annual Meeting of the Berkeley Linguistics Society, –. Berkeley: Berkeley Linguistics Society.
Shannon, Claude. . A mathematical theory of communication. The Bell System Technical Journal . –, –.
Sharma, Devyani. . Nominal clitics and constructive morphology in Hindi. In Miriam Butt & Tracy Holloway King (eds.), Proceedings of LFG. Stanford, CA: CSLI.
Shoben, Edward J. . Predicating and nonpredicating combinations. In Paula Schwanenflugel (ed.), The psychology of word meanings. Hillsdale, NJ: Erlbaum.
Shoolman, Natalie & Sally Andrews. . Racehorses, reindeer, and sparrows: Using masked priming to investigate morphological influences on compound word identification. In Sachiko Kinoshita & Stephen J. Lupker (eds.), Masked priming: The state of the art, –. New York: Psychology Press.
Siddiqi, Daniel. . Syntax within the word: Economy, allomorphy, and argument selection in Distributed Morphology. Amsterdam/Philadelphia: John Benjamins.
Siddiqi, Daniel. . Distributed Morphology. Language and Linguistics Compass (). –.
Siegel, Dorothy. . Topics in English morphology. Cambridge, MA: MIT PhD dissertation.
Siegel, Dorothy. . Topics in English morphology. New York: Garland.
Siegel, Jeff. a. Morphological simplicity in pidgins and creoles. Journal of Pidgin and Creole Languages (). –.
Siegel, Jeff. b. Morphological elaboration. Journal of Pidgin and Creole Languages (). –.
Siegel, Jeff. . The emergence of pidgin and creole languages. Oxford: Oxford University Press.
Siegel, Jeff. . The role of substrate transfer in the development of grammatical morphology in language contact varieties. Word Structure (). –.
Siewierska, Anna & Dik Bakker. . Passive agents: Prototypical vs. canonical passives. In Dunstan Brown, Marina Chumakina, & Greville G. Corbett (eds.), Canonical morphology and syntax, –. Oxford: Oxford University Press.
Sigurdsson, Halldór Ármann. . Case: abstract vs. morphological. In Ellen Brandner & Heike Zinzmeister (eds.), New perspectives on case theory, –. Stanford, CA: CSLI.
Sigurdsson, Halldór Ármann. . Meaningful silence, meaningless sounds. Linguistic Variation Yearbook . –.
Sigurdsson, Halldór Ármann. . The nominative puzzle and the low nominative hypothesis. Linguistic Inquiry . –.
Silva, Renita & Harald Clahsen. . Morphologically complex words in L and L processing: Evidence from masked priming experiments in English. Bilingualism: Language and Cognition . –.

Simonsen, Hanne Gram. . Past tense acquisition in Norwegian: Experimental evidence. In Hanne Gram Simonsen & Rolf Theil Endresen (eds.), A cognitive approach to the verb: Morphological and constructional perspectives, –. Berlin: De Gruyter Mouton.
Simpson, Jane. . Aspects of Warlpiri morphology and syntax. Cambridge, MA: MIT PhD dissertation.
Simpson, Jane. . Warlpiri morpho-syntax: A lexicalist approach. Dordrecht: Kluwer.
Simpson, Jane & Margaret Withgott. . Pronominal clitic clusters and templates. In Hagit Borer (ed.), Syntax and semantics : The syntax of pronominal clitics, –. New York: Academic Press.
Sims, Andrea D. . Inflectional defectiveness. Cambridge: Cambridge University Press.
Sinclair, Hermine. . Comparative linguistics and language acquisition. In Ramón Arzápalo Marín & Yolanda Lastra (eds.), Vitalidad e influencia de las lenguas indigenas en Latinoamérica. II Colloquio Mauricio Swadesh, –. Mexico: Universidad Nacional Autonoma de Mexico, Instituto de Investigaciones Antropologicas.
Skalička, Vladimír. . Typologische Studien. Braunschweig: Viehweg.
Skant, Andrea, Franz Dotter, Elisabeth Bergmeister, Marlene Hilzensauer, Manuela Hobel, Klaudia Krammer, Ingeborg Okorn, Christian Orasche, Reinhold Orter, & Natalie Unterberger. . Grammatik der Österreichischen Gebärdensprache. Klagenfurt: Forschungszentrum für Gebärdensprache und Hörgeschädigtenkommunikation.
Slabakova, Roumyana. . Telicity in the second language. Amsterdam/Philadelphia: John Benjamins.
Slobin, Dan. . The crosslinguistic study of language acquisition: Volumes  & . Hillsdale, NJ: Erlbaum.
Slobin, Dan. . The crosslinguistic study of language acquisition: Volume . Hillsdale, NJ: Erlbaum.
Slobin, Dan. . The crosslinguistic study of language acquisition: Volumes  & . Hillsdale, NJ: Erlbaum.
Smeets, Rieks. . On valencies, actants and actant coding in Circassian. In B. George Hewitt (ed.), Caucasian perspectives, –. Munich: LINCOM.
Smith, Neil V. . The Nupe verb. African Language Studies X. –.
Smolka, Eva, Matthias Gondan, & Frank Rösler. . Take a stand on understanding: Electrophysiological evidence for stem access in German complex verbs. Frontiers in Human Neuroscience . .
Smolka, Eva, Patrick H. Khader, Richard Wiese, Pienie Zwitserlood, & Frank Rösler. . Electrophysiological evidence for the continuous processing of linguistic categories of regular and irregular verb inflection in German. Journal of Cognitive Neuroscience . –.
Song, Jae Y., Megha Sundara, & Katherine Demuth. . Phonological constraints on children’s production of English rd person singular ‑s. Journal of Speech, Language, and Hearing Research (). –.
Sorace, Antonella & Francesca Filiaci. . Anaphora resolution in near-native speakers of Italian. Second Language Research (). –.
Soukka, Maria. . A descriptive grammar of Noon: A Cangin language of Senegal. Munich: LINCOM.
Spencer, Andrew. . Bracketing paradoxes and the English lexicon. Language (). –.
Spencer, Andrew. . Morphological theory. An introduction to word structure in Generative Grammar. Oxford: Blackwell.
Spencer, Andrew. . Incorporation in Chukchi. Language . –.
Spencer, Andrew. . Morphophonological operations. In Andrew Spencer & Arnold M. Zwicky (eds.), The handbook of morphology, –. Oxford: Blackwell.
Spencer, Andrew. a. Morphology and syntax. In Geert Booij, Christian Lehmann, & Joachim Mugdan (eds.), Morphologie/Morphology. Ein internationales Handbuch zur Flexion und Wortbildung/An international handbook on inflection and word-formation, vol. , –. Berlin: De Gruyter Mouton.

Spencer, Andrew. b. Verbal clitics in Bulgarian: A Paradigm Function approach. In Birgit Gerlach & Janet Grijzenhout (eds.), Clitics in phonology, morphology and syntax, –. Amsterdam/Philadelphia: John Benjamins. Spencer, Andrew. . Periphrastic paradigms in Bulgarian. In Uwe Junghanns & Luka Szucsich (eds.), Syntactic structures and morphological information, –. Berlin: De Gruyter Mouton. Spencer, Andrew. a. Extending deponency. In Matthew Baerman, Greville G. Corbett, Dunstan Brown, & Andrew Hippisley (eds.), Deponency and morphological mismatches, –. Oxford: British Academy and Oxford University Press. Spencer, Andrew. b. Generalized Paradigm Function Morphology—A synopsis. In Alexandra Galani & Beck Sinar (eds.), Papers from the York-Essex Morphology Meeting, —York Papers in Linguistics .. –. Spencer, Andrew. c. Inflecting clitics in Generalized Paradigm Function Morphology. Lingue e Linguaggio IV(). –. Spencer, Andrew. d. Word-formation and syntax. In Pavol Štekauer & Rochelle Lieber (eds.), Handbook of word-formation, –. Dordrecht: Springer. Spencer, Andrew. . Factorizing lexical relatedness. In Susan Olsen (ed.), New impulses in wordformation, –. Hamburg: Buske. Spencer, Andrew. . Identifying stems. Word Structure (). –. Spencer, Andrew. a. Lexical relatedness: A paradigm-based model. Oxford: Oxford University Press. Spencer, Andrew. b. Selkup denominal adjectives: A generalized paradigm function analysis. In Nabil Hathout, Fabio Montermini, & Jesse Tseng (eds.), Morphology in Toulouse: Selected proceedings of Décembrettes  (Toulouse, – December ), –. Munich: LINCOM. Spencer, Andrew. . Morphology: An overview of central concepts. In Louisa Sadler & Andrew Spencer (eds.), Projecting morphology. Stanford, CA: CSLI. Spencer, Andrew. . Stems, the morphome, and meaning-bearing inflection. In Ana R. 
Luís & Ricardo Bermúdez-Otero (eds.), The morphome debate, –. Oxford: Oxford University Press. Spencer, Andrew & Ana Luís. a. Clitics. An introduction. Cambridge: Cambridge University Press. Spencer, Andrew & Ana Luís. b. The canonical clitic. In Dunstan Brown, Marina Chumakina, & Greville G. Corbett (eds.), Canonical morphology and syntax, –. Oxford: Oxford University Press. Spencer, Andrew & Gregory Stump. . Hungarian pronominal case and the dichotomy of content and form in inflectional morphology. Natural Language and Linguistic Theory . –. Spencer, Andrew & Arnold Zwicky (eds.). . The handbook of morphology. London: Blackwell. Sproat, Richard. . On deriving the lexicon. Cambridge, MA: MIT PhD dissertation. Sproat, Richard. . Bracketing paradoxes, cliticization and other topics: The mapping between syntactic and phonological structure. In Martin Everaert, Arnold Evers, Riny Huybregts, & Mieke Trommelen (eds.), Morphology and modularity, –. Dordrecht: Foris. Sproat, Richard. . Unhappier is not a ‘bracketing paradox’. Linguistic Inquiry . –. Stampe, David. . The acquisition of phonetic representation. In Robert I. Binnick, Alice Davison, Georgia Green, & Jerry L. Morgan (eds.), Papers from the Fifth Regional Meeting of the Chicago Linguistic Society, –. Chicago: Chicago Linguistic Society. Stampe, David. . A dissertation on natural phonology. Chicago: University of Chicago PhD dissertation [appeared in New York: Garland, ]. Starke, Michal. . Nanosyntax: A short primer to a new approach to language. Nordlyd: Tromsø University Working Papers on Language & Linguistics (). –. Special issue on Nanosyntax edited by Peter Svenonius, Gillian Ramchand, Michal Starke, & Knut Tarald Taraldsen. Starke, Michal. . Towards elegant parameters: Language variation reduces to the size of lexically stored trees. Available at http://ling.auf.net/lingbuzz/. Steele, Susan. . 
Word order variation: A typological study. In Joseph H. Greenberg, Charles A. Ferguson, & Edith A. Moravcsik (eds.), Universals of Human Language: IV: Syntax, –. Stanford, CA: Stanford University Press.

Steele, Susan. . Towards a theory of morphological information. Language . –. Steinkrüger, Patrick. . Morphological processes of word-formation in Chabacano (Philippine Spanish Creole). In Ingo Plag (ed.), Phonology and morphology of creole languages. Tübingen: Niemeyer, –. Štekauer, Pavol. a. English word formation. Tübingen: Gunter Narr. Štekauer, Pavol. b. Beheading the word: Please stop the execution. Folia Linguistica (–). –. Štekauer, Pavol. a. Onomasiological approach to word-formation. In Pavol Štekauer & Rochelle Lieber (eds.), Handbook of word-formation, –. Dordrecht: Springer. Štekauer, Pavol. b. Meaning predictability in word formation. Amsterdam/Philadelphia: John Benjamins. Štekauer, Pavol. . Derivational paradigms. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of derivational morphology, –. Oxford: Oxford University Press. Stemberger, Joseph P. & Brian MacWhinney. . Are inflected forms stored in the lexicon? In Michael Hammond & Michael Noonan (eds.), Theoretical morphology: Approaches in modern linguistics, –. London: Academic Press. Stephany, Ursula & Maria D. Voeikova. . Development of nominal inflection in first language acquisition: A cross-linguistic perspective. Berlin: De Gruyter Mouton. Stephens, Carolyn J. . Phonological parameters of indigenous and ASL country name-signs. Journal of Interpretation (). –. Steriade, Donca. . Reduplication and syllable transfer in Sanskrit and elsewhere. Phonology . –. Stevens, Christopher M. . Revisiting the affixoid debate: On the grammaticalization of the word. In Torsten Leuschner, Tanja Mortelmans, & Sarah De Groodt (eds.), Grammatikalisierung im Deutschen, –. Berlin: De Gruyter Mouton. Stewart, Thomas. . Contemporary morphological theories. A user’s guide. Edinburgh: Edinburgh University Press. Stewart, Thomas & Gregory Stump. . 
Paradigm Function Morphology and the morphology/ syntax interface. In Gillian Ramchand & Charles Reiss (eds.), The Oxford handbook of linguistic interfaces, –. Oxford: Oxford University Press. Stokoe, Wiliam C. Jr. . Sign language structure: An outline of the visual communication systems of the American deaf (Studies in linguistics: Occasional papers ). Buffalo: Dept. of Anthropology and Linguistics, University of Buffalo. Stolz, Thomas. . Pleonastic morphology dies hard: Change and variation of definiteness inflection in Lithuanian. In Franz Rainer, Wolfgang U. Dressler, Dieter Kastofsky, & Hans Christian Luschützky (eds.), Variation and change in morphology, –. Amsterdam/Philadelphia: John Benjamins. Stonham, John. . Combinatorial morphology. Amsterdam/Philadelphia: John Benjamins. Stork, Yvonne. . Die Sprachökonomie im  Jahrhundert. Zur Ausdifferenzierung eines Konzepts. In Gerda Hassler & Gemma Volkmann (eds.), History of linguistics in texts and concepts. Geschichte der Sprachwissenschaft in Texten und Konzepten, vol. , –. Münster: Nodus. Stump, Gregory. . Breton inflection and the split morphology hypothesis. In Randall Hendrick (ed.), Syntax and semantics : The syntax of the Modern Celtic languages, –. San Diego, CA: Academic Press. Stump, Gregory. . A paradigm-based theory of morphosemantic mismatches. Language . –. Stump, Gregory. . On the theoretical status of position class restrictions on inflectional affixes. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer. Stump, Gregory. a. On rules of referral. Language (). – [reprinted in Francis Katamba (ed.) . Morphology: Critical concepts in linguistics. London: Routledge]. Stump, Gregory. b. Position classes and morphological theory. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer.

Stump, Gregory. . The uniformity of head marking in inflectional morphology. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer. Stump, Gregory. . Template morphology and inflectional morphology. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer. Stump, Gregory. . Inflection. In Andrew Spencer & Arnold M. Zwicky (eds.), The handbook of morphology, –. Oxford: Blackwell. Stump, Gregory. . Inflectional morphology: A theory of paradigm structure. Cambridge: Cambridge University Press. Stump, Gregory. . Morphological and syntactic paradigms: Arguments for a theory of paradigm linkage. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer. Stump, Gregory. . Content-paradigms and form-paradigms. Paper presented at the CID/M Meeting on Possible Words, University of Surrey,  July . Stump, Gregory. a. Referrals and morphomes in Sora verb inflection. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Springer. Stump, Gregory. b. Rules about paradigms. In C. Orhan Orgun & Peter Sells (eds.), Morphology and the web of grammar: Essays in memory of Steven G. Lapointe, –. Stanford, CA: CSLI. Stump, Gregory. c. Some criticisms of Carstairs-McCarthy’s conclusions. In Geert Booij & Jaap van Marle (eds.), Yearbook of morphology , –. Dordrecht: Springer. Stump, Gregory. d. Word-formation and inflectional morphology. In Pavol Štekauer & Rochelle Lieber (eds.), Handbook of word-formation, –. Dordrecht: Springer. Stump, Gregory. a. Template morphology. In Keith Brown (ed.), Encyclopedia of language and linguistics, vol. , –. Oxford: Elsevier. Stump, Gregory. b. Heteroclisis and paradigm linkage. Language (). –. Stump, Gregory. . A non-canonical pattern of deponency and its implications. In Matthew Baerman, Greville G. 
Corbett, Dunstan Brown, & Andrew Hippisley (eds.), Deponency and morphological mismatches, –. Oxford: British Academy and Oxford University Press. Stump, Gregory. . Cells and paradigms in inflectional semantics. In Erhard Hinrichs & John Nerbonne (eds.), Theory and evidence in semantics, –. Stanford, CA: CSLI. Stump, Gregory. . Interactions between defectiveness and syncretism. In Matthew Baerman, Greville G. Corbett, & Dunstan Brown (eds.), Defective paradigms: Missing forms and what they tell us, –. Oxford: British Academy and Oxford University Press. Stump, Gregory. . The formal and functional architecture of inflectional morphology. In Angela Ralli, Geert Booij, Sergio Scalise, & Athanasios Karasimos (eds.), Morphology and the architecture of grammar: On-line proceedings of the Eighth Mediterranean Morphology Meeting (MMM), –. Stump, Gregory. . Periphrasis in the Sanskrit verb system. In Marina Chumakina & Greville G. Corbett (eds.), Periphrasis: The role of syntax and morphology in paradigms, –. Oxford: Oxford University Press. Stump, Gregory. a. Morphosyntactic property sets at the interface of inflectional morphology, syntax and semantics. Lingvisticæ Investigationes (). –. Special issue on Morphology and its interfaces: Syntax, semantics and the lexicon edited by Dany Amiot, Delphine Tribout, Natalia Grabar, Cédric Patin, & Fayssal Tayalati. Stump, Gregory. b. Polyfunctionality and inflectional economy. Linguistic Issues in Language Technology (). –. Stump, Gregory. . The interface of semantic interpretation and inflectional realization. In Laurie Bauer, Lívia Körtvélyessy, & Pavol Štekauer (eds.), Semantics of complex words, –. Dordrecht: Springer. Stump, Gregory. a. Inflectional paradigms: Content and form at the syntax–morphology interface. Cambridge: Cambridge University Press. Stump, Gregory. b. Morphomic categories and the realization of morphosyntactic properties. 
In Ana Luís & Ricardo Bermúudez-Otero (eds.), The morphome debate, –. Oxford: Oxford University Press.

Stump, Gregory. c. Paradigms at the interface of a lexeme’s syntax and semantics with its inflectional morphology. In Daniel Siddiqi & Heidi Harley (eds.), Morphological metatheory, –. Amsterdam/Philadelphia: John Benjamins. Stump, Gregory & Raphael A. Finkel. . Stem alternations and principal parts in French verb inflection. Paper presented at Décembrettes : Colloque International de Morphologie ‘Morphologie et classes flexionnelles’, Université de Bordeaux, France. – December . Stump, Gregory & Raphael A. Finkel. . Morphological typology: From word to paradigm. Cambridge: Cambridge University Press. Supalla, Samuel J. . The book of name signs: Naming in American Sign Language. San Diego, CA: DawnSign Press. Supalla, Ted. . The classifier system in American Sign Language. In Colette Craig (ed.), Noun classes and categorization, –. Amsterdam/Philadelphia: John Benjamins. Supalla, Ted & Elissa Newport. . How many seats in a chair? The derivation of nouns and verbs in American Sign Language. In Patricia Siple (ed.), Understanding language through sign language research, –. New York, NY: Academic Press. Suthar, Babubhai Kohyabhai. . Agreement in Gujarati. Philadelphia: University of Pennsylvania PhD dissertation. Sutton-Spence, Rachel & Donna Jo Napoli. . Humour in sign languages: The linguistic underpinnings. Dublin: Centre for Deaf Studies, Trinity College Dublin. Sutton-Spence, Rachel & Bencie Woll. . The linguistics of British Sign Language: An introduction. Cambridge: Cambridge University Press. Sutton-Spence, Rachel, Bencie Woll, & Lorna Allsop. . Variation and recent change in fingerspelling in British Sign Language. Language Variation and Change (). –. Švedova, Natalija Ju (ed.). . Russkaja grammatika. Moscow: Akademija Nauk SSSR. Svenonius, Peter. . Interpreting uninterpretable features. Linguistic Analysis (/). –. Svenonius, Peter. . Spans and words. 
In Daniel Siddiqi & Heidi Harley (eds.), Morphological metatheory, –. Amsterdam/Philadelphia: John Benjamins. Svenonius, Peter, Gillian Ramchand, Michal Starke, & Knut Tarald Taraldsen (eds.). . Nanosyntax. Special Issue of Nordlyd: Tromsø University Working Papers on Language & Linguistics (). –. Taft, Marcus. . Lexical access via an orthographic code: The basic orthographic syllabic structure (BOSS). Journal of Verbal Learning and Verbal Behavior (). –. Taft, Marcus. . Prefix stripping revisited. Journal of Verbal Learning and Verbal Behavior (). –. Taft, Marcus. . Morphological decomposition and the reverse base frequency effect. Quarterly Journal of Experimental Psychology A. –. Taft, Marcus & Kenneth I. Forster. . Lexical storage and retrieval of prefixed words. Journal of Verbal Learning and Verbal Behavior (). –. Taft, Marcus & Kenneth I. Forster. . Lexical storage and retrieval of polymorphemic and polysyllabic words. Journal of Verbal Learning and Verbal Behavior (). –. Taft, Marcus, Xiaoping Zhu, & Danling Peng. . Positional specificity of radicals in Chinese character recognition. Journal of Memory and Language . –. Talmy, Leonard. . The relation of grammar to cognition: A synopsis. In David Waltz (ed.), Theoretical issues in natural language processing , –. New York: Association for Computing Machinery [revised version in Talmy, Leonard. . Toward a cognitive semantics. Cambridge, MA: MIT Press]. Taub, Sarah F. . Language from the body: Iconicity and metaphor in American Sign Language. Cambridge: Cambridge University Press. Taylor, John R. . Cognitive grammar. Oxford: Oxford University Press. Taylor, John R. . Linguistic categorization: Prototypes in linguistic theory, rd edn. Oxford: Oxford University Press.

Taylor, John R. . The Oxford handbook of the word. Oxford: Oxford University Press. Teržan-Kopecky, Karmen (ed.). . Zbornik referatov II. Mednarodnega Simpozija o Teoriji Naravnosti: . do . Maj  [Proceedings of the nd International Symposium on Naturalness Theory]. Maribor: Univerza v Mariboru, Pedagoška Fakulteta. Testelets, Yakov G. . Russian works on linguistic typology in the –s. In Martin Haspelmath, Ekkehard König, Wulf Oesterreicher, & Wolfgang Raible (eds.), Language typology and language universals. An international handbook, vol. , –. Berlin: De Gruyter Mouton. Theakston, Anna, Elena Lieven, & Michael Tomasello. . The role of input in the acquisition of third-person singular verbs in English. Journal of Speech, Language, and Hearing Research . –. Thomas, Michael S. C. & Annette Karmiloff-Smith. . Modelling language acquisition in atypical phenotypes. Psychological Review (). –. Thomason, Sarah. . Language contact. Edinburgh: Edinburgh University Press. Thornton, Anna M. . Sui deverbali italiani in -mento e -zione (I). Archivio Glottologico Italiano (). –. Thornton, Anna M. . Some reflections on gender and inflectional class assignment in Italian. In Chris Schaner-Wolles, John Rennison, & Friedrich Neubarth (eds.), Naturally! Linguistic studies in honour of Wolfgang Ulrich Dressler presented on the occasion of his th birthday, –. Torino: Rosenberg & Sellier. Thornton, Anna M. . Overabundance (multiple forms realizing the same cell): A non-canonical phenomenon in Italian verb morphology. In Martin Maiden, John Charles Smith, Maria Goldbach, & Marc-Olivier Hinzelin (eds.), Morphological autonomy: Perspectives from Romance inflectional morphology, –. Oxford: Oxford University Press. Thornton, Anna M. . Reduction and maintenance of overabundance: A case study on Italian verb paradigms. Word Structure (). –. Thornton, Anna M. . 
La non canonicità del tipo It. braccio//braccia/bracci: Sovrabbondanza, difettività o iperdifferenziazione? Studi di grammatica italiana –. –. Thornton, Anna M. . Un capitolo di storia della terminologia grammaticale italiana: il termine sovrabbondante. In Francesco Dedè (ed.), Categorie grammaticali e classi di parole. Statuto e riflessi metalinguistici, –. Roma: Il Calamo. Timmer, Kalinka & Niels O. Schiller. . Neural correlates reveal sub-lexical orthography and phonology during reading aloud: A review. Frontiers in Psychology . . Togeby, Knud. . Structure immanente de la langue française. Paris: Librairie Larousse. Tokuda, Masaaki & Manabu Okumura. . Towards automatic translation from Japanese into Japanese Sign Language. In Vibhu O. Mittal, Holly A. Yanco, John Aronis, & Richard Simpson (eds.), Assistive technology and artificial intelligence, –. Berlin/Heidelberg: Springer. Toman, Jindrich. . Wortsyntax. Tübingen: Max Niemeyer. Tomasello, Michael. . Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press. Tonelli, Livia & Wolfgang U. Dressler (eds.). . Natural Morphology: Perspectives for the nineties. Selected papers from the workshop at the fifth international morphology meeting. Padova: Unipress. Trager, George L. . The verb morphology of spoken French. Language . –. Traugott, Elizabeth & Graeme Trousdale. . Constructionalization and constructional changes. Oxford: Oxford University Press. Travis, Lisa. . Parameters and effects of word order variation. Cambridge, MA: MIT PhD dissertation. Traxler, Matthew & Morton Ann Gernsbacher (eds.). . Handbook of psycholinguistics. San Diego, CA: Academic Press. Triantaphyllides, Manolis. . I vuleftina ke o sximatismos ton thilikon epangelmatikon usiastikon [Vuleftina and the formation of female professional nouns]. In Apanta Triantaphyllidi, vol. , –. 
Thessaloniki: Manolis Triantaphyllides Foundation. Trips, Carola. . Lexical semantics and diachronic morphology: The development of -hood, ‑dom and ‑ship in the history of English. Tübingen: Niemeyer.

Trnka, Bohumil et al. . Prague Structural Linguistics. In Josef Vachek (ed.), A Prague School Reader in Linguistics, –. Bloomington: Indiana University Press. Trommer, Jochen. . Morphology consuming syntax’s resources: Generation and parsing in a Minimalist version of Distributed Morphology. In Christian Retoré & Edward Stabler (eds.), Proceedings of the ESSLI Workshop on Resource Logics and Minimalist Grammars. Utrecht: Association for Logic, Language, and Information. Trommer, Jochen. . Distributed optimality. Potsdam: University of Potsdam PhD dissertation. Trommer, Jochen. . Phonological aspects of Western Nilotic mutation morphology. Leipzig: University of Leipzig Habilitation thesis. Trommer, Jochen (ed.). . The morphology and phonology of exponence. Oxford: Oxford University Press. Trousdale, Graeme. . A constructional approach to lexicalization processes in the history of English: Evidence from possessive constructions. Word Structure (). –. Trubetzkoy, Nikolay S. . Principles of phonology. English translation by Christiane A. M. Baltaxe. Berkeley & Los Angeles: University of California Press. Trudgill, Peter. . Sociolinguistic typology and complexification. In Geoffrey Sampson, David Gil, & Peter Trudgill (eds.), Language complexity as an evolving variable, –. Oxford: Oxford University Press. Tseng, Jesse. . Edge features and French liaison. In Jong-Bok Kim & Stephen Wechsler (eds.), Proceedings of HPSG . Stanford, CA: CSLI. Tsimpli, Ianthi Maria & Maria Dimitrakopoulou. . The Interpretability hypothesis: Evidence from wh- interrogatives in second language acquisition. Second Language Research (). –. Tucker, Matthew A. . Iraqi Arabic verbs: The need for roots and prosody. In Mary Byram Washburn, Katherine McKinney-Bock, Erika Varis, Ann Sawyer, & Barbara Tomaszewicz (eds.), Proceedings of the th West Coast Conference on Formal Linguistics, –. Somerville, MA: Cascadilla Press. 
Tuggy, David. . The affix-stem distinction: A Cognitive Grammar analysis of data from Orizaba Náhuatl. Cognitive Linguistics . –. Tuggy, David. . Dissimilation in Mösiehuali (Tetelcingo Nahuatl): A Cognitive Grammar perspective. In Augusto Soares da Silva, Amadeu Torres, & Miguel Gonçalves (eds.), Linguagem, cultura e cognição: Estudos de linguística cognitiva, vol. I, –. Coimbra: Almedina. Tumbahang, Govinda Bahadur. . A descriptive grammar of Chhatthare Limbu. Kirtipur: Tribhuvan University PhD dissertation. Tumtavikul, Apiluck, Chirapha Niwataphant, & Philipp Dill. . Classifiers in Thai Sign Language. SKASE Journal of Theoretical Linguistics (). –. Tyler, Lorraine K., William D. Marslen-Wilson, & Rachelle Waksler. . Representation and access of derived words in English. In Gerry T. M. Altmann & Richard Shillcock (eds.), Cognitive models of speech processing: The second Sperlonga meeting, –. Hove: Erlbaum. Uchimoto, Kiyotaka, Satoshi Sekine, & Hitoshi Isahara. . The unknown word problem: A morphological analysis of Japanese using maximum entropy aided by a dictionary. In Lillian Lee & Donna Harman (eds.), Proceedings of the  Conference on Empirical Methods in Natural Language Processing, –. Association for Computational Linguistics. Ullman, Michael T. . The neural basis of lexicon and grammar in first and second language: The declarative/procedural model. Bilingualism: Language and Cognition (). –. Ullman, Michael T. . Contributions of memory circuits to language: The declarative/procedural model. Cognition . –. Ullman, Michael T. . A cognitive neuroscience perspective on second language acquisition: The declarative/procedural model. In Cristina Sanz (ed.), Mind and context in adult second language acquisition: Methods, theory and practice, –. Washington, DC: Georgetown University Press. Urbanczyk, Suzanne. . Patterns of reduplication in Lushootseed.
Amherst: University of Massachusetts-Amherst PhD dissertation.

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi





Urbanczyk, Suzanne. . Reduplicative form and the root-affix asymmetry. Natural Language and Linguistic Theory . –. Urbanczyk, Suzanne. . Reduplication. In Paul de Lacy (ed.), The Cambridge handbook of phonology, –. Cambridge: Cambridge University Press. Ussishkin, Adam P. . The emergence of fixed prosody. Santa Cruz: University of CaliforniaSanta Cruz PhD dissertation. Ussishkin, Adam P. . Templatic effects as fixed prosody: The verbal system in Semitic. In Jacqueline Lecarme (ed.), Research in Afroasiatic grammar II, –. Amsterdam/Philadelphia: John Benjamins. Vaan, Laura de, Robert Schreuder, & Harald Baayen. . Regular morphologically complex neologisms leave detectable traces in the mental lexicon. The Mental Lexicon (). –. Vachek, Josef. . The Linguistic School of Prague: An introduction to its theory and practice. Bloomington: Indiana University Press. Vainikka, Anne & Martha Young-Scholten. . Partial transfer, not partial access. Behavioral and Brain Sciences . –. Vainikka, Anne & Martha Young-Scholten. . The acquisition of German: Introducing organic grammar. Berlin: De Gruyter Mouton. Vajda, Edward. . Yeniseian. In Rochelle Lieber & Pavol Štekauer (eds.), The Oxford handbook of derivational morphology, –. Oxford: Oxford University Press. Valdman, Albert. . Le créole: structure, statut et origine. Paris: Klincksieck. Van de Velde, Mark. . The Bantu connective construction. In Anne Carlier & Jean-Christophe Verstraete (eds.), The genitive, –. Amsterdam/Philadelphia: John Benjamins. Van den Toorn, Maarten C. . Van godevolen tot computergestuurd. Spektator (). –. Van der Lely, Heather & Michael Ullman. . Past tense morphology in specifically language impaired and normally developing children. Language and Cognitive Processes . –. Van Jaarsveld, Henk J. & Gilbert E. Rattink. . 
Frequency effects in the processing of lexicalized and novel nominal compounds. Journal of Psycholinguistic Research . –. Van Valin, Robert & Randy LaPolla. . Syntax: Structure, meaning and function. Cambridge: Cambridge University Press. Vanderwende, Lucy. . Algorithm for automatic interpretation of noun sequences. In Proceedings of the th International Conference on Computational Linguistics (COLING-), –. Kyoto, Japan. Veenstra, Tonjes. . Serial verbs in Saramaccan: Predication and creole genesis (HIL dissertations ). Dordrecht: ICG Printing. Veenstra, Tonjes. . What verbal morphology can tell us about creole genesis: The case of French-related creoles. In Ingo Plag (ed.), The morphology of creole languages. Special section of Yearbook of morphology , –. Dordrecht: Foris. Vennemann, Theo. . Phonetic analogy and conceptual analogy. In Theo Vennemann & Terence H. Wilbur (eds.), Schuchardt, the Neogrammarians, and the transformational theory of phonological change, –. Frankfurt: Athenäum. Vennemann, Theo. . Theories of linguistic preferences as a basis for linguistic explanations. Folia Linguistica Historica . –. Vennemann, Theo. . Language change as language improvement. In Charles Jones (ed.), Historical linguistics: Problems and perspectives, –. London: Longman. Verdonschot, Rinus G., Renee Middelburg, Saskia E. Lensink, & Niels O. Schiller. . Morphological priming survives a language switch. Cognition . –. Verhoeven, Ben, Walter Daelemans, Menno van Zaanen, & Gerhard van Huyssteen. . ComAComA : Proceedings of the First Workshop on Computational Approaches to Compound Analysis. Held at the th International Conference on Computational Linguistics (COLING ). Dublin, Ireland, August , . Glasnevin: Dublin City University (DCU) & Stroudsburg, PA: Association for Computational Linguistics. Vermeerbergen, Myriam, Lorraine Leeson, & Onno Alex Crasborn (eds.). .
Simultaneity in signed languages: Form and function. Amsterdam/Philadelphia: John Benjamins.






Veselinova, Ljuba. a. Suppletion according to tense and aspect. In Matthew Dryer, Martin Haspelmath, David Gil, & Bernard Comrie (eds.), World atlas of language structures, –. Oxford: Oxford University Press. Veselinova, Ljuba. b. Verbal number and suppletion. In Matthew Dryer, Martin Haspelmath, David Gil, & Bernard Comrie (eds.), World atlas of language structures, –. Oxford: Oxford University Press. Veselinova, Ljuba N. . Suppletion in verb paradigms: Bits and pieces of a puzzle. Stockholm: Stockholm University PhD dissertation. Veselinova, Ljuba N. . Suppletion in verb paradigms: Bits and pieces of the puzzle. Amsterdam/ Philadelphia: John Benjamins. Vigliocco, Gabriella, Pamela Perniss, & David Vinson. . Language as a multimodal phenomenon: Implications for language learning, processing and evolution. Philosophical Transactions of the Royal Society B: Biological Sciences (). . Voeikova, Maria D. & Wolfgang U. Dressler (eds.). . Pre- and protomorphology, Munich: LINCOM. Volpe, Mark. . Japanese morphology and its theoretical consequences: Derivational morphology in Distributed Morphology. New York: Stony Brook University PhD dissertation. Vos, Connie de. . Mouth gestures in Kata Kolok. Talk presented at SignNonmanuals—Workshop on functions of nonmanuals in sign languages. Centre for Sign Language and Deaf Communication (ZGH), Alpen-Adria-University Klagenfurt, Austria, – April . Waksler, Rachelle. . Morphological systems and structure in language production. In Linda R. Wheeldon (ed.), Aspects of language production, –. Hove: Psychology Press/Taylor & Francis. Wallin, Lars. . Polysynthetic signs in Swedish Sign Language. Stockholm: Stockholm University PhD dissertation. Wallingford, Sophia. . The pluralisation of nouns in New Zealand Sign Language. Wellington Working Papers in Linguistics . 
– [http://www.victoria.ac.nz/lals/resources/publications/wwp/WWPv.pdf, accessed  August ]. Walther, Géraldine. . Fitting into morphological structure: Accounting for Sorani Kurdish endoclitics. In Angela Ralli, Geert Booij, Sergio Scalise, & Athanasios Karasimos (eds.), Morphology and the architecture of grammar: On-line proceedings of the Eighth Mediterranean Morphology Meeting (MMM), –. Walther, Géraldine. . Sur la canonicité en morphologie. Perspective empirique, formelle et computationnelle. Paris: Université Paris Diderot (Paris ) PhD dissertation. Ward, Gregory, Richard Sproat, & Gail McKoon. . A pragmatic analysis of so-called anaphoric islands. Language (). –. Warren, Beatrice. . Semantic patterns of noun-noun compounds. Göteborg: Acta Universitatis Gothoburgensis. Wasow, Thomas. . Transformations and the lexicon. In Peter Culicover, Thomas Wasow, & Adrian Akmajian (eds.), Formal syntax, –. New York: Academic Press. Weinberger, Stephen. . Theoretical foundations of second language phonology. Seattle: University of Washington PhD dissertation. Weinreich, Uriel, William Labov, & Marvin Herzog. . Empirical foundations for a theory of language change. In Winfred P. Lehmann & Yakov Malkiel (eds.), Directions for historical linguistics, –. Austin: University of Texas Press. Weist, Richard M. . Tense and aspect. In Paul Fletcher & Michael Garman (eds.), Language acquisition, –. Cambridge: Cambridge University Press. Wescoat, Michael. . On lexical sharing. Stanford: Stanford University PhD dissertation. Wexler, Kenneth. . Optional infinitives, head movement and the economy of derivations in child grammar. In David Lightfoot & Norbert Hornstein (eds.), Verb movement, –. Cambridge: Cambridge University Press. Wexler, Kenneth. . Very early parameter setting and the Unique Checking Constraint: A new explanation of the optional infinitive stage. Lingua . –.






White, Lydia. . Second language acquisition and universal grammar. Cambridge: Cambridge University Press. Whitney, William Dwight. . Sanskrit grammar, nd edn. Cambridge, MA: Harvard University Press. Whorf, Benjamin Lee. . Grammatical categories. Language . –. Wierzbicka, Anna. . Dictionaries vs. encyclopaedias: How to draw the line. In Philip W. Davis (ed.), Alternative linguistics: Descriptive and theoretical modes, –. Amsterdam/Philadelphia: John Benjamins. Wiese, Richard. . Phrasal compounds and the theory of word syntax. Linguistic Inquiry . –. Wijk, Judith van. . The acquisition of the Dutch plural. Utrecht: Utrecht University PhD dissertation. Wilbur, Ronnie B. . The phonology of reduplication. Champaign-Urbana: University of Illinois PhD dissertation (available from Indiana University Linguistics Club, Bloomington, IN). Wilbur, Ronnie B. . Phonological and prosodic layering of nonmanuals in American Sign Language. In Karen Emmorey & Harlan Lane (eds.), The signs of language revisited: An anthology to honor Ursula Bellugi and Edward Klima, –. Mahwah, NJ: Erlbaum. Wilbur, Ronnie B. . Productive reduplication in a fundamentally monosyllabic language. Language Sciences (). –. Wilcox, Phyllis Perrin. . Metaphor in American Sign Language. Washington, DC: Gallaudet University Press. Wilcox, Sherman. . The phonetics of fingerspelling, vol. . Amsterdam/Philadelphia: John Benjamins. Wilcox, Sherman. . Routes from gesture to language. In Elena Pizzuto, Paola Pietrandrea, & Raffaele Simone (eds.), Verbal and signed languages: Comparing structures, constructs and methodologies, –. Berlin: De Gruyter Mouton. Williams, Edwin. . On the notions ‘lexically related’ and ‘head of a word’. Linguistic Inquiry (). –. Williams, Edwin. . Grammatical relations. Linguistic Inquiry (). –. Williams, Edwin. . Representation theory. Cambridge, MA: MIT Press. 
Wilson, Margaret. . The case for sensorimotor coding in working memory. Psychonomic Bulletin and Review (). –. Wilss, Wolfram. . Wortbildungstendenzen in der deutschen Gegenwartssprache: Theoretische Grundlagen-Beschreibung-Anwendung (Tübinger Beiträge zur Linguistik ). Tübingen: Gunter Narr Verlag. Wisniewski, Edward J. . When concepts combine. Psychonomic Bulletin & Review . –. Wohlgemuth, Jan & Michael Cysouw (eds.). . Rethinking universals. How rarities affect linguistic theories. Berlin: De Gruyter Mouton. Woll, Bencie. . Visual imagery and metaphor in British Sign Language. In Wolf Paprotté & René Dirven (eds.), The ubiquity of metaphor: Metaphor in language and thought, –. Amsterdam/Philadelphia: John Benjamins. Woodward, James & Susan De Santis. . Negative incorporation in French and American Sign Language. Language in Society (). –. Wouk, Fay. . The syntax of repair in Indonesian. Discourse Studies (). –. Wunderlich, Dieter. . A minimalist analysis of German verb morphology. Arbeiten des SFB  (Theorie des Lexikons) . University of Düsseldorf. Wunderlich, Dieter. . Minimalist morphology: The role of paradigms. In Geert Booij & Jaap van Marle (eds.), Yearbook of Morphology , –. Dordrecht: Kluwer. Wunderlich, Dieter & Ray Fabri. . Minimalist morphology: An approach to inflection. Zeitschrift für Sprachwissenschaft (). –. Wurzel, Wolfgang U. . Zur Stellung der Morphologie im Sprachsystem. Linguistische Studien A (Arbeitsberichte). –.






Wurzel, Wolfgang U. . Flexionsmorphologie und Natürlichkeit. Berlin: Akademie-Verlag [English translation: Inflectional morphology and naturalness. Dordrecht: Kluwer, ]. Wurzel, Wolfgang U. . System-dependent morphological naturalness in inflection. In Wolfgang U. Dressler, Willi Mayerthaler, Oskar Panagl, & Wolfgang U. Wurzel (eds.), Leitmotifs in Natural Morphology, –. Amsterdam/Philadelphia: John Benjamins. Wurzel, Wolfgang U. . Derivation, Flexion und Blockierung. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung (). –. Wurzel, Wolfgang U. . Inflectional class markedness. In Olga M. Tomić (ed.), Markedness in synchrony and diachrony, –. Berlin: De Gruyter Mouton. Wurzel, Wolfgang U. a. Natural Morphology. In Ron E. Asher (ed.), The encyclopedia of language and linguistics, vol. . Oxford/New York/Seoul/Tokyo: Elsevier. Wurzel, Wolfgang U. b. Grammatisch initiierter Wandel. Bochum: Brockmeyer. Wurzel, Wolfgang U. . On the similarities and differences between inflectional and derivational morphology. Sprachtypologie und Universalienforschung (). –. Wurzel, Wolfgang U. . Natürlicher grammatischer Wandel, ‘unsichtbare Hand’ und Sprachökonomie. Wollen wir wirklich so Grundverschiedenes? In Thomas Birkmann, Heinz Klingenberg, Damaris Nübling, & Elke Ronneberger-Sibold (eds.), Vergleichende germanische Philologie und Skandinavistik. Festschrift für Otmar Werner, –. Tübingen: Niemeyer. Wurzel, Wolfgang U. . On markedness. Theoretical Linguistics (). –. Wurzel, Wolfgang U. . Is language change directed? A contribution to the theory of change. In Chris Schaner-Wolles, John Rennison, & Friedrich Neubarth (eds.), Naturally! Linguistic studies in honour of Wolfgang Ulrich Dressler presented on the occasion of his th birthday, –. Torino: Rosenberg & Sellier. Yakpo, Kofi. . Reiteration in Pichi: Forms, functions and areal-typological perspectives. 
In Enoch Aboh, Norval Smith, & Anne Zribi-Hertz (eds.), The morphosyntax of reiteration in creole and non-creole languages, –. Amsterdam/Philadelphia: John Benjamins. Yang, Charles. . On productivity. Yearbook of language variation . –. Yarlett, Daniel. . Language learning through similarity-based generalization. Stanford: Stanford University PhD dissertation. Yip, Moira. . Identity avoidance in phonology and morphology. In Steven G. Lapointe, Diane K. Brentari, & Patrick M. Farrell (eds.), Morphology and its relation to phonology and syntax, –. Stanford, CA: CSLI. Young, Robert. . The primary verb in Bena-bena. In Alan Pence (ed.), Verb studies in five New Guinea languages [SIL Publications in Linguistics ], –. Norman, OK: Summer Institute of Linguistics. Yu, Alan C. L. . A natural history of infixation. Oxford: Oxford University Press. Yvon, François & Vito Pirrelli. . The hidden dimension: A paradigmatic view of data-driven NLP. Journal of Experimental and Theoretical Artificial Intelligence . –. Zaliznjak, Andrej A. . Grammatičeskij slovar’ russkogo jazyka. Moscow: Russkij jazyk. Zasorina, Lidija N. . Častotnyj slovar’ russkogo jazyka. Moscow: Russkij jazyk. Zeller, Jochen. . Moved preverbs in German: Displaced or misplaced? In Geert Booij & Ans van Kemenade (eds.), Preverbs, –. Special Issue of Yearbook of Morphology . Dordrecht: Kluwer. Zepeda, Ofelia. . A Papago grammar. Tucson: University of Arizona Press. Zeshan, Ulrike. . Sign language in Indo-Pakistan: A description of a signed language. Amsterdam/Philadelphia: John Benjamins. Zeshan, Ulrike. . Towards a notion of ‘word’ in sign languages. In R. M. W. Dixon & Alexandra Y. Aikhenvald (eds.), Word: A cross-linguistic typology, –. Cambridge: Cambridge University Press. Zhang, Xiang & Yann LeCun. . Text understanding from scratch. arXiv preprint arXiv: .. Zimmer, Karl. . 
Affixal negation in English and other languages: An investigation of restricted productivity. New York: Supplement to Word, Monograph .






Zipf, George K. . Human behavior and the principle of least effort: An introduction to human ecology. Cambridge, MA: Addison-Wesley Press. Zirkel, Linda. . Prefix combinations in English: Structural and processing factors. Morphology (). –. Zobl, Helmut & Juana Liceras. . Functional categories and acquisition orders. Language Learning . –. Zubicaray, Greig I. de & Katie L. McMahon. . Auditory context effects in picture naming investigated with event-related fMRI. Cognitive, Affective and Behavioral Neuroscience . –. Žukova, Alevtina N. . Grammatika korjakskogo jazyka. Leningrad: Nauka. Zwicky, Arnold M. . On clitics. Bloomington: Indiana University Linguistics Club. Zwicky, Arnold M. . ‘Reduced words’ in highly modular theories: Yiddish anarthrous locatives reexamined. Ohio State University Working Papers in Linguistics . –. Zwicky, Arnold M. a. How to describe inflection. In Mary Niepokuj, Mary Van Clay, Vassiliki Nikiforidou, & Deborah Feder (eds.), Proceedings of the Eleventh Annual Meeting of the Berkeley Linguistics Society, –. Berkeley: Berkeley Linguistics Society. Zwicky, Arnold M. b. Rules of allomorphy and phonology–syntax interactions. Journal of Linguistics . –. Zwicky, Arnold M. c. Heads. Journal of Linguistics (). –. Zwicky, Arnold M. . Inflectional morphology as a (sub)component of grammar. In Wolfgang U. Dressler, Hans C. Luschützky, Oskar E. Pfeiffer, & John R. Rennison (eds.), Contemporary morphology, –. Berlin: De Gruyter Mouton. Zwicky, Arnold M. . Some choices in the theory of morphology. In Robert Levine (ed.), Formal grammar: Theory and implementation, –. New York and Oxford: Oxford University Press. Zwitserlood, Pienie. . The role of semantic transparency in the processing and representation of Dutch compounds. Language and Cognitive Processes . –. Zwitserlood, Pienie. . 
Sublexical and morphological information in speech processing. Brain and Language . –. Zwitserlood, Pienie, Jens Bölte, & Petra Dohmes. . Morphological effects on speech production: Evidence from picture naming. Language and Cognitive Processes . –. Zwitserlood, Pienie, Jens Bölte, & Petra Dohmes. . Where and how morphologically complex words interplay with naming pictures. Brain and Language . –.


LANGUAGE INDEX

Note: ‘English’ is not indexed due to the frequency of its appearance. Figures, tables, and notes are indicated by an italic f, t, and n respectively, following the page number.

Adamorobe Sign Language ,  Adyghe , –, ,  Afrikaans , , –,  Aivaliot –, nn, –, – American Sign Language (ASL) , f, –, f, , –, f, , f Ancient Greek , , –, , n,  Arabic , –, t, , ,  Classical – Iraqi n Arapesh  Australian languages , , –,  see also Diyari; Kayardild; Murrinhpatha; Wambaya; Warlpiri; Wubuy Australian Sign Language , – Austrian Sign Language  Austronesian –, ,  Basque  Bena-bena –, –t Berbice Dutch , –, –, ,  Bininj Gun-Wok t Brazilian Sign Language  Breton –, t British Sign Language – Cappadocian – Celtic languages  see also Breton Central Siberian Yupik  Chabacano  Chamorro –,  Chhatthare Limbu – Chichewa (or Chicheŵa) , , –,  Chinese , , – Chintang , , – Chukchee – Chukotko-Kamchatkan  see also Chukchee; Koryak Circassian  see also Adyghe Cochin Indo-Portuguese  Cupeño  Cushitic  Cypriot 

Dalabon  Dinka , t,  Diyari ,  Dutch , , , , , , , , , , –, , –, , , –, n, , –, , , , –,  see also Afrikaans; Berbice Dutch; Netherlands Sign Language Early Sranan –, ,  Erzya see Mordvin Eskimo-Aleut  Estonian –, t, , – Estonian Sign Language  Ewe  Finnish –,  Fongbe  French –, , , , , , , , , , –, , –, , , –, , –n, , , –, –, , , –, , , , – Gbe  German , , , , –, , –, , , , –, , –, –, –, –, n, –, , , , –, –, , –, f Franconian  see also Middle High German; Old High German German Sign Language  Gorokan  Greek see Ancient Greek; Modern Greek; Standard Modern Greek Grekaniko – Greko – Griko  Guinea-Bissau Kriyol , , , – Gunwinyguan  Haitian , , , – Hawai’i Creole  Hebrew , ,  see also Modern Hebrew Heptanesian  Hua –, t, t

OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi



 

Hungarian , ,  Hungarian Sign Language 

Nuer , –, , , t Nupe , –

Indo-European , , , , , ,  Indo-Portuguese creoles n,  see also Korlai Indo-Portuguese Iranian see Persian Israeli Sign Language – Italian , , , , , , –, , , –, –, , n, –, , , –, , , –, –, –, –, –, –,  see also Milanese Italian Sign Language , –, 

Old English  Old High German –,  Old Norse , , t

Jamaican , , – Japanese , , , , , , , , ,  Japanese Sign Language –

Romance , , , ,  Russian –, , , –, –, , , –, , , , t

Kabuverdianu , ,  Kambera ,  Kammu  Kata Kolok  Kayardild  Kigaya  Klao  Komnzo f Korean  Korlai Indo-Portuguese , , , –, t Koryak –, t, –, ,  Kriyol see Guinea-Bissau Kriyol Latin , , , t, –, t, –, , , , , , –, , , , , –, n, –, , , ,  Lesbian , – Lithuanian n Louisiana Creole  Luiseño n Maay  Macedonian –, , – Mauritian Creole n,  Medieval Asia-Minor Koine n Mexican Sign Language – Middle High German –,  Milanese –, t Modern Greek –, , , – see also Aivaliot; Cypriot; Grekaniko; Greko; Griko; Heptanesian; Standard Modern Greek Modern Hebrew , –, –, , –, ,  Mordvin – Murrinhpatha  Netherlands Sign Language – Niger-Congo ,  Noon –, t

Papiamentu ,  Papua New Guinea ,  Persian , ,  Polish , –, ,  Portuguese , n, –,  see also Korlai Indo-Portuguese

Samoan , , , – Sanskrit –, –, , , , , , , , ,  Semitic , –, , –, n, , , , , ,  Serbo-Croatian  Seri  Sino-Tibetan  SiSwati , –, ,  Slovene , ,  Southern Sotho  Spanish , , , , , –, , –, –, n, , , , , ,  Standard Modern Greek , – Swahili , , , –, n Swedish Sign Language  Sye – Tagalog –,  Tangkic  Tibeto-Burman  Tohono O’odham  Tok Pisin  Trans-New-Guinea  Tundra Nenets – Turkish , , n Udmurt  Uto-Aztecan , n,  see also Cupeño; Luiseño Venetian  Wambaya –,  Warlpiri  Welsh Romany  West-Caucasian see Adyghe Western Nilotic see Dinka; Nuer Wubuy  Yam –


INDEX OF NAMES

Note: f = figure. n = footnote. t = table.

Abrahamsson, Niclas – Ackema, Peter , –, , – Ackerman, Farrell ,  Acquaviva, Paolo ,  Aeschylus  Akinlabi, Akinbiyi ,  Alber, Birgit  Albright, Adam n,  Alderete, John  Alexiadou, Artemis  Allen, Margaret R. ,  Allen, Mark  Almazán, Mayella  Alsina, Alex  Amiridze, Nino  Anderson, Stephen R. , , n, –, , , , , –, , , , –, , , , , , –, –, , – Andreou, Marios n,  Andresen, Julie Tetel n Andrews, Avery – Arcodia, Giorgio Francesco  Aristar, Anthony R.  Aristotle  Arkhangelskiy, Timofey  Arndt-Lappe, Sabine  Aronoff, Mark , , , , , , –, n, , , , n, , , n, , –, n, , , , –, , , , , , , , n, n, –, , , , , , ,  Baayen, Harald , , , n, , , , , , –, ,  Badecker, William  Baerman, Matthew , , –, –, ,  Baeskow, Heike ,  Baker, Mark C. , n Barðdal, Jóhanna  Baroni, Marco  Baudouin de Courtenay, Jan  Bauer, Laurie , , , , , , , , , , ,  Beard, Robert , , ,  Bellugi, Ursula  Benua, Laura , 

Bergs, Alexander  Beuzeville, Louise de  Beyersmann, Elisabeth  Bisang, Walter  Bittner, Andreas ,  Blevins, James P. , , , n, , , , , –, ,  Bloch, Bernard , , , , , , ,  Bloomfield, Leonard , , –, n, , , , –, –, , –, –, , , –, , , , , , , , –, ,  Boas, Franz , , , , , –, ,  Bochner, Harry , ,  Boeckx, Cedric  Bölte, Jens ,  Bonami, Olivier n, , –, , n, n, ,  Bonet, Eulàlia  Booij, Geert , , , , , , –, , , –, , –, n, , ,  Borer, Hagit , ,  Botha, Rudolf ,  Bowern, Claire  Boyé, Gilles , –, n,  Bozic, Mirjana  Braun, Maria ,  Brekle, Herbert  Bresnan, Joan , , , –, –, – Broadwell, George Aaron  Brousseau, Anne-Marie  Brown, Dunstan , ,  Brown, Roger ,  Burnham, Denis  Bush, George W.  Butt, Miriam ,  Butterworth, Brian  Bybee, Joan , ,  Bye, Patrick , , , ,  Carabelli, Simona – Caramazza, Alfonso  Carr, Charles T.  Carreiras, Manuel  Carstairs(-McCarthy), Andrew , , , t, 




  

Caselli, M. Cristina  Castles, Anne  Chamoreau, Claudine  Chan, Cecilia Yuet-hung ,  Cheng, Chenxi  Cho, Young-Mee Yu  Chomsky, Noam , , –, , –, –, –, , –, –, –, , , , , , , n, , –, , , , , , n, n, n, , , , –, ,  Christianson, Kiel  Cinque, Guglielmo  Clahsen, Harald , , , , , , ,  Colé, Pascale  Collier, Scott –, t Coltheart, Max  Corbett, Greville G. , , , , , –, n, , , –, , , , , , –, , –, t,  Corbin, Danielle ,  Cowie, Claire – Cowper, Elizabeth , n Creider, Chet  Crepaldi, David  Croft, William  Crysmann, Berthold n, , –, ,  Cuervo, María Cristina  Cysouw, Michael ,  Dąbrowska, Ewa  Daelemans, Walter  Dalrymple, Mary ,  Dalton-Puffer, Christiane – Davis, Colin J. ,  Davis, Matthew H.  de Jong, Nivja H.  de Vaan, Laura n DeKeyser, Robert  Demske, Ulrike  Desmets, Marianne – Di Sciullo, Anna Maria , , , n, –, , n Diependaele, Kevin ,  Diewald, Gabriele  Dijkstra, Ton  Dima, Corina  Dimitrakopoulou, Maria  Dohmes, Petra ,  Dokulil, Miklos  Dresher, Elan  Dressler, Wolfgang U. –, , , ,  Duñabeitia, Jon Andoni ,  Eggert, Elmar – El Yagoubi, Radouane – Elsen, Hilke  Embick, David , , , 

Eppler, Eva Duran  Evans, Nicholas ,  Fabb, Nigel  Fabri, Ray , , n Feldman, Laurie Beth  Figueiredo-Silva, Maria Cristina  Filiaci, Francesca – Filipovich, Sandra  Fillmore, Charles J. n,  Finkel, Raphael A. , , n, , – Firth, J.R. , nn Fischer, Susan  Folli, Raffaella  Forster, Kenneth I. ,  Fortin, Antonio  Fradin, Bernard  Frank, Anette – Fraser, Norman M. ,  Frauenfelder, Uli H.  Freidin, Robert ,  Friedline, Benjamin E. –,  Fuhrhop, Nana  Funnell, Elaine – Gardani, Francesco  Giegerich, Heinz ,  Giraudo, Hélène , – Gisborne, Nikolas n Goad, Heather – Goldberg, Adele n,  Goldschneider, Jennifer  Goldsmith, John A. ,  Gondan, Matthias – Gouskova, Maria – Grainger, Jonathan , , –,  Green, David  Greenberg, Joseph H. ,  Grosjean, François  Guevara, Emiliano  Gussmann, Edmund  Hahn, Ulrike  Haiman, John  Hale, Kenneth  Hale, Mark  Halle, Morris , –, , , –, , –, , –, , , , , –, , , , , , , , , , ,  Harbour, Daniel  Hargus, Sharon  Harley, Heidi , , –, , , ,  Harris, Zellig , , , , , –, , , –, , –, , , , n, , , ,  Hartmann, Stefan – Haspelmath, Martin n, n, , , – Hathout, Nabil 


Haugen, Jason ,  Hawkins, Roger , ,  Hay, Jennifer ,  Hayes, Bruce  Hempel, Carl G.  Henri, Fabiola  Herzog, Marvin  Hilpert, Martin , ,  Hinrichs, Erhard  Hippisley, Andrew , –, , n Hittmair-Delazer, Margarete – Hjelmslev, Louis –, ,  Hockett, Charles , –, , n, , , , , , –, n, , –, –, , ,  Hoeksema, Jack  Hruschka, Daniel J. – Huck, Geoffrey J.  Hudson, Richard –, n, , –, , n, , , – Humboldt, Wilhelm von ,  Hyman, Larry M. , , 

Indefrey, Peter , ,  Inkelas, Sharon , –, , – Isac, Daniela 

Jackendoff, Ray –, , , –, , –, , –, –, –, , , , , ,  Jakobson, Roman –, , , – Janda, Richard ,  Janse, Mark  Johanson, Lars  Johnson, Rebecca L.  Johnston, Trevor A.  Jones, Caroline  Joos, Martin ,  Joseph, Brian D. ,  Juhasz, Barbara J.  Jurafsky, Daniel 

Kaczer, Laura  Kager, René  Kaisse, Ellen  Kanerva, Jonni M.  Karanastasis, Anastasios  Karatsareas, Petros , n Kathol, Andreas ,  Katz, Jerrold J. – Kay, Paul n,  Keller, Jörg  Keller, Rudi  Ketrez, Nihan  Keuleers, Emmanuel  Keyser, Samuel Jay  Kiparsky, Paul ,  Koefoed, Geert  Koenig, Jean-Pierre 

Koester, Dirk –, , –, – Köpcke, Klaus-Michael  Koptjevskaja-Tamm, Maria  Koskenniemi, Kimmo – Kossmann, Maarten  Kostić, Aleksandar  Koutsoukos, Nikos  Kouwenberg, Silvia – Koziol, Herbert  Kratzer, Angelika  Krieger, Hans Ulrich –,  Kwon, Nahyun 

Labov, William  LaCharité, Darlene – Lakoff, George ,  Lander, Yury  Lapointe, Steven , ,  Lardiere, Donna –,  Lasnik, Howard ,  Lees, Robert , , , , –, , –, ,  Lefebvre, Claire , , ,  Léglise, Isabelle  Lemhöfer, Kristin – Leminen, Alina  Lensink, Saskia E. – Levelt, Willem J.M. , , ,  Levi, Judith N. , – Levy, Yonata  Libben, Gary  Liddell, Scott K. ,  Lieber, Rochelle , , , , , , –, , , , , ,  Lieberman, Moti  Lillo-Martin, Diane  Lin, Yuh-Huey  Longtin, Catherine-Marie  Luzzatti, Claudio – Lyons, John 

MacWhinney, Brian  Maiden, Martin , , ,  Malouf, Robert n,  Manova, Stela  Marantz, Alec , , n, , –, , –, , , , ,  Maratsos, Michael P.  Marelli, Marco – Marshall, Chloe – Marslen-Wilson, William , – Martinet, André n,  Maslen, Robert J.C.  Mathesius, Vilem  Mathur, Gaurav ,  Matthews, Peter H. , n, , –, n, n, n, , , n, , –, –, , , , –, , –





Mayerthaler, Willi  McCarthy, John J. n, , , , –, , , n, –, – McCawley, James – McCormick, Samantha F.  Mchombo, Sam ,  McKinnon, Richard  Meier, Richard P.  Meir, Irit , –,  Meisel, Jürgen  Melissaropoulou, Dimitra n Merlini Barbaresi, Lavinia  Meunier, Fanny  Michaelis, Laura  Miller, Philip , –, ,  Mohanan, Karuvannur P.  Monsell, Stephen  Morpurgo Davies, Anna  Morris, Joanna  Morton, John  Moscoso del Prado Martín, Fermín , ,  Moshi, Lioba  Mulder, Kimberley – Müller, Gereon  Müller, Peter O.  Müller, Stefan , , ,  Munske, Horst Haider  Murrell, Graham A.  Nakipoglu, Mine  Nakisa, Ramin C.  Namer, Fiammetta  Neeleman, Ad , –, , – Nerbonne, John –,  Nesset, Tore  Nevins, Andrew  Newmeyer, Frederick J. , –,  Nichols, Johanna , ,  Nida, Eugene A. , , , , , ,  Niepokuj, Mary K. ,  Nietzsche, Friedrich  Nikolaeva, Irina , –n Niño, María-Eugenia , ,  Noyer, Rolf , , ,  Ó Séaghdha, Diarmuid  O’Connor, Mary Catherine  O’Neill, Paul  Orgun, Cemil  Osterhout, Lee  Otero, Carlos  Otoguro, Ryo  Padden, Carol – Paster, Mary ,  Pavlakou, Maria  Pawley, Andrew 

Peirce, Charles S. –, , ,  Penke, Martina – Perea, Manuel  Perfetti, Charles  Pesetsky, David n,  Pienemann, Manfred  Pinker, Steven , , , , , –,  Pizzuto, Elena  Plag, Ingo , , , , , , , , n Plénat, Marc  Pollard, Carl J.  Postal, Paul M. –, , ,  Prince, Alan S. , –, , , n, –, –,  Pulleyblank, Douglas n, ,  Punske, Jeffrey  Pylkkänen, Liina  Radkevich, Nina  Rainer, Franz ,  Ralli, Angela ,  Rastle, Kathleen , , – Rathmann, Christian ,  Rayner, Keith  Reape, Mike , n Reiss, Charles  Renouf, Antoinette  Riddle, Elizabeth  Riehemann, Susanne Z. – Ritter, Elizabeth  Roark, Brian  Robbeets, Martine  Robins, Robert H. , , ,  Roché, Michel  Rodriguez-Fornells, Antoni – Roelofs, Ardi , – Roeper, Thomas , –, n Rohlfs, Gerard  Rösler, Frank – Round, Erich R. ,  Rubach, Jerzy  Sadock, Jerrold M.  Sag, Ivan n, , , –, , ,  Samuels, Bridget D.  Samvelian, Pollet ,  Sandler, Wendy , ,  Sandra, Dominiek , ,  Sapir, Edward , –, –, , , , ,  Saussure, Antoine de n Saussure, Ferdinand de , , –, , , , , –, ,  Saussure, Louis de n Saussure, René de , –,  Savini, Marina  Scalise, Sergio , , , , , , –,  Schembri, Adam , 


   Scherer, Carmen , ,  Schlegel, Friedrich von  Schreuder, Robert n, , –,  Schwager, Waldemar – Schwartz, Bonnie D. – Segond, Frédérique ,  Segui, Juan  Seiss, Melanie  Selkirk, Elisabeth O. , , , , , , , , n Sell, Fabíola Ferreira Sucupira  Sells, Peter n,  Semenza, Carlo – Seuren, Pieter  Siegel, Dorothy , , ,  Siegel, Jeff  Siegel, Muffy , – Silva, Renita , ,  Simpson, Jane  Sims, Andrea D.  Skalička, Vladimir ,  Skousen, Royal  Slobin, Dan  Smolensky, Paul  Smolka, Eva , – Sorace, Antonella – Spencer, Andrew n, , , , , –, , –, , , –n,  Sproat, Richard  Sprouse, Rex A. – Sridhar, Shikaripur  Stampe, David  Steele, Jeffrey – Steele, Susan ,  Štekauer, Pavol , ,  Stephany, Ursula  Stewart, Thomas , ,  Stolz, Thomas  Stump, Gregory , , , , , n, , , –, , , n, , , , , , , , , –, n, n, , , , ,  Svenonius, Peter , , , , , ,  Syder, Frances H.  Szczerbinski, Marcin  Taft, Marcus , ,  ten Hacken, Pius ,  Thomason, Sarah  Thornton, Anna M. , n, , , ,  Toman, Jindrich  Torrego, Esther n,  Trager, George L. , ,  Traugott, Elizabeth Closs , –



Triantaphyllides, Manolis  Trnka, Bohumil , ,  Trommer, Jochen ,  Trousdale, Graeme , – Trubetzkoy, Nikolay S.  Trudgill, Peter n Tseng, Jesse – Tsiamas, Thanasis n Tsimpli, Ianthi Maria  Tyler, Lorraine Komisarjevsky – Ullman, Michael T. , , , , , ,  Urbanczyk, Suzanne  Ussishkin, Adam P. –, – Vachek, Josef  Vainikka, Anne , t van der Hulst, Harry  Van der Lely, Heather , – van Heuven, Walter J.B.  Van Marle, Jaap ,  van Wijk, Judith  Veselinova, Ljuba  Villoing, Florence – Voeikova, Maria D.  Wang, Min  Weinberger, Stephen  Weinreich, Uriel  Weist, Richard M.  Wekker, Herman  Wescoat, Michael  White, Lydia –, – Wiese, Richard  Wilbur, Ronnie B.  Williams, Edwin n, , , –, , , , –, n, –, n Wunderlich, Dieter , n, n Wurzel, Wolfgang U. , , , , , – Young-Scholten, Martha , t Yu, Alan C.L. –, n Yvon, François  Zaenen, Annie – Zamparelli, Roberto  Zeshan, Ulrike – Zimmer, Karl  Zipf, George K. –,  Zirkel, Linda  Zoll, Cheryl –, , – Zwicky, Arnold M. –, –, , –,  Zwitserlood, Pienie , , –


GENERAL INDEX
Note: f = figure. n = footnote. t = table.
a-morphousness , , , –, – see also amorphousness hypothesis abbreviations ,  abstractive (theory) ,  distinguished from constructive , , ,  acquisition ,  order of , , – processes of –,  theory of ,  see also first language acquisition; second language acquisition action nouns –, ,  activation , , –, – adaptation –, –,  additivity  violations ,  adequacy conflicting levels of –,  descriptive/explanatory –, ,  see also system adequacy/congruity affix competition , n affix hopping , , ,  affix ordering , , , –, , –, t rules governing –,  variable , –, – affix synonymy , n affix telescoping , – affixation ,  under adjacency  vs. compounding , f, ,  derivational , , –, ,  inflectional , – and neurolinguistics ,  phrasal –,  preference over conversion/subtraction  semantics of – and sign language , –,  see also prosodic affixation affix(es) , –,  categorization – in Cognitive Grammar – in Construction Morphology , , –,  distinguished from other lexical items , –, n, , , , – (see also signs)

double marking – in Generative Grammar , , ,  lexicalist analysis –, –, – morphophonological rules – negative  as parts of constructions  phonological variations – polysemous – portmanteau ,  positioning , – problems of definition – productivity – relationship with base see under base Structuralist analysis – substitution  substratal , –, , – superstratal , , –, – zero (null) , , – see also affix competition; affix hopping; affix ordering; affix synonymy; affix telescoping; affixation; blocking; clitics; floating affixes; prosodic affixation; rival affixes affixoids , , – age of first-language learners n, ,  in second language acquisition ,  agglutination –, ,  ‘agglutinative ideal’  vs. flexion  see also agglutinative languages agglutinative languages , , , , ,  Agree , , –, – agreement , –, , , , –, – Agreement Number  in CS – domain of – features n, – in sign language – see also gender; number Aktionsart ,  alignment  constraints , , –, – misplaced  prosodic –, 


  allomorphy , , , , , , , –,  in child language  in creole languages ,  in Distributed Morphology (DM) – f-morpheme vs. l-morpheme – morphomic (opaque)  phonologically/grammatically conditioned  see also stem allomorphy alternation, phonologically opaque  see also stem alternation amorphousness hypothesis –, , ,  analogical change  analogy , , –, –, ,  paradigmatic  proportional , , , ,  analyzability , , –, f,  degrees of f,  anaphoric islands  animacy –, n anti-periphrasis t anti-separatism – aorist tense/stem – aphasia , , , –, – applicative , ,  arbitrarism – argument structure , , , n derivation  morpholexical operations , ,  artificial neural networks ,  aspect, place in affix order , , t assembly  attention, in language acquisition  attribute(s) , n ordering –, – Autolexical Syntax  autonomous morphology ,  autonomy/dependence –, n, f,  auxiliaries –, –, – axioms , –, ,  base-definition , , , – Base Reduplicant Correspondence Theory (BRCT) , , , , , – base(s) relationship with affix –,  behavioral experiments , –, ,  Behaviorism , – bilingualism , –, –,  binarity , –, , , – constraint  Binary Branching Hypothesis – binyan(im) , –,  biuniqueness , , –, , , –, ,  deviations from , –, t,  blends, in sign language – blocking , , –, –, , , ,  defined –



distinction between types n and Distributed Morphology (DM)  lexical  morphological, principle of – and network morphology – (possible) non-existence , ,  and Word Grammar  Boolean lattice –, f, f borrowing , – lexical – bound roots , ,  bracketing paradox(es) , , , , , – brain(s) activation – damage –, – electrophysiological studies –, – information storage ,  neuropsychological studies –, –, – regions , , –, f relationship with language functions – and second language acquisition , ,  branching –, – Broca’s aphasia ,  building blocks, metaphor of , –, ,  canon, establishment of – canonical values, identification of – domain, identification of – parameters of variation, identification of – sample space, extrapolation of – canonical inflection , –, , , – deviations from –, t,  Canonical Typology , , – application to derivational morphology – application to inflectional morphology  axioms , –, ,  canon, establishment of see under canon criteria –, –, – future directions – Precept of Independence – see also canonical inflection case stacking , , – case(s) , ,  assignment ,  inflection – locative  marking , , , , –, , , , , –, ,  oblique forms  partitive –, – see also case stacking; default(s); local cases categorization –, f, – causativization ,  Cell Interface Hypothesis ,  cells see under content cells; form cells




 

children –, – age of language acquisition n, ,  atypical language acquisition – and irregular verbal/nominal forms – past tense inflections – second language acquisition  circumfixation  classifier constructions, in sign language –, , ,  clitics , , , ,  distinguished from affixes –,  doubling/left dislocation  in Head-driven Phrase Structure Grammar (HPSG) – in Lexical-Functional Grammar (LFG) – second-position  and Word Grammar , , –, – closure problem  co-index(ation) –, , , –, , –,  co-phonology  coercion n,  cognitive abilities – cognitive endowment  Cognitive Grammar , , –,  fundamentals –,  morphemes – morphological structures – unification – cognitive psychology – commission, errors of –, – commutation class  competition  for activation ,  between constructions , – completeness –, n, – complex categories , – complex symbols –,  complex words –, , , – and first language acquisition – and neurolinguistics –, –, –,  psychological processing –, –,  (re-)motivation  whole-word representation , – complexification ,  complexity, morphological –, n,  Complexity Based Ordering  compositional/noncompositional semantics , , , ,  see also compositionality compositionality , –, ,  correlation with frequency  structural vs. functional – see also non-compositionality compounds/compounding –, , –, –, , – vs. affixation , f, ,  argumental –

vs. blending  in Cognitive Grammar – in Construction Morphology –, –,  coordinative  and CS/Minimalism –, – endocentric/exocentric – fixed-order vs. free-order  linking elements  markers ,  neurolinguistic approaches to –, –, – patterns –,  vs. phrases –,  and psycholinguistics – and second language acquisition – and sign language , –,  synthetic  ternary  variations in , –, – see also phrasal compounds comprehension and neurolinguistics , –, , ,  and second language acquisition –,  computation in children –, ,  internal, in Chomskyan theory , , , ,  minimization – optimal , n,  vs. storage , , , , , n, –, , ,  see also computational linguistics computational linguistics , , , , – hierarchical lexica – morphological induction – rule induction – theoretical background – word coding – see also finite state automata; finite state transducers see under Lexical-Functional Grammar (LFG); Merge Computational System (CS) – application to Minimalism – feature-free  features as part of – relationship with inflection – removal of Agree – and spell-out – concatenative ideal –,  defined  violations –, – concatenative morphology –, –, , , , ,  vs. non-concatenative , , , –, n, , , – operations – see also non-concatenative morphology/processes concord , n,  conditional entropy – Low Conditional Entropy Conjecture –


  configurational accounts ,  conjugation classes n, –, – in creoles – see also inflection(al) class(es) consonant mutation ,  constraints ,  in Construction Morphology – on Word Formation Rules (WFRs) , ,  see also alignment; faithfulness constraints; markedness constructicon n,  Construction Grammar , n, n, , , n links with Construction Morphology ,  Radical  and Relational Morphology ,  Construction Morphology , , , –, – and creoles n, , ,  evaluation – morphological processes – relationship with Network Morphology – relationship with Relational Morphology , , ,  sign-based – special issues – theoretical tools – Constructional Iconicity, Principle of –,  constructional idioms ,  constructional properties –,  constructional requirements  constructionalization ,  constructions – abstract , n competition between – defined  distinguished from constructs n hierarchy of  multi-word – (see also multi-word expressions) range of – semi-specified , – verb-particle –, n see also paradigmatic relations constructive (theory) , –,  distinguished from abstractive , , ,  constructs –, n content cells –, –, t, ,  content paradigm(s) –, –, f, , –, t, t, ,  contiguity  violations –, –,  convergence , – conversion , , ,  masculine-to-feminine – noun-to-adjective  noun-to-verb  and reduplication 



coordination  Copenhagen school – corpora , , ,  historical/modern , – correspondence  creoles , – derivation – inflection – morphology – reduplication (full) – semantic opacity –, – stems , – word structure – see also substrate languages; superstrate languages criteria in Canonical Typology –, –, – cumulation  DATR (knowledge representation language) , , n, , , – decomposition , –, – evidence for – and neurolinguistics –, ,  theories/controversies –,  Deep Structure –, ,  transformationalist approaches – default inflection , , –t, ,  default inheritance , , , –, –, , –, –,  hierarchies –, f, , – multiple , – networks  default(s) , , –, –, , , –, , , –, ,  exceptional case , –, t extension  normal case , , –, t defectiveness , , –, –, ,  in derivational paradigms – definiteness  dependence see autonomy; segmental dependence dependent-marking –,  deponency , , –, , , , n, t, n, ,  relationship with syncretism ,  derivation in creole languages , – deverbal  distinguished from inflection see under inflection domain of – and lexical semantics – in Lexicalist framework – prototypical –, n variations in – see also affixation, derivational; derivational morphology; derivational paradigm(s); wordformation; Word Formation Rules




 

derivational morphology , –, n, , , – basic principles  and Canonical Typology , – and creoles , – distinguished from inflectional see under inflectional morphology and first language acquisition  lexical processes –,  and neurolinguistics ,  and Paradigm Function Morphology (PFM) – and second language acquisition – and Word Grammar , ,  derivational paradigm(s) , –, – linked to inflectional –,  descriptive linguistics , n, , ,  determinism see Pāṇinian determinism Developmental Language Disorder (DLD) see Specific Language Impairment diachronic change , , , , , – and Network Morphology – diachrony , n,  and Network Morphology , – diagrammaticity , , – in Natural Morphology –, – preference for , ,  dialects, variations in –, n, – dictionary/ies data, use of  distinguished from lexicon(s)  frequency (Russian)  Halle version , –, ,  sign language  discovery procedures , ,  discriminative learning , , –,  distinctiveness see unambiguity Distributed Morphology (DM) , , , , –, , –, –, n background – basic structure f derivation vs. inflection – f- vs. l-morphemes – future directions  idiosyncratic rules – interface with phonology  and Item-and-Arrangement theory , – and Minimalism , , ,  model – morphological features – morphological issues – as realizational model – rejection of Lexicalism – relationship with Construction Morphology , ,  underspecification – ‘vocabulary’  distributional class(es) –, –, –

distributional method , , , n distributives ,  disyllabicity constraint , ,  domain(s), in Canonical Typology agreement – derivational – graphic representations f identification of –, – inflection ,  structures within  dual mechanism approach first language –, – second language  dual-route processing –, n, , n, –, – and neurolinguistics , , , – dynamic memories , , – dynamicity , ,  dyslexia deep – neglect  Early Generative Grammar , –, –,  general theoretical issues – legacy – Lexicalist Hypothesis – and the lexicon – and morphology – objections to  Transformational , –, –,  economy , ,  see also language economy principle; paradigm economy principle economy of expression, principle of  edge feature ,  ELAN (Early Left Anterior Negativity)  electroencephalography (EEG) , –, – electrophysiological experiments on compound comprehension – on derived word comprehension – on morphological violations – ellipsis , ,  ‘elsewhere form/rule’ , , ,  emergent properties , , – encoding , –, – phonological  endocentricity , , –, –,  entrenchment –, f, ,  degrees of –, ,  entropy , , – Maximum Entropy classifier –,  measurement  reduction –,  see also conditional entropy episememes – ERP (event-related potential) –, , –, , –, –, , , , – evaluative morphology , , , 


  event structure see argument structure exemplar-based approach , ,  exhaustivity –, – existing words  development of new words from –, , ,  generalizations over ,  identification ,  resegmentation  exocentricity , , , –, – experimental research , , n,  see also behavioral experiments exponence affixal  in creoles , – cumulative vs. separatist  different types of –, , – discontinuous – distributed , f gestalt – multiple/extended , , , –, –, , –, –,  non-segmental  in portmanteau words – rule(s) of –, –,  secondary – exponents –, , –, –, – idiosyncratic restrictions – of negation  ordering  primary/secondary  of Word Formation Rules (WFRs)  externalization ,  extra-morphological motivation  extra-morphological properties –, t faithfulness constraints , , – feature structures , , , , , , – feature(s), in (Canonical) Typology – centrality to approach  feature–value combinations , –,  inflectional , – intersections –, ,  morphosemantic  see also feature structures; morphosyntactic features feminine nouns, formation of – finite state automata , , – defined  word building with , f finite state transducers , –, f general features – limitations  first language acquisition –, – atypical – complex words – creoles – distinguished from second-language acquisition , –, –



inter-lingual comparisons – and linguistic theory – overregularization – past tenses – see also children fission , –,  flexion  floating affix(es) , – fMRI (functional magnetic resonance imaging) –, –, , ,  form cells –, –, , –,  form correspondents –, –, t form–meaning relationship , – mismatches , , ,  pairings  form paradigm(s) –, , f, –, –, t, ,  formalism , , , –, ,  DATR ,  feature structure  formalist theory/ies , , ,  phonological vs. syntactic  popularity ,  rejection ,  and word formation rules – framework-free grammatical theory  frequency –,  family  and first language acquisition , –,  high vs. low , – lexical ,  and neurolinguistics –, –,  word  see also morpheme(s); whole-word frequency full entry theory , , , , , – and first language acquisition  full-listing models ,  full-parsing models ,  Full Transfer/Full Access model – fully inflected form(s) , , , , ,  and Word Grammar –, f, –, f, f, ,  Functional Hierarchy  functionalist theory/ies , , , ,  and language change  and Natural Morphology , ,  function(s), in Cognitive Grammar –, , – symbolizing – fusion , –, t, , t, , ,  non-appearance  see also fusional languages fusional languages , , –,  see also semi-fusional languages gender agreement –, , , –,  and Canonical Typology –, –, n




 

gender (cont.) identification – inflection , , , , – lexical stipulation – syncretism , , , n variations – see also feminine nouns; masculine nouns; neuter nouns generalized referral , , , – Generalized Template Theory (GTT) – hierarchy-based – morpheme-based – generalized verbs  Generative Grammar –, –, , –, ,  emergence  see also Early Generative Grammar; Later Generative Grammar Generative Semantics , , –, –, ,  geographical features, names for – gerundive nominals , –, – Gestalt  see also exponence Glossematics –, , n grammatical categories , , ,  conceptual characterization n derivation  error-proneness (in second language)  grammatical concepts –, –,  grammatical features , , ,  grammatical processes , , –,  grammaticalization , n, , , –,  in Construction Morphology  hapaxes ,  hapax legomenon ,  head , , – marking , – see also headedness Head-Driven Phrase Structure Grammar (HPSG) , , –, n, , –, –, , ,  inflection in – interface with syntax – morphological processes  overview – word formation – headedness , , , , –, ,  in Greek – and neurolinguistics – and non-compositionality – see also left-headedness; right-headedness hemodynamic experiments –,  heteroclisis , , , t, – hierarchical lexica , –, , , , –,  hierarchies, in Construction Morphology , , , f, f,  ‘open’ vs. ‘closed’ ,  see also constructions; default inheritance; hierarchical lexica; inheritance

holistic approaches ,  Homogeneity Hypothesis  homonymy , , t homophony , ,  accidental ,  hypocoristics , , – iconicity , –, – and meaning –,  see also Constructional Iconicity, Principle of icons types of – Identity Function Default  idiomaticity , – idioms ,  see also constructional idioms images , – imperfect tense, variations in formation – see also mediopassive imperfect tense implicational relations –, – implicative rules , – impoverished entry theory – distinguished from full-entry , , , ,  and first language acquisition  problems of ,  impoverishment –, , , ,  rules  incorporation  noun , n, ,  in sign language  incremental theories (of morphology) n,  distinguished from realizational , , –, , –, , –,  distinguishing features  see also inferential-incremental models; lexicalincremental models Independence, Precept of – index, definition of  indexicality , – inference default , , , , , n rules/chains of , , , – inferential-incremental models , n,  inferential-realizational models , –, , , n, , , – and Network Morphology , ,  and Paradigm Function Morphology (PFM) , – inferential theories (of morphology) –,  cross-matching with other forms , –, – vs. lexical –, , – see also inferential-incremental models; inferentialrealizational models infinitival forms/markers n, – children’s use of  infixation , –, , , , –, , 


  inflection –,  and child learning  contextual vs. inherent , , , n in creole languages – distinguished from derivation , , –, –, , –, –, –,  domain , ,  in Head-driven Phrase Structure Grammar (HPSG) – in Lexicalist framework – in Lexical-Functional Grammar (LFG) – prototypical –, n Structuralist analysis – variations in – see also affixation, inflectional; inflectional categories; inflectional class(es); inflectional morphology; inflectional paradigm(s) inflectional categories , n, ,  inflection(al) class(es) , , , , –, , , , , , , , ,  conjugation classes  in Greek –, ,  in Italian , , –,  in Nuer , t, , –, t inflectional morphology –, – basic units – and Canonical Typology ,  concatenative vs. non-concatenative – creoles and – defined – degree of autonomy – distinguished from derivational –, –, , , , –,  and neurolinguistics , ,  and second language acquisition – structures – Word Grammar and , ,  inflectional paradigm(s) , –, –, –, –, –,  canonical –, – in creoles , ,  delimitation  form vs. content – linked to derivational –,  information theory , f, , , n, –, , – inheritance ,  global ,  hierarchies –, , , ,  impoverished vs. full entry –,  Instantiation Inheritance Links ,  local  multiple , , , – networks , , –, –, –, – problems  without inherent directionality – see also default inheritance



integrating element(s) – integration strategies – interfaces , –, n, , –, ,  and Distributed Morphology (DM) ,  feature  formative  and Minimalism ,  morphology-lexicon –, – morphology-phonology , –, , , , , , , , – morphology-pragmatics  morphology-semantics , –, , ,  morphology-syntax , –, , , –, –, , –, , – see also Cell Interface Hypothesis internal stem change , ,  interpredictability  introflectional language  ion morphs , , –, , ,  irregular words , n and first language acquisition –, – and neurolinguistics – irregularity , , , , , , ,  isolating languages , ,  isomorphy , , , ,  Item-and-Arrangement (IA) theory , , , –, , –,  contrasted with Item-and-Process (IP) –, , , –,  and Distributed Morphology (DM) , –, –,  Item-and-Pattern approach see Word and Paradigm Morphology Item-and-Process (IP) theory , , , –, –, – contrasted with Item-and-Arrangement –, , , –,  kernel sentences , ,  LAN (Left Anterior Negativity) , –, ,  language acquisition , , –, , , ,  see also first language acquisition; second language acquisition language change , , , , –, ,  grammatically vs. extra-grammatically initiated  processes – language contact , , , –, , – and compound variation – and derivational variation – and inflectional variation – language decay/death  language economy principle  language processing ,  declarative vs. procedural , , ,  and neurolinguistics –




 

language processing (cont.) second-language , , – see also morphological processing language production and neurolinguistics , , – and second language acquisition –, –, – language states – language tags – language types , ,  language typology –, – language universals  late-insertion theory , – see also realizational theories Later Generative Grammar –, – layered morphology – layering ,  left-headedness , –, –,  lemmatization  Lexeme-Morpheme Base Morphology (LMBM) , – lexemes  creation –, , –, – defined –, ,  insertion – phrasal – problems of definition – lexical access , ,  lexical conceptual structures  lexical decision task(s) , , –,  lexical enrichment – Lexical-Functional Grammar (LFG) , , –, , –, –, , ,  computational implementation ,  inflection in – interface with syntax – morphological processes – overview – word formation – lexical-incremental models ,  lexical insertion , –, , ,  lexeme – morpheme-level view –,  transformation – word-based – lexical integrity , , –,  and Lexical-Functional Grammar (LFG) –,  Principle of , ,  see also Lexical Integrity Hypothesis Lexical Integrity Hypothesis , –, , , , ,  challenges to – Lexical Phonology and Morphology –, ,  lexical-realizational models , –, , –,  lexical redundancy reduction  rules –, , ,  Lexical Relatedness Morphology (LRM) – lexical relations , , , , , 

lexical rules , –, , , , ,  lexical semantics , , –, , ,  derivation ,  distributional  lexical theories (of morphology) vs. inferential –, , – see also lexical-incremental models; lexicalrealizational models Lexicalism –, , – conditions on morphological rules – and derivation  early models  head, concept of – historical background – legacy – lexicon, concept of – minimal lexical units – rejection – terminology n words, conception of – see also Lexicalist Hypothesis Lexicalist Hypothesis –, , , , ,  Chomsky’s formulation , –, , ,  elaborations – Generalized ,  objections to  Strong vs. Weak ,  theories modelled on , , –,  transformationalist approaches – Lexicalist Morphology see Lexicalism lexicalization , , –, –,  cultural determination  lexicon cognitive , – constructionist – distinct meanings  lexicalist – listing – relationship with morphology – see also constructicon; dictionary; hierarchical lexica lexicon-grammar continuum , –,  lexicon-syntax continuum n liaison ,  linearization , , –, , – linguistic form(s) , –,  defined – units of  linguistic typology –,  listedness ,  see also storage listemes , , –, , ,  loanblends – loanwords ,  truncations , – local cases – locus of marking – London School 


  machine learning , – see also supervised machine learning; unsupervised machine learning markedness –, , –,  asymmetric  constraints , –,  reductions –, – masculine nouns –,  variations in feminization – mathematics, Canonical Typology linked to –, – maximality –, ,  meaning-free lexical items n mediopassive imperfect tense – memory –,  long-term n,  retrievals from  structure of  see also memory-based learning; storage; working memory memory-based learning – mental lexicon , ,  and blocking – and neurolinguistics , ,  and psycholinguistics –,  and second language acquisition , – workings of , ,  Merge , , , ,  computational system –, , , , – metaphor ,  Minimalism , , , , , , –, ,  application of CS principles – defining features – relationship with Distributed Morphology (DM) , , n, ,  see also Computational System minimality , , –, ,  minimum description length , ,  Mirror Principle ,  mismatches , , ,  and Construction Morphology –, , – form-content –, , ,  number-verb form  modality ,  and sign language , ,  modular approaches , , , , ,  see also non-modular approaches mood, verbal , , ,  morpheme alternant(s) , –, , , –,  morpheme-based theories , , , –,  critiques , n,  and Distributed Morphology (DM) , –,  and Generalized Template Theory (GTT) – vs. word-based –, , , n, , –, , 



morpheme units –, – morpheme(s) , –,  additive  assumptions about – classified list ,  in Cognitive Grammar – defined , –, , –, , n discontinuous –, –,  ‘early’ vs. ‘late system’  free n frequency effects – functional vs. lexical (f- vs. l-) – lexical entries  Lexicalist view of – listing , ,  monosyllabic  positional effects – problems of definition , ,  representation – sequences , ,  structural analyses – subtractive ,  see also morpheme alternant(s); morpheme-based theories; morpheme units; submorpheme; templates; zero morphemes morphemics , , ,  morphological ability , , – absence of – components – defined  morphological decomposition see decomposition Morphological Doubling Theory (MDT) – morphological organization –, , – morphological processes –, –,  classification  in Distributed Morphology (DM) , , –, , – representation , – morphological processing –, – models – see also comprehension; language production morphological rules , , , ,  conditions on – in Distributed Morphology (DM) ,  Lexicalist , , , –, – morphological structure – and neurolinguistics –, –, – and psycholinguistics , , , – morphological typology – paradigmatic dimensions – syntagmatic dimensions – word, notion/problems of – see also form–meaning relationship morphological violations electrophysiological studies – neuropsychological studies – morphology




 

morphology (cont.) ‘by itself ’ , ,  extra-grammatical  and Generative Grammar – horizontal – layered vs. template – misrepresentation of content – overrepresentation of content  processes of – regular vs. irregular , ,  relationship with other fields –, – relationship with syntax see interfaces; syntax and Structuralism –, , –,  sub-parts – subtractive , , , – underdetermining of content – units of analysis – vertical – see also autonomous morphology; Construction Morphology; derivational morphology; Distributed Morphology (DM); inflectional morphology; Lexicalism; morphological ability; morphological rules; morphological typology; morphological violations; pleonastic morphology; Relational Morphology; theories of morphology; two-level morphology morphomes –, , , , – morphomic property/ies , ,  shared –,  morphophonemics , ,  morphophonemic rules ,  morphophonology f, , –, –, ,  morphosemantics f,  morphosyntactic features f,  canonical , – of creoles –, – ordering , –, – relationship of number and case –, –, – separation from form  and Word Grammar n, – see also under agreement morphosyntax , f, –, , –, –, –, –, , ,  relationship with inflectional exponents – and sign language – see also morphosyntactic features morphotactics , , , , , –, , , , ,  and transparency –, ,  morphs combination  cranberry  defined ,  empty/zero –,  variable ordering – see also ion morphs

motivation defined  degrees of  partial  in Relational Morphology –, ,  movement, syntactic  multi-word expressions , , , –, – multilingualism  multivariate typology ,  N effects , –, –, – Nanosyntax , , , – Natural Morphology (NM) , , – background – cognitive roots – functionalist nature  future directions – semiotic basis – strategy – system-dependent naturalness – Natural Phonology – naturalness , – parameters  system-dependent – universal –,  neo-constructionism , – network, constructionist , ,  Network Morphology , , , –, ,  case studies – computational implementation ,  framework – relationship with other theories – Network Postulate ,  neural networks n artificial ,  deep  neuroimaging –,  neurolinguistics , , , –, , – brain activation – case studies see behavioral experiments; electrophysiological experiments; neuropsychological patients defined – future directions  and morphological processing – neuropsychological patients –, , – neuter nouns  pluralization – niche, productive  nicknames , , , , ,  No-Blur principle  No-Phrase Constraint , , , – node realignment ,  nominalization(s) , , , –, –, ,  derived  rules of , –


  syntactical vs. lexical –, ,  non-compositionality  and headedness – non-concatenative morphology/processes , , , , , , , ,  (variety of ) definitions n,  see also concatenative morphology non-configurational languages ,  non-existing words ,  predictable meanings  and second language acquisition  see also blocking non-iconic forms/coding –, –,  non-manuals , –, ,  non-modular approaches , , ,  non-productive patterns  non-word bases see bound roots noncompositional semantics see compositional/ noncompositional semantics norms/normalcy  nouns see feminine nouns; incorporation; masculine nouns; neuter nouns; nominalization; pluralization; professional nouns number , – agreement –, , , –, – and Canonical Typology –, – inflection –, , , –, , –, , – markers –, ,  see also pluralization numerosity , , , – obligatoriness t ‘Occam’s razor’ see parsimony, principle of omission, errors of –, ,  opacity referential  semantic –, – optimal word shape – Optimality Theory (OT) –, , , , , –, n,  constraint types – contribution to prosodic morphology – unresolved issues – ‘Orwell’s problem’ – output-orientation , ,  overabundance , n, , ,  overdifferentiation t overgeneration , ,  overlap –, –, , ,  overregularization, in first language acquisition – overrides, in Network Morphology , –, ,  P effects , ,  palatalization 



Pāṇinian determinism , ,  Paradigm Cell Filling Problem  paradigm economy principle , , ,  Paradigm Function Morphology (PFM) , n, –,  background – basic features – and creoles , – future prospects – illustrative examples – and interfaces – and lexeme formation – PFM ( version) – PFM ( version) – relationship with other theories , , , –, , , , , , ,  paradigm structure conditions  paradigmatic levelling – paradigmatic relations , , , –, ,  paradigmatics –, , n paradigm(s) abstract – and Canonical Typology see inflectional paradigm(s) and computational linguistics –, –, –,  in Construction Morphology see paradigmatic relations; word-formation, paradigmatic exemplary  in Head-driven Phrase Structure Grammar (HPSG) , n and markedness reduction – and morphological typology – nominal  (problems of ) definition – typological – verbal –,  see also content paradigm(s); derivational paradigm(s); inflectional paradigm(s); paradigm economy principle; Paradigm Function Morphology Parallel Architecture , , n, , , –, – parseability ,  parsimony, principle of (‘Occam’s razor’) n,  part-whole relations –, – passive voice – past tense(s) , –, , , –, – in first language acquisition – inherited markers  markers –, –, –, , , –, , ,  and neurolinguistics ,  Past Tense Debate –, –, n, ,  synchronic variations , –, – see also aorist; imperfect; preterite-present perfective aspect  periphrasis , , –, –, n, , , t person, verbal inflections/agreements , –, , 




 

person, verbal inflections/agreements (cont.) in Distributed Morphology (DM) –, – in Paradigm Function Morphology (PFM) – in Word Grammar – phonæsthemes , , ,  phonemes , , –, , , , , , , n phonological form (PF) , , –,  phonological rules – phonology ,  and first language acquisition –,  and the Parallel Architecture , –, f, –, – prosodic – relationship with morphology  (see also under interfaces) and second language acquisition , –,  and sign language – see also Natural Phonology phonotactics , ,  phonotactic probability , n, – phonotactic rules – phrasal compounds –, , n, , ,  phrasal lexemes  phrasal names  phrase-words , – phrases see compounds/compounding; phrasal compounds; phrase-words picture naming , , – pidgins –,  ‘Plato’s problem’  pleonastic morphology  pluralization –, –, t, – in sign language  subtractive –, ,  polyfunctionality  polysemy , , – affixal – links  polysynthetic languages , n,  portmanteau forms –, – affixes ,  morphemes , t, , – morphs  possible words , , ,  enumeration  lexical insertion – potential words ,  Prague Circle –,  pre-morphology  predictability  preference(s) , –, –,  for diagrammaticity/biuniqueness –, , , ,  theory of  prefixation –, , –, –,  applicative  hierarchy f

present tense, variations in –, –, – preterite-present verbs , , t preverbs – priming , , , ,  long-lag –, –,  morphological –, , , –,  paradigms , ,  and second-language acquisition , ,  principal parts , , , , , – process morpheme  processing activity –,  see also dual-route processing; language processing Productive Non-inflectional Concatenation  productivity –, , , , , –, t, t and Cognitive Grammar – differential – and diachronic change – and Distributed Morphology (DM) – and Generative Grammar – and Natural Morphology –,  and Relational Morphology – relationship with compositionality – relationship with morphotactic transparency  variations in , – and Word Grammar  see also semi-productivity professional nouns – profile(s) , , n profiling  property mapping –, ,  prosodic affixation , – prosodic hierarchy , –,  Prosodic Morphology – academic significance – morphological issues – role of Optimality Theory – prosodic subcategorization – Prosodic Transfer Hypothesis (PTH) ,  prosodic well-formedness n,  Prosodic Word  proto-morphology ,  prototype, concept/definition of , , n proximity , , – psycholinguistics , , , , , , –, , , , , , –,  compound words – future directions – morpheme position effect – and morpheme representation – morphological effects  morphological priming – overview  relevant data/phenomena – sublexical vs. supralexical theories  transposed-letter effect –


  reactive effort  reading aloud , – realization , , , – default  see also realization rules; realizational theories realization rules , , , –, –,  in creoles ,  organization – types – realizational theories (of morphology) , , , , –, n, –,  distinguished from incremental , , –, , – distinguishing features –, , –,  and Head-driven Phrase Structure Grammar (HPSG) , – and Lexical-Functional Grammar (LFG)  see also inferential-realizational models; lexical-realizational models realized cell(s) , , –,  realized paradigm(s) , , t reanalysis , – and affix telescoping – examples for morphological theory – and grammaticalization – and resegmentation  recipient language – reciprocals ,  Recoverably Deletable Predicates (RDPs) –,  recursivity , t, t redundancy  see also lexical redundancy reduplication(s) –, –, –, , –, ,  full , – iconic –, ,  partial –, –, , –,  prefixal – in sign language –, , – referral generalized , , –,  rule(s) of –, –, , n, ; rejection – regularity t see also irregular words; irregularity Relational Morphology , , , , , – Relevance, Principle of  Representational deficit –,  resegmentation ,  see also reanalysis residues ,  restrictions see constraints resyllabification  rewrite rules , , –, n, –, ,  right-headedness , –, –, –,  Righthand Head Rule (RHR) n rival affixes n, , ,  see also affix competition



Role and Reference Grammar  root-and-pattern morphology , –, , –, –, –,  alternative terminology n in Semitic , –, n,  root(s)  alternant  and Distributed Morphology (DM) –, – identification ,  root-based models – in sign language –, – see also bound roots; root-and-pattern morphology rule blocks –, , , – rejection  rule induction – rules see exponence; lexical redundancy rules; lexical rules; morphophonemics, morphophonemic rules; phonological rules; realization rules; referral; rewrite rules; rule blocks; rule induction; schema(s); Word Formation Rules sample space –,  sandhi , , ,  scale (of morphotactic naturalness/transparency) , t, , – schema(s) constructional –, –, f, , , –, –, – defined n generative vs. relational functions – output-orientation ,  productive/nonproductive – vs. rules – second-order , , –,  semi-specified , – sister – unification –, f schematicity , , ,  scope, semantic – second language acquisition , , , – in children  compound words – derivational morphology – future research – inflectional morphology – order of acquisition , , – stages of –, t segmental autonomy – violations , – segmental dependence –, – self-repair  semantics – of affixation – and first language acquisition ,  and morphology ,  and Paradigm Function Morphology (PFM) , , 




 

semantics (cont.) and the Parallel Architecture , –, f, , –, – relationship with syntax – structuralist view of , ,  and Word Grammar –,  see also compositional/noncompositional semantics; Generative Semantics; interfaces; lexical semantics; morphosemantics; opacity sememes – semi-fusional languages ,  semi-productivity  semiotics –, –,  separability  separable verbal prefixes  Separation Hypothesis , ,  separatist – vs. cumulative  shape conditions ,  sign-based approach/model –,  sign language(s) , , – affixation – classifier constructions –, , ,  compounds – horizontal morphology – morphosyntax – non-manuals , –, ,  and phonology – reduplication – theoretical issues – vertical morphology – sign(s) affixes distinguished from ,  constructions as , n,  morphemes as , , ,  word forms as ,  words not qualifying as – simplification, processes of ,  single-route processing –, n, , n, –, – sister relations – sociopragmatics  sound symbolism –, – Specific Language Impairment (SLI, now DLD) , – speech planning – spell-out , – phrasal – rules of – Split Morphology Hypothesis , –, ,  stem allomorphy , –, –, , , t, –, , –, t, f stem alternation –, , , , ,  stem space , ,  stems in Cognitive Grammar , – in creoles , –

distinguished from affixes ,  distinguished from word forms – introduction rules  in Lexical-Functional Grammar (LFG)/ Head-driven Phrase Structure Grammar (HPSG) , ,  in Natural Morphology ,  in Network Morphology –, ,  in Paradigm Function Morphology (PFM) –, – strong vs. weak  suppletion – variations in formation/adaptation , –, –, – see also internal stem change; stem allomorphy; stem alternation; stem space storage , –, –, n, , – in child language acquisition  vs. computation see under computation minimization – in psycholinguistics –, ,  in second language acquisition , ,  significance in morphological theory  Stratificational Grammar  stress in abbreviations/nicknames  determined by suffix  feet –, , – patterns , , ,  phrasal  primary , , , n shifts , ,  see also syllable(s) structural linguistics ,  structuralism , , –, –, , n American n, –, –, , f antecedents  background – basic issues – European , –, –, f grammatical architecture – influence on later theories ,  and interfaces – (mis)use of term –,  and morphology –, , –,  structures –, – bipolar – component , –,  composite –, –, – formal vs. relational , – symbolic , , – unipolar – sublexical theories  submorpheme  subschemas , –, –, , ,  substitution , –,  affix 


  classes ,  segmental ,  substrate languages , –, , – subtraction –, , ,  see under morphemes; morphology suffixation , , , –, , , –, , , , –,  and Construction Morphology –, – in creoles , ,  in sign language  suffixes derivational n, –, –, n inflectional  superstrate languages , , , , –, –,  supervised machine learning – memory-based – stochastic modelling ,  suppletion , , , , , , , , t, ,  and Canonical Typology ,  paradigmatic  partial  stem – strong vs. weak , t, ,  supralexical theories ,  Surrey Morphology Group ,  syllable(s) and pluralization – reduplication –, – single ,  stressed , , , , , n structure , ,  unstressed  symbolic structures/links –, , –, – synchronic variation , –, , – syncretism , , , , –, , , , –, –, –, , ,  causes ,  defined  effects  gender , , , n patterns of , , , ,  uninflectability t syntactic category/ies , t syntagmatics , , , – syntax  and first language acquisition ,  generative , , –,  lexicalist –, , – and the Parallel Architecture , –, f, –, – relationship with morphology , , , –, –,  (see under interfaces) relationship with semantics – syntax-based models – see also morphosyntax



synthetic languages  system adequacy/congruity ,  and diagrammaticity – and markedness reduction – tagging  tagmemes – template morphology – templates generalized, morpheme-based , – hierarchy-based , – morpheme-specific , – Temporal Self-Organizing Maps (TSOMs) – tense(s) , , – compound ,  inter-lingual differences –,  marking  mismatches  stem variations – see also past tense text cohesion – text linguistics ,  The Emergence of the Unmarked (TETU) –, , , , , – thematic vowels –,  theories of morphology – aims  history , – stem-based ,  syntactocentric –, – taxonomy –, , –, – see also incremental theories; inferential theories; lexical theories; morpheme-based theory; realizational theories; word-based theory tokenization  tone , , , , n trade names  transformational rules , , , –, , –, – transitivity  transparency –,  morphosemantic ,  morphotactic , –, ,  scales of – semantic –, –, , , , , , –, –,  transposed-letter effect –,  Tree Adjoining Grammar  truncation(s) , , , –, –, , , –,  in Japanese , ,  see also abbreviations two-level morphology – typology see Canonical Typology; morphological typology; linguistic typology U-shaped development – unambiguity –




 

uncertainty see entropy underspecification , , , –, , , , , , , , t unification , ,  in Cognitive Grammar – in Construction Morphology –, , ,  in Relational Morphology ,  uniformity, trend towards , – uninflectability t Unitary Base Hypothesis (UBH)  Unitary Output Hypothesis (UOH)  units, linguistic –, –,  abstraction , ,  combination  emergence/overlap f,  symbolic ,  univerbation  Universal Grammar (UG) , , – features specified by – impoverishing  universals , , ,  unsupervised machine learning – usage-based approach , ,  and Cognitive Grammar , ,  and Construction Morphology – and word formation , ,  valency ,  changes in , ,  value(s) canonical , –, , f, – see also feature–value combinations variables canonical/non-canonical –, –, – in Construction Morphology , , ,  in Relational Morphology –, –, , ,  variants –, , , ,  variation(s) –, –, – assumptions/premises – in compounding – derivational – inflectional – Verbal Theme  verbalizer – violation paradigms – vocabulary – insertion –, f, ,  item , –, , –, ,  mathematical vs. natural language  null items  and roots  second-language ,  use of term in morphological theories , , ,  vowel deletion  vowel harmony , , 

Wernicke’s aphasia ,  whole-word frequency , –, – whole-word representation , – Williams syndrome ,  Word and Paradigm Morphology –, , n, , –, – arguments for – and Construction Morphology , , ,  ‘Item and Pattern’ model – models/classification – and Word Grammar , ,  word-based theories , –, , –, – and Construction Morphology –, –, ,  and Distributed Morphology (DM) – vs. morpheme-based see under morpheme-based theories word coding – word-formation , , –, –, , – aims of theory – changes in – distinguished from inflection – in Generative Grammar (WFGG) , –, , , –,  in Head-driven Phrase Structure Grammar (HPSG) – interface with phonology – interface with syntax – and lexical semantics – in Lexical-Functional Grammar (LFG) – in Natural Morphology , , –, –, – paradigmatic ,  Structuralist analysis – templatic n theoretical/structural analysis – see also blocking; concatenative ideal; headedness; productivity; Word Formation Rules; word-forms Word Formation Rules (WFRs) –, –, , –, , , , –, , –, –, ,  constraints on , ,  word-formedness see analyzability word-forms canonical , n definition/identification , , ,  derivational  relations between  Word Grammar , , – background – basic issues – components – and interfaces – word graphs – computational models –, ,  and second language acquisition –, –, – wordhood, criteria for , , ,  see also word(s), (problems of ) definition

OUP CORRECTED PROOF – FINAL, 27/11/2018, SPi

word(s) grammatical (morphosyntactic) – lexicalist view of – order  phonological – primary vs. secondary – (problems of) definition , –, –

working memory , ,  Wug experiment ,  zero-derivation , ,  zero morphemes –, , –,  see also affix(es), zero (null) Zipf’s Law –



OUP CORRECTED PROOF – FINAL, 26/11/2018, SPi

OXFORD HANDBOOKS IN LINGUISTICS

THE OXFORD HANDBOOK OF AFRICAN AMERICAN LANGUAGE Edited by Sonja Lanehart

THE OXFORD HANDBOOK OF APPLIED LINGUISTICS Second edition Edited by Robert B. Kaplan

THE OXFORD HANDBOOK OF ARABIC LINGUISTICS Edited by Jonathan Owens

THE OXFORD HANDBOOK OF CASE Edited by Andrej Malchukov and Andrew Spencer

THE OXFORD HANDBOOK OF CHINESE LINGUISTICS Edited by William S.-Y. Wang and Chaofen Sun

THE OXFORD HANDBOOK OF COGNITIVE LINGUISTICS Edited by Dirk Geeraerts and Hubert Cuyckens

THE OXFORD HANDBOOK OF COMPARATIVE SYNTAX Edited by Guglielmo Cinque and Richard S. Kayne

THE OXFORD HANDBOOK OF COMPOSITIONALITY Edited by Markus Werning, Wolfram Hinzen, and Edouard Machery

THE OXFORD HANDBOOK OF COMPOUNDING Edited by Rochelle Lieber and Pavol Štekauer

THE OXFORD HANDBOOK OF COMPUTATIONAL LINGUISTICS Edited by Ruslan Mitkov

THE OXFORD HANDBOOK OF CONSTRUCTION GRAMMAR Edited by Thomas Hoffmann and Graeme Trousdale

THE OXFORD HANDBOOK OF CORPUS PHONOLOGY Edited by Jacques Durand, Ulrike Gut, and Gjert Kristoffersen

THE OXFORD HANDBOOK OF DERIVATIONAL MORPHOLOGY Edited by Rochelle Lieber and Pavol Štekauer

THE OXFORD HANDBOOK OF DEVELOPMENTAL LINGUISTICS Edited by Jeffrey Lidz, William Snyder, and Joe Pater

THE OXFORD HANDBOOK OF ELLIPSIS Edited by Jeroen van Craenenbroeck and Tanja Temmerman

THE OXFORD HANDBOOK OF ERGATIVITY Edited by Jessica Coon, Diane Massam, and Lisa deMena Travis

THE OXFORD HANDBOOK OF EVIDENTIALITY Edited by Alexandra Y. Aikhenvald

THE OXFORD HANDBOOK OF GRAMMATICALIZATION Edited by Heiko Narrog and Bernd Heine


THE OXFORD HANDBOOK OF HISTORICAL PHONOLOGY Edited by Patrick Honeybone and Joseph Salmons

THE OXFORD HANDBOOK OF THE HISTORY OF ENGLISH Edited by Terttu Nevalainen and Elizabeth Closs Traugott

THE OXFORD HANDBOOK OF THE HISTORY OF LINGUISTICS Edited by Keith Allan

THE OXFORD HANDBOOK OF INFLECTION Edited by Matthew Baerman

THE OXFORD HANDBOOK OF INFORMATION STRUCTURE Edited by Caroline Féry and Shinichiro Ishihara

THE OXFORD HANDBOOK OF JAPANESE LINGUISTICS Edited by Shigeru Miyagawa and Mamoru Saito

THE OXFORD HANDBOOK OF LABORATORY PHONOLOGY Edited by Abigail C. Cohn, Cécile Fougeron, and Marie K. Huffman

THE OXFORD HANDBOOK OF LANGUAGE AND LAW Edited by Peter Tiersma and Lawrence M. Solan

THE OXFORD HANDBOOK OF LANGUAGE EVOLUTION Edited by Maggie Tallerman and Kathleen Gibson

THE OXFORD HANDBOOK OF LEXICOGRAPHY Edited by Philip Durkin

THE OXFORD HANDBOOK OF LINGUISTIC ANALYSIS Second edition Edited by Bernd Heine and Heiko Narrog

THE OXFORD HANDBOOK OF LINGUISTIC FIELDWORK Edited by Nicholas Thieberger

THE OXFORD HANDBOOK OF LINGUISTIC INTERFACES Edited by Gillian Ramchand and Charles Reiss

THE OXFORD HANDBOOK OF LINGUISTIC MINIMALISM Edited by Cedric Boeckx

THE OXFORD HANDBOOK OF LINGUISTIC TYPOLOGY Edited by Jae Jung Song

THE OXFORD HANDBOOK OF LYING Edited by Jörg Meibauer

THE OXFORD HANDBOOK OF MODALITY AND MOOD Edited by Jan Nuyts and Johan van der Auwera

THE OXFORD HANDBOOK OF MORPHOLOGICAL THEORY Edited by Jenny Audring and Francesca Masini

THE OXFORD HANDBOOK OF NAMES AND NAMING Edited by Carole Hough


THE OXFORD HANDBOOK OF PERSIAN LINGUISTICS Edited by Anousha Sedighi and Pouneh Shabani-Jadidi

THE OXFORD HANDBOOK OF POLYSYNTHESIS Edited by Michael Fortescue, Marianne Mithun, and Nicholas Evans

THE OXFORD HANDBOOK OF PRAGMATICS Edited by Yan Huang

THE OXFORD HANDBOOK OF SOCIOLINGUISTICS Second edition Edited by Robert Bayley, Richard Cameron, and Ceil Lucas

THE OXFORD HANDBOOK OF TABOO WORDS AND LANGUAGE Edited by Keith Allan

THE OXFORD HANDBOOK OF TENSE AND ASPECT Edited by Robert I. Binnick

THE OXFORD HANDBOOK OF THE WORD Edited by John R. Taylor

THE OXFORD HANDBOOK OF TRANSLATION STUDIES Edited by Kirsten Malmkjaer and Kevin Windle

THE OXFORD HANDBOOK OF UNIVERSAL GRAMMAR Edited by Ian Roberts